So, I'll talk about fully homomorphic encryption without modulus switching from classical GapSVP, and we only have five minutes until lunch, and I'm kind of hungry, so I'll try to make it quick. So let's get going. I usually think about homomorphic encryption in relation to the problem of outsourcing computation: we have our smartphone, and we want to run all sorts of apps or computational tasks on it, but the smartphone is not really powerful enough to run all these things. Web search, for example: we're not going to crawl the web with our smartphone, so we outsource this computation to a cloud server. This can be modeled by having some input X on the smartphone and some function F that our cloud server knows how to compute, and we want to obtain F of X. The way we usually do it is just by sending our input, say our location and our destination, to the server. The server computes the route and sends us back F of X, and this is great, this is how things work. But what if we want our input to be private? For example, we don't want the server, which could be nosy, to know our location and our destination. So our goal is to outsource computation in a private manner, and we want to allow the server to compute for us in a blindfolded way, without knowing what it's computing. So instead of sending our input in the clear, we're just going to encrypt it and send it to the server, and this will guarantee, just by the security of the encryption scheme, that the server learns nothing about the input. But that kind of defeats the purpose, because how is the server going to compute F of X if it doesn't even know X? So the server is not going to compute F of X directly; rather, we're going to require that the server computes some value Y such that this value can later be decrypted on the smartphone to obtain F of X.
So this Y, in a sense, is an encryption of the value F of X, and this is what homomorphic encryption is all about. Homomorphic encryption is a process that allows you to have an encryption scheme which is semantically secure, and yet, given the description of a function F and encryptions of all the bits of your input, X1 up to Xn, a server which doesn't know what's encrypted inside these ciphertexts can still compute an encryption of F of X. And we want to achieve this for any possible function F, and it's fairly straightforward to see that it's sufficient to get it only for binary addition and multiplication, because we can just write our function as an arithmetic circuit over GF(2) and evaluate it gate by gate. So if we get homomorphic evaluation of addition and multiplication, then we get homomorphic evaluation of any function. So this is our goal. So let me tell you what we know. We saw the old days, and like Nigel said, the old days are 2009 until 2011, so really ancient history. Gentry showed the first candidate fully homomorphic scheme back in 2009, and he actually showed more than that: he showed an outline, or a blueprint, for how to achieve fully homomorphic encryption. And the first follow-ups followed this blueprint and instantiated some of the building blocks under different assumptions. An additional scheme by Gentry and Halevi, which they called chimeric fully homomorphic encryption, showed how to deviate from this blueprint and remove one of the assumptions, or building blocks, that Gentry originally had, but at the cost of really complicating the scheme. And I really think that we don't fully understand this chimeric fully homomorphic encryption scheme; it has some nice ideas, but it's very complicated. An additional line of work dealt with trying to make this thing more efficient, closer to being useful in the real world.
So these were the old days, and then there are the newer schemes, starting with a work with Vinod Vaikuntanathan where we showed a scheme based on the learning with errors assumption. Learning with errors is related to the problem of approximating short vectors in lattices, and this is sort of a better assumption than the assumptions that were used in previous schemes. In particular, you don't need to assume hardness on ideal lattices, which was, I guess, implicitly assumed to be essential in the previous schemes; this shows that you don't really need this hardness on ideals. And perhaps as important, the scheme has a cleaner presentation. It's simpler to explain, and this also led to some efficiency improvements. Shortly after we presented this scheme, there was an improvement in a work with Gentry and Vaikuntanathan, which was the basis for the implementation that Nigel just presented. And it used this new magic called modulus switching. I'll say what modulus switching is later, but just like that, without doing much, it really improved the performance of the scheme. So you got a better assumption: the assumption was still learning with errors, but with much better parameters. And you could get something that you couldn't get before, namely leveled homomorphism without bootstrapping; if you don't know what that is, it's sufficient to know that you couldn't get it before and you can get it here. And also, somewhat unrelated to modulus switching, there was this notion of batching, again, that Nigel mentioned, which led to a sequence of works on improving the efficiency, one of which we just saw. So I'm not going to care about efficiency. All I care about is showing you the simplest homomorphic encryption scheme that I can. And again, my message is about this modulus switching, this magic that gives us things for free. What I want to say is this.
So modulus switching is a red herring, in a sense. You can get a scheme that gives you everything that modulus switching gives you, and actually a little more, without doing modulus switching at all. If your scheme works at the right scale, or rather, if you bring your scheme to a state where it has no scale, then you can get everything that modulus switching gives you without really doing it. So you get the same, and a little more in fact, with less headache. So let's see how we do it. I'll start by describing the scheme in this BV paper, and the scheme works as follows. Your secret key is going to be an n-dimensional vector over ZQ, where Q is a modulus that is going to be unspecified for most of the talk. And actually, I don't want to think about ZQ in the rigorous mathematical way, as a ring of integers modulo some ideal or something. Just think of ZQ as the integers in the segment minus Q over 2 up to plus Q over 2. This is actually going to give a better intuition as to what's going on. Our ciphertext is also going to be a vector over the same space. And the property of the encryption scheme is that if I take the inner product of a ciphertext and the secret key, then what I actually get is the message M that is encrypted by the ciphertext C, plus two times a small noise, so a small even number, plus Q, my modulus, times an integer I. So if you take this thing mod Q, then what you get is that this inner product equals my message M plus a small even number. So how do I get ciphertexts that adhere to this equation? It doesn't really matter. If you saw it before, then you know; if you didn't, just believe me that this is actually possible, and you can even do it in a public-key way. So you don't need to know the secret key in order to encrypt a message M such that this equation holds. So encryption is possible, and decryption really follows immediately from this equation.
So in order to decrypt a ciphertext C, I'm going to compute... Is this thing working? Can you even see me pointing? No, I think it's kind of dead. The other one? Oh, this thing? Okay, good. We're in business. Okay. So in order to decrypt, that's actually more convenient, why didn't I start with this, I'm going to compute this inner product, C inner product with S, and take it mod Q. This is going to give me my message M, which is just one bit, either 0 or 1, plus 2 times a small noise. And I'm going to take that mod 2, which will give me my original message M. And this is true so long as my noise is not too big, so as not to cause a wraparound over Q. So as long as the ratio between the absolute value of the noise and Q is smaller than one quarter, I get correct decryption. And indeed, I'm going to set my encryption algorithm so that my initial noise is smaller than some bound B, which is much, much smaller than Q; it's going to be alpha times Q for a tiny alpha. But this noise bound is actually going to grow as I do more homomorphic operations, so we need to make sure that we keep our noise smaller than this bound. The scheme I just presented is going to be secure under the LWE assumption with dimension N, modulus Q, and noise rate alpha. We don't really care about what these parameters mean; the only thing I want to say is that the bigger alpha is, the more secure the scheme becomes, because you add more noise, so things become more secure. Okay, how do we get homomorphism? So remember, we need to get homomorphism with respect to addition mod 2 and with respect to multiplication mod 2. Additive homomorphism, I mean, if you saw any talk about this, it always looks the same. You just take two ciphertexts C1 and C2, where C1 encrypts M1 and C2 encrypts M2. You add them together, and obviously from this equation you're going to get an encryption of M1 plus M2. So there's not much to do with addition, and as always, the challenge is multiplication.
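The invariant, decryption, and additive homomorphism just described can be sketched in a few lines of Python. This is a toy, insecure illustration: the parameters are made up, the helper names (`keygen`, `encrypt`, `add`, `decrypt`) are mine, and encryption here is done with the secret key for brevity, whereas the actual scheme is public-key.

```python
import random

n, q = 8, 2**15          # toy parameters, far too small to be secure
B = 10                   # initial noise bound, much smaller than q

def keygen():
    # secret key: n random elements of Z_q, plus a trailing 1 so the
    # invariant <c, s> = m + 2e + q*I can be written as one inner product
    return [random.randrange(q) for _ in range(n)] + [1]

def encrypt(s, m):
    # symmetric-key sketch: pick the first n coordinates at random and
    # choose the last one so that <c, s> = m + 2e (mod q)
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-B, B + 1)
    last = (m + 2*e - sum(ai*si for ai, si in zip(a, s[:n]))) % q
    return a + [last]

def add(c1, c2):
    # coordinate-wise addition mod q: <c1+c2, s> = (m1+m2) + 2(e1+e2) (mod q)
    return [(x + y) % q for x, y in zip(c1, c2)]

def decrypt(s, c):
    # centered reduction mod q gives m + 2e, then reduce mod 2
    r = sum(ci*si for ci, si in zip(c, s)) % q
    if r > q // 2:
        r -= q
    return r % 2

s = keygen()
assert decrypt(s, encrypt(s, 1)) == 1
assert decrypt(s, add(encrypt(s, 1), encrypt(s, 1))) == 0   # 1 + 1 = 0 mod 2
```

Decryption stays correct exactly as long as the accumulated noise keeps |m + 2e| below q/2, which is the wraparound condition from the talk.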
And multiplicative homomorphism is done by, well, multiplying the ciphertexts, and specifically we're going to use the tensor product. So we're going to take the tensor product of these two ciphertexts, mod Q. The tensor product is the vector that contains all of the cross terms of the ciphertexts C1 and C2. Each of them has n elements, and I'm going to take all the possible cross terms, so I get a vector of dimension n squared. And I claim that this very long vector actually encrypts the message M1 times M2. Why is that true? I'm going to say that if you decrypt this ciphertext using an appropriately long secret key, namely the original secret key tensored with itself, then you're going to get M1 times M2. So this doesn't really go all the way, because we got something that encrypts M1 times M2 but under a different secret key. However, in this BV paper it is shown how to get back to the original secret key, so this is a solved problem and I'm not going to waste time on it. So why is this true? When I compute the inner product of the tensored ciphertext with the tensored secret key, then just by the definition of the tensor product, I get the product of the two inner products: C1 inner product with S, times C2 inner product with S. So I'm just going to assign this expression back into the equation at the bottom, and I get that, mod Q, this product equals M1 plus 2 times E1, times M2 plus 2 times E2, and just by opening the parentheses, I get M1 times M2 plus 2 times something that is on the order of E1 times E2. So indeed, if I apply my decryption algorithm, I will get M1 times M2, but my noise will grow. The noise magnitude used to be B, but now it's going to be something like B squared. And this can be okay in the beginning, when B is small, but if I do it a number of times, then my B is really going to blow up.
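The tensor-product multiplication can be sketched the same way, a self-contained toy with invented parameters and names. Decrypting the tensored ciphertext under the tensored secret key recovers M1 times M2, exactly because the inner product factors into the two original inner products (key switching back to the original key, the part the BV paper solves, is omitted here).

```python
import random

n, q, B = 4, 2**20, 4    # toy parameters; q is large enough that B^2-sized noise still decrypts

def keygen():
    return [random.randrange(q) for _ in range(n)] + [1]

def encrypt(s, m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-B, B + 1)
    last = (m + 2*e - sum(ai*si for ai, si in zip(a, s[:n]))) % q
    return a + [last]

def tensor(u, v):
    # all cross terms u_i * v_j: two vectors of dimension n+1 become one
    # vector of dimension (n+1)^2
    return [(x * y) % q for x in u for y in v]

def decrypt(s, c):
    r = sum(ci*si for ci, si in zip(c, s)) % q
    if r > q // 2:
        r -= q
    return r % 2

s = keygen()
s2 = tensor(s, s)                    # the tensored secret key
for m1 in (0, 1):
    for m2 in (0, 1):
        c = tensor(encrypt(s, m1), encrypt(s, m2))
        assert decrypt(s2, c) == m1 * m2
```

The noise after one multiplication is on the order of B squared, which is why repeated multiplication blows up as B to the 2 to the D without further tricks.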
If I do it D times, and each time the noise squares, then after D times I'm going to get to a noise that is something like B to the 2 to the D, and it's not going to be long until I lose decryptability. So this is where modulus switching comes into play, and modulus switching actually says that you can bring the noise down simply by dividing the entire ciphertext by a factor B. So we had noise B, we went up to noise B squared, and now we're just going to divide everything by B, and the noise will go back down to B. So this is kind of a stupid idea, but surprisingly it works. I'm going to start with a ciphertext that lives over ZQ and has noise bound B squared; this is what we got at the end of the multiplication. I'm going to divide by B, and I'm going to get a ciphertext that now lives in Z of Q over B. So my modulus actually got smaller, and this is why we call it modulus switching. But the noise also got divided by B, so now my noise bound is just going to be B. So I went back to the original noise bound, but at the cost of reducing the modulus. And of course we need to be careful that this division does not harm the message bit; we don't want to lose the information here. This is why a special form of rounding is used, in order to make sure that your message is preserved. So let's see how it helps us, how the noise and the modulus evolve as we perform multiplications using modulus switching. We start with noise B and modulus Q. After one multiplication, we go back to noise B, but our modulus goes down to Q over B, and then we keep going. Our noise will always remain B after each switch, but after D multiplications our modulus goes down to Q over B to the D. Which means that if we want decryptability, we want B to the D plus 1 to be smaller than Q over 4. And this may still be pretty bad, but it's much better than what we had before. Before we had B to the 2 to the D, and now we only have B to the D.
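The switching step itself can be sketched as follows. This is my own toy rendering, not the paper's pseudocode: the "special form of rounding" is realized by rounding each scaled coordinate to the nearest integer of the same parity, so the message bit survives, and I assume a secret key with small entries (here ternary), which modulus switching needs and the talk glosses over.

```python
import random

n = 8
q = 2**20
B = 2**5                  # we will switch from modulus q down to q/B

def keygen():
    # modulus switching needs a short secret key; here ternary entries, plus a trailing 1
    return [random.choice((-1, 0, 1)) for _ in range(n)] + [1]

def encrypt(s, m, modulus, noise):
    a = [random.randrange(modulus) for _ in range(n)]
    e = random.randrange(-noise, noise + 1)
    last = (m + 2*e - sum(ai*si for ai, si in zip(a, s[:n]))) % modulus
    return a + [last]

def decrypt(s, c, modulus):
    r = sum(ci*si for ci, si in zip(c, s)) % modulus
    if r > modulus // 2:
        r -= modulus
    return r % 2

def mod_switch(c, q_old, q_new):
    # scale every coordinate by q_new/q_old, rounding to the nearest
    # integer of the SAME parity so the message bit is preserved
    out = []
    for x in c:
        y = (x * q_new + q_old // 2) // q_old   # round to nearest
        if y % 2 != x % 2:
            y += 1                              # fix the parity
        out.append(y % q_new)
    return out

s = keygen()
for m in (0, 1):
    c = encrypt(s, m, q, noise=2**10)       # a fairly noisy ciphertext mod q
    c2 = mod_switch(c, q, q // B)           # now lives mod q/B, noise shrunk by ~B
    assert decrypt(s, c2, q // B) == m
```

The noise really does shrink by roughly the factor B, at the price of a small additive rounding error proportional to the l1 norm of the secret key, which is why the short key matters.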
So this is a great improvement. But I still wasn't happy with modulus switching, and I had two reasons why it bothered me. First of all, modulus switching is scale dependent. If I scale both B and Q down by, say, a factor of 2, then things actually improve, because my homomorphic properties depend on this expression, and if I scale both B and Q down, then this expression improves. And this shouldn't happen, because if I take a step back, then B and Q just look smaller from here, but the scheme remained the same; it shouldn't change. So aesthetically it seems like it shouldn't matter. The second reason, which is perhaps related, is that thinking about it, what modulus switching really does is this: nothing. No, really. I might as well have added another scaling factor during the tensoring process. So instead of taking C1 tensor C2, I could have multiplied it by some tau that would reflect the fact that I'm scaling down my ciphertext. So modulus switching itself shouldn't really buy me anything, and the point is that if I could only get to the correct scale, then the scaling factor should have been 1. And this is what I should aspire to. So the solution is a scale-independent fully homomorphic encryption, which looks like this. Not to be scared: I'm going to compare with our previous scheme, and this is just the previous scheme with the ciphertext divided by Q over 2. Actually, in the paper in the proceedings I got overzealous and divided by Q and not Q over 2, which gives similar results, but now I think that Q over 2 is sort of the right term. So this is what I'm going to show. I'm dividing by Q over 2, and what am I getting? Rather than having my ciphertext be a vector of integers between minus Q over 2 and Q over 2, now it's going to be a vector of real numbers, or rational numbers, in the segment minus 1 to 1. So it's going to be a vector of elements of absolute value at most 1. And now, looking at the ciphertext, let's start from the end.
So Q times I becomes 2 times I, because I'm dividing by Q over 2. The small noise now becomes a much tinier noise; it's going to be an epsilon that is much smaller than 1. The initial noise is going to be proportional to this alpha factor, and I'm going to get decryptability so long as this epsilon is smaller than one half. And you may wonder why my M here did not get scaled down. So we know that we can divide without affecting the message bit; I'm not going to get into that, but it's not really an issue. You can do that without affecting your message. So this is the scale-independent fully homomorphic encryption, and the hardness assumption is essentially the same hardness assumption as before, because we didn't do anything; we just took our ciphertext and divided it by Q over 2. So why do I say that this scheme actually makes multiplicative homomorphism easier? Multiplicative homomorphism is again going to be a tensor product of the two ciphertexts, now mod 2. And again, I argue that when I decrypt this tensored ciphertext using the tensored secret key, I get an encryption of M1 times M2. So I'm going to start by breaking up the inner product in the same way, and again assigning the expressions from above into these parentheses. And it's going to be very tempting to say, well, because this is mod 2, I can just ignore this 2 times I1 and 2 times I2, but actually modular arithmetic does not work the same way for reals as it does for integers. For example, one half mod 2, times 2 mod 2, does not equal 1 mod 2. So we can't really do that. We need to carry these terms around and just open the parentheses like this. And what you get is indeed M1 times M2, plus epsilon 1 times M2 plus 2 times I2, plus a symmetric term with epsilon 2, plus epsilon 1 times epsilon 2, all of this mod 2. So we indeed get M1 times M2, but let's see how our noise got affected.
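The pitfall with modular arithmetic over the rationals is easy to check concretely with Python's exact `fractions` type; this is just the talk's example, one half mod 2 times 2 mod 2 versus the reduction of the product.

```python
from fractions import Fraction

half, two = Fraction(1, 2), Fraction(2)

# reducing each factor mod 2 first and then multiplying...
reduced_first = (half % 2) * (two % 2)   # (1/2) * 0 = 0
# ...differs from multiplying first and then reducing mod 2
reduced_after = (half * two) % 2         # 1 % 2 = 1

assert reduced_first != reduced_after
```

This is exactly why the 2 times I1 and 2 times I2 terms cannot simply be dropped under the mod-2 tensoring and have to be carried through the expansion.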
So first of all, this term epsilon 1 times epsilon 2 is now on the order of something like alpha squared. So this is tiny; this is not like before, where this term squared things and made things worse. Here this term is not going to be significant at all; rather, this other term is going to be the more meaningful one. And we have something like alpha times the absolute value of M plus 2 times I. Now, in order to bound this M plus 2 times I, we just go back to this equation and see that M plus 2 times I is more or less just the absolute value of this inner product. And since all the elements of the vector C are between minus 1 and 1, this thing is going to be smaller than the L1 norm of our secret key vector S. So this thing is smaller than alpha times the L1 norm of S. And we get that our noise at every point blows up by this term, which is still not good enough, because our secret key is a vector over ZQ, so its elements are between minus Q over 2 and Q over 2, and the L1 norm is something like n times Q. But this is somewhat easy to fix using a known trick. Instead of looking at S as n elements over ZQ, I'm going to decompose it into bits. So my new S is going to be just n log Q bits. I have a slide on that, but not enough time to show it. It is possible to decompose my secret key into bits and, again, make a corresponding change in the ciphertext. So my ciphertext now is also going to be of dimension n log Q, but nothing else really changes. And once I represent my secret key like that, I get that the L1 norm is smaller than n log Q. And what you get is that your noise blows up by a factor that is something like n log Q, or, since Q is at most 2 to the n, your noise blows up by at most n squared, regardless of the scale. So if you do it d times, then your noise will blow up by a multiplicative factor of something like n to the d. And this allows us to use Gentry's bootstrapping and get fully homomorphic encryption.
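The bit-decomposition trick is easy to demonstrate on its own: split the secret key into bits and apply the matching powers-of-two transformation to the ciphertext, and inner products are preserved while the key's l1 norm drops to at most n log Q. A toy sketch (function names are mine, and for simplicity the key entries are taken in the range 0 to Q rather than centered):

```python
import random

n, q = 4, 2**10
logq = q.bit_length() - 1     # log Q = 10 here

def bit_decomp(s):
    # each of the n entries of s splits into its log q bits, so the new
    # key has n*log q entries, all 0 or 1: l1 norm at most n*log q
    return [(x >> j) & 1 for x in s for j in range(logq)]

def powers_of_two(c):
    # the matching transformation on the ciphertext side: each entry c_i
    # is replaced by c_i, 2*c_i, 4*c_i, ... so inner products are preserved
    return [(x << j) % q for x in c for j in range(logq)]

s = [random.randrange(q) for _ in range(n)]
c = [random.randrange(q) for _ in range(n)]

lhs = sum(a*b for a, b in zip(powers_of_two(c), bit_decomp(s))) % q
rhs = sum(a*b for a, b in zip(c, s)) % q
assert lhs == rhs                           # same inner product mod q
assert sum(bit_decomp(s)) <= n * logq       # small l1 norm
```

So the dimension grows from n to n log Q, but nothing else about the scheme changes, and the noise growth per multiplication is bounded by n log Q instead of n times Q.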
So bootstrapping essentially means that it's sufficient to evaluate circuits of depth something like log n. And this means that, in order to allow decryptability at this depth, your alpha needs to be something like n to the minus O of log n. And this is regardless of the Q that you're using. So regardless of Q, as long as this is your alpha, you're going to get full homomorphism. In BGV, you could get something similar to that, but only for very special values of Q. And since we can do it for any Q, we can actually get classical hardness reductions for some lattice problems that were previously only known in a quantum way, because taking a large enough Q actually gives you a classical, as opposed to a quantum, hardness reduction. So scale independence gives you fully homomorphic encryption without modulus switching. The homomorphic properties are independent of Q, but there is some underlying Q that governs the security properties. All the properties of BGV, and a little more, extend, and hopefully it's somewhat simpler to understand. I will also refer you to a blog post with Boaz Barak that tries to describe the full scheme based on this paper. When people ask me what's a good place to start on homomorphic encryption, this is where I point them, and I even created a short URL for it. And this is really the end of Crypto. Thank you, and I'm going to put the URLs back up.