Homomorphic encryption. My name is Shai Halevi, from IBM. It's a three-hour tutorial, but I will take a break after something like an hour and 15 minutes, so at least I can have a little bit of rest. So let's start. The thing that we're trying to do here is compute on encrypted data. So for example, it would be nice if I could encrypt my data before I send it to the cloud and then still allow the cloud to do things like search that data or edit it or sort it or things of that nature, without having to send the data back to me from the cloud every time I want to do any operation. I want the cloud to be able to work on it while it's in encrypted form. Similarly, wouldn't it be nice if I were able to encrypt the queries that I send to the cloud? So now the cloud has its own data, but I have my queries and I don't want anybody to know what they are. So I'm going to send them encrypted to the cloud. The cloud would then be able to process them even though they're encrypted and return the answer to me, also encrypted, and then I can decrypt it and recover what the answer was. So here's an example. Suppose I wanted the driving directions from LAX to the University of California, Santa Barbara. So I have my query, which is just a string really. I have my secret key. I can encrypt my query and send it to the cloud. And the server on the other end, all it sees is a string of gibberish and it doesn't really know what it is. Nonetheless, I want the server to be able to work on this string of gibberish and produce another string of gibberish, which looks just as unintelligible as the first one, but such that if I then decrypt that latter string, I actually get my answer. So in this case, the answer was: I don't really know about this University of California that you're talking about, but I do know about Santa Barbara, so here's the map. 
But whatever it is that the cloud would have returned to me if it was working on the data in the clear, this is the thing that I want to get at the end of this process when I decrypt. So the way this tutorial is going to be organized, there are two parts, with a 10-minute break in between. The first part is quite high level. You'll see a fair number of pictures and animations on the slides and things like that. This will essentially cover the blueprint from the original Gentry 2009 paper. Then there will be a break, and then the second part, which is more algebraic, with fewer pictures and a few more formulas on the slides. In that part I will cover the newer constructions, essentially the constructions that happened from April this year on, so the last three months or so. So before we start, let's set up some notation so that we have things to talk about. We're talking about an encryption scheme. An encryption scheme has three components: key generation, then encryption, and then decryption. Throughout the tutorial, the plaintext space is always going to be zero and one. So we're always going to encrypt individual bits. Each ciphertext will contain a single plaintext bit in it. The key generation just gives us a public and a secret key. The encryption takes the public key and a bit and gives us a ciphertext. The decryption takes the ciphertext and the secret key and recovers the bit in it. The notion of security that we will be interested in is semantic security. This is the Goldwasser-Micali notion, which essentially means that the distribution of public keys and encryptions of zero is indistinguishable from the distribution of public keys and encryptions of one. By indistinguishable, I mean by an efficient algorithm, a polynomial-time algorithm. Homomorphic encryption, also called fully homomorphic encryption, is the same thing, except now we have one additional procedure. It's an evaluation procedure. 
The evaluation procedure takes a ciphertext, or maybe a vector of ciphertexts, a description of a function f, and the public key. And it gives us back another ciphertext, such that if we had some string x encrypted inside the original ciphertext c, and we got the evaluated ciphertext c star, then when we decrypt c star, we're going to see f applied to x. So again, we encrypt x, we process it with the description of f, we get another ciphertext, we decrypt that other ciphertext c star, and we get f of x. That's what we mean by evaluating this function on the encrypted data. Some points that I want to make: I do not require that c star would look like a fresh ciphertext. Maybe if I take f of x and encrypt it from scratch, I'm going to get something that looks different, or has a different distribution, than the distribution that I get by encrypting x and then evaluating f. The only thing I care about is that it decrypts to the right value. The property that I do care deeply about is this property of compactness, which essentially means that decrypting c star should be easier than computing f. Because the solution that I want to avoid is that the evaluation procedure, all it's going to do is append the description of f to the ciphertext and then tell the decryption procedure: well, if you see an f after you finish decrypting, evaluate that f for me. That's not a very meaningful notion, and we want to rule it out. Formally, the condition that we will set is that the length of c star is independent of the complexity of f. So even if we have a very complex function f, we want to have a short ciphertext at the end of it. That's called the compactness property, and this is how we are going to formulate the requirement that decrypting is easier. 
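To make the interface concrete, here's a minimal Python sketch of the four procedures and the correctness contract. The "identity" scheme below is purely a stand-in with no security whatsoever; it only illustrates the shape of key generation, encryption, decryption, and evaluation, and the names are my own, not from any real library.

```python
# A trivially insecure stand-in scheme, just to show the FHE contract:
# Dec(sk, Eval(pk, f, [Enc(pk, x1), ...])) == f(x1, ...).

def keygen():
    return ("pk", "sk")                      # placeholder keys

def enc(pk, bit):
    return [bit]                             # no real encryption here

def dec(sk, ct):
    return ct[0]

def evaluate(pk, f, cts):
    # A real Eval works blindly on ciphertexts; this stand-in can
    # "see" the bits only because the scheme is the identity map.
    return [f(*(ct[0] for ct in cts))]

pk, sk = keygen()
cts = [enc(pk, b) for b in (1, 0, 1)]
c_star = evaluate(pk, lambda a, b, c: a ^ b ^ c, cts)
assert dec(sk, c_star) == 1 ^ 0 ^ 1          # decrypts to f(x)
```

Note that compactness is visible even in this toy contract: c star is a single-element list regardless of how complicated f is.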
But as we get to the constructions, you'll see that decryption of an evaluated ciphertext is going to be essentially the same as decryption of a fresh ciphertext, even though their distributions are not exactly the same. So the possibility of having an encryption scheme like that was observed shortly after the invention of public-key cryptography, by Rivest, Adleman, and Dertouzos. They proposed the notion of privacy homomorphisms. And the idea is this. Well, we have our space of plaintexts, and we have our space of ciphertexts, and we have those one-to-many mappings, maybe, in one direction. That's the encryption. And many-to-one mappings in the opposite direction. That's the decryption. And there is some operation that we want to do on plaintexts. So we would like to have an equivalent operation that we can do on ciphertexts, such that if we take x1 and x2 and encrypt them, get c1 and c2, do this strange operation on the ciphertexts, get another ciphertext d, and then decrypt it, we will get some y which is equal to x1 times x2, or whatever that star operation is supposed to be. And actually we have examples of encryption schemes that have these kinds of privacy homomorphisms. So if you think of raw RSA, where x is your plaintext and x to the power e mod n is your ciphertext, and e and n are the public exponent and public modulus, then if you take two ciphertexts and multiply them mod n, what you get is an encryption of x1 times x2. Similarly, in the Goldwasser-Micali quadratic residuosity encryption scheme, if you take an encryption of b1 and an encryption of b2 and multiply them mod n, what you get is an encryption of the exclusive or of b1 and b2. So we have a few examples of those. And actually we have more. For example, El Gamal is homomorphic with respect to multiplication mod p. The Paillier cryptosystem is homomorphic with respect to addition mod n. There is a cryptosystem from a few years ago by Boneh, Goh, and Nissim, which can do quadratic polynomials, modulo p. 
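As a quick sanity check of the raw RSA homomorphism just described, here is a toy Python example. The primes and exponent are tiny illustrative values, nowhere near secure, and "raw" means no padding, which is exactly what makes the multiplicative homomorphism work.

```python
# Raw (unpadded) RSA: Enc(x1) * Enc(x2) mod n decrypts to x1 * x2 mod n.
p, q = 61, 53                        # toy primes (insecure sizes)
n = p * q                            # public modulus
e = 17                               # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def enc(x):
    return pow(x, e, n)              # x^e mod n

def dec(c):
    return pow(c, d, n)              # c^d mod n

x1, x2 = 12, 34
c = (enc(x1) * enc(x2)) % n          # multiply the two ciphertexts
assert dec(c) == (x1 * x2) % n       # ... and get the product back
```

The same multiply-the-ciphertexts pattern gives the Goldwasser-Micali XOR homomorphism and the El Gamal multiplicative homomorphism mentioned above.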
And then there is a work of Ishai and Paskin from a few years ago that can actually evaluate branching programs without making the ciphertext grow too large. So we have a few examples of things that can evaluate non-trivial functions on encrypted data. There is a different line of solutions that I will not talk about, I just want to mention. Starting from Yao, which also lets you compute on encrypted data, except over there the size of the ciphertext does grow. Nonetheless, it gives you things that the trivial solution from before doesn't give you. And there's a variant of it that does NC1 circuits, by Sander, Young, and Yung. But these all do not satisfy the requirement of compactness. So we're not going to be doing that. What we really want is something that looks like this. It would be really nice if we had the plaintext space being Z2, so bits. And we're going to have our ciphertexts live in some algebraic ring R that has its own addition and multiplication operations. And have this property that if we take two ciphertexts and just add them in this algebraic ring, what we are going to get is an encryption of the sum of the two plaintext bits. And similarly for multiplication; we would really like to have something like that. If we could get something like that, then we can evaluate any function. Because any function you can write as a Boolean circuit. Any Boolean circuit you can write as a polynomial. Once you can add and multiply, you can compute polynomials and you're done. I want to stress that this is a particular way of doing homomorphic encryption, with the ring and everything. And we're not going to get exactly that. But it's a good motivation. It's a good thing to keep in mind. And we will get something quite similar at some point during the talk. So with that, I'm going to start the first part, which is covering the Gentry 2009 blueprint. 
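To see why add-and-multiply suffices: over Z2, XOR is addition and AND is multiplication, so any Boolean circuit becomes a polynomial. A small sketch, using majority-of-three as the example function (the function choice is mine, just for illustration):

```python
# Over Z2: XOR is +, AND is *, NOT x is 1 + x. So every Boolean
# circuit is a polynomial. E.g. the majority of three bits:
#   maj(a, b, c) = ab XOR bc XOR ca = ab + bc + ca  (mod 2)

def xor(x, y):
    return (x + y) % 2          # addition in Z2

def and_(x, y):
    return (x * y) % 2          # multiplication in Z2

def majority(a, b, c):
    return (a*b + b*c + c*a) % 2

# Check the polynomial against the truth table of majority.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
```

So a scheme that can add and multiply ciphertexts in a ring can, in principle, evaluate any such polynomial on encrypted bits.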
And this blueprint is how you can evaluate any function of your liking on encrypted data in four easy steps. The first step is you come up with an encryption scheme based on linear error-correcting codes. So just for the purpose of this tutorial, ECC is not elliptic curve cryptography; these are error-correcting codes. And if you have encryption that builds on these linear error-correcting codes, you get additive homomorphism almost for free. The second step: you're going to take these error-correcting codes and embed them in some algebraic ring. You're just going to have a code that happens to live in an algebraic ring. And since the ring also has multiplication, you get additions and multiplications, but only for a few operations. So you are able to evaluate some polynomials, if they have low enough degree, but not too much. That's nice. You can do many things with it, but you can't do everything. The third and crucial step is a bootstrapping procedure that allows you to take an encryption scheme that supports homomorphism for a few operations, but not too few, and bootstrap it into an encryption scheme that allows you to do any number of operations. And then step four is everything else that you need to do in order for the first three steps to work. All kinds of fun activities like squashing and other nice things. Let's start. Encryption from linear error-correcting codes. The underlying observation, or assumption really, is that for random-looking codes, it's very hard to distinguish points that are close to the code from points that are far away from the code. So in this two-dimensional picture, the codewords are the red dots in the middle. So here it's actually quite easy to distinguish points that are close to the code from points that are far away. But believe me, if I add 500 more dimensions to this picture, then it would be hard to distinguish things that are close to the code from things that are far away. 
And actually, we already have many cryptosystems that are built around this hardness. Arguably, the McEliece cryptosystem from '78 already is, and other encryption schemes based on that principle. And then there are many lattice-based cryptosystems, like the Ajtai-Dwork cryptosystem, the GGH cryptosystem, encryption by Regev, and many others, actually, that are already based on this idea. How do you use it for encryption? So here is how you use it for encryption. In your key generation, you just choose a random code. But it has to have this property that it has two different representations. There's a good representation of the code that allows you to correct errors. And then there's the bad representation of the code, such that if you only know the bad representation, you cannot distinguish things that are close to the code from things that are far away. And then an encryption of 0 will just be a point close to the code. And an encryption of 1 would be a random point in your domain, which with high probability would be far away from the code. And if you know the good representation, if you have the secret key, then you can take the ciphertext and try to correct errors. And if you succeed, then you know that it was close to a codeword. If you fail, then you know that it was far away from the code. So that gives you a way to decrypt: if you know the secret key, you can distinguish encryptions of 0 from encryptions of 1. And if you don't know the secret key, you cannot, by our assumption that it's hard. Here's an example. Think of the integers mod p. So this is a code in Z, over the integers. The code is determined by a secret integer p, and the codewords are all the multiples of p. And the good representation of the code is p itself. So it's a code. It's an additive code. If you take two multiples of p and add them, you get another multiple of p. 
And clearly, if you know p and you're given some number x, you can tell whether x is really, really close to a multiple of p or not. And think of really, really close at this point as being within the square root of p of a multiple of p. We're actually going to use things that are even closer, but never mind that. The bad representation in our case can be, for example: I'm giving you one multiple of p, namely n, which equals p times q, and many other near-multiples of p. So I have this one exact multiple of p, and then I have all the others that I pick from these green intervals. So I'm going to give you many, many of those. And then if you want to encrypt a zero, you want to come up with a word that is close to the code. What you can do is take a subset sum of these xi's, modulo n, and add a little bit of extra noise to it. So if you think about all of these operations as happening mod p, you will see that what you get is something where you just add all the little distances in the xi's and then the little bit of extra noise that you've chosen. And if adding all of these little bits of things still keeps you fairly close to a multiple of p, then you're fine. This is an integer close to a multiple of p. And for an encryption of one, you're just going to choose a random integer mod n, which with high probability will not be that close to a multiple of p. Actually, for us, it would be better to have a different way to encode your plaintext inside the ciphertext. We're going to choose both encryptions of zero and encryptions of one as integers close to the code, integers close to multiples of p, except the distance from our ciphertext to the nearest multiple of p would be either even or odd. And this is how we're going to encode our plaintext. So the plaintext is always encoded in the noise. So if you think of error-correcting codes, then you may need to shift your thinking a little bit. We do not encode things in the codeword. 
We encode things in the distance to the code, in the noise, actually. Security of this thing is completely equivalent to security of the previous version that I described on the previous slide, as long as p is odd. And that's easy to see, and I'm going to show something similar sometime in the second half of the talk, so I'm not going to try to prove this now. So let's just look at the example of the integers mod p. So an encryption of a bit b, in this case, would be: you take a subset sum of the xi's and add your noise. That gives you a point close to the code. You multiply it by 2. That gives you an integer close to the code, but now the distance from the code is guaranteed to be even. And then you add your bit. So if your bit was 0, then you have an even distance to the code. If your bit was 1, then you have an odd distance to the code. And you set the parameters so that that thing, the noise, is still much, much smaller than p over 2. Decryption, in this case, would be: you just take c mod p. That gives you the distance between c and the nearest multiple of p. And by c mod p, I actually mean reducing it into the symmetric interval between minus p over 2 and plus p over 2. And then you take that integer and reduce it mod 2. You just look at the least significant bit. That's your decryption. This is an instance of encryption from linear error-correcting codes. So why is this additively homomorphic? Well, the reason it's additively homomorphic is because the code is linear, and when you add codewords, you get another codeword. So take two ciphertexts and add them. That means adding the codewords and adding the noises. Adding the codewords gives you just another codeword. Adding the noises gives you something which is still quite small. So if each one of these things was, let's say, the square root of p, then the sum would be twice the square root of p, which is still much, much smaller than p. 
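Here's a toy Python sketch of this mod-p example, in a simplified symmetric-key form: a ciphertext is just a near-multiple of the secret integer p, with the plaintext bit carried in the parity of the noise. The real scheme publishes near-multiples of p as a public key; the parameter sizes below are illustrative only and offer no security.

```python
import random

p = 1000003                        # the secret odd integer (toy size)

def enc(b):
    """Encrypt bit b as a near-multiple of p, noise parity = b."""
    q = random.randrange(1, 2**20) # which multiple of p we sit near
    r = random.randrange(0, 100)   # small noise, much smaller than p
    return p * q + 2 * r + b       # distance to the code is 2r + b

def dec(c):
    """Reduce mod p into (-p/2, p/2], then take the parity."""
    d = c % p
    if d > p // 2:
        d -= p
    return d % 2

# Fresh ciphertexts decrypt correctly.
for b in (0, 1):
    assert dec(enc(b)) == b

# Additive homomorphism: adding ciphertexts adds the noises, and the
# parity of the summed noise is the XOR of the plaintext bits.
for b1 in (0, 1):
    for b2 in (0, 1):
        assert dec(enc(b1) + enc(b2)) == (b1 ^ b2)
```

The additions work because the summed noise (at most a few hundred here) stays far below p over 2, so there is no wraparound.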
And if the noise, the yellow part, is still much smaller than the minimum distance of the code over 2, then that yellow part is the distance between your ciphertext and the nearest codeword. And therefore, when you take that distance mod 2, you get your plaintext. So you get additive homomorphism as long as things remain close to the code. So it's actually useful to think of the mod p example, because for other codes, I didn't say exactly what it means to reduce things mod 2. But it usually is fairly easy to figure out what it means to reduce things mod 2, even if they're not integers. For example, vectors you can reduce mod 2, et cetera. So this gives us step one. Now we know how to build encryption schemes from error-correcting codes, and we know that if the error-correcting codes are linear and have this property of good and bad representations, then we get additive homomorphism. But now we want a special kind of code that lives inside an algebraic structure called a ring, where you have both addition and multiplication. So suppose you have that. The encryption scheme would be exactly the same, except now you also have a well-defined operation of what it means to take two ciphertexts and multiply them. So what happens when you multiply two things? Well, each one of them is of the form a codeword plus noise. So when you multiply them, you can open parentheses, and then you get the first codeword times something, plus something else times the second codeword, plus the product of the two noise components. Now, you would like a codeword times something, and something times a codeword, to also be in the code. This is something that you would expect for addition, because this is a linear error-correcting code, and when you add codewords, you get another codeword. But here you're talking about a different operation. This is multiplication inside the ring, whatever that means. 
So we want to make sure that this is still in the code, and this does happen when the code is an ideal in that ring. I don't know if it has to be an ideal, but this thing does happen when the code is an ideal in the ring. So we need the code to be an ideal in the ring, and we also need the property that when you take these two noise components, which were individually small, and multiply them, you get something that is also small. So that's a property of the ring. There are rings where you have that guarantee: if you take two things that are fairly small in themselves and multiply them, you still get something which is small. So if you have these two properties, the ring maintains small things and the code is an ideal, then the distance between the product of the two ciphertexts and the code is still going to be the product of the two noise terms. And when you take the product of the two noise terms and reduce it mod 2, the only thing that's left is b1 times b2. So now you have several requirements of how the code behaves and how the ring behaves. But if all of these requirements are satisfied, then you can add ciphertexts, and you get an encryption of the sum of the bits, and you can multiply ciphertexts, and you get an encryption of the product of the bits. So let's go back to our example, the integers mod p. This is, by the way, the example of van Dijk et al. from Eurocrypt last year. The secret key is this integer p. The public key is this multiple of p called n, and all the xi's that are near-multiples of p. And then the encryption of a bit is a number close to a multiple of p, where the distance is either even or odd depending on your bit. The decryption is c mod p mod 2. And if you add two ciphertexts, then you get a multiple of p, where this multiple depends on the mod n operation and the two multiples that you started from. So it's a multiple of p plus some noise, which, when you reduce mod 2, gives you b1 plus b2. 
And similarly, if you multiply two ciphertexts, then you get a multiple of p plus noise. And if you set your parameters right, then this yellow noise is still going to be much, much smaller than p over 2. So now we want our noise to be even smaller than the square root of p, let's say the cube root of p, so that the product would still be much smaller than p over 2, and therefore you don't have a wraparound: this is still the distance to the nearest multiple of p. So you get additive and multiplicative homomorphism as long as the noise is smaller than p over 2. So that's a one-slide description of the somewhat homomorphic encryption from van Dijk et al. So let's summarize what we need up to now. We need a linear error-correcting code C that has good and bad representations, with this property that the good representation lets you correct errors but the bad representation doesn't let you distinguish far from near. We need C to live inside an algebraic ring R, we need C to be an ideal in R, and we need the sum and the product of small elements in the ring to still be small. And actually, we do have these structures. Usually they live in a Euclidean space. So for example, the integers mod p live inside R^1, the reals. And you can find structures like that also in higher-dimensional Euclidean spaces. They're usually associated with some lattices. And if you find this structure and you have these properties, then you get a somewhat homomorphic encryption scheme. It is an encryption scheme, it is a secure encryption scheme, and it lets you add and multiply ciphertexts inside that ring. And as long as you remain close to the code, you can still add and multiply. But the distance keeps growing. At some point, the distance will be large, and you will not be able to do any more operations. Some instantiations of this: the original Gentry paper had an instantiation based on polynomial rings. The security was based on the hardness of bounded-distance decoding in ideal lattices. 
Bounded-distance decoding is essentially error correction in ideal lattices. Then there is the integer ring that I used as a running example, where security was based on the hardness of the approximate GCD problem. It's essentially the problem of, given the public key, finding the secret integer p. And depending on how you set your parameters, it could be either something that we know is easy or something that we don't know how to do, so we can assume it's hard. Then we had an attempt to try to do the same using matrix rings, but that didn't really work. With matrix multiplication, you get an ideal which is either a left ideal or a right ideal, but not both. So we were able to show how to do quadratic polynomials using that, but not more. And I think tomorrow, or on Thursday, I forget, you're going to hear yet another instantiation that you can cast as belonging to that framework, by Brakerski and Vaikuntanathan, which also uses polynomial rings. And the security there is based on ring LWE. So we're done with step one, we're done with step two, and now, finally, we can evaluate low-degree polynomials. So if we have a polynomial p and we have encryptions of the xi's (these things inside boxes here are encryptions), then we can come up with this complicated apparatus that lets us take the encryptions of the xi's, process them, and get something else, which is an encryption of the polynomial evaluated on the xi's. Now, the original ciphertexts, however, were green. They had very little noise in them. Once you do the evaluation, the noise grows and you get an orange ciphertext that already has more noise in it. So you can still decrypt it. I mean, it's still orange. But if you try to evaluate on it any more, it's going to turn red, and then it has too much noise and you won't be able to decrypt it anymore. You're going to get a wraparound, if you think of the mod p example. 
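In the mod p picture, this green-to-orange-to-red progression can be made concrete: multiplying ciphertexts multiplies their noises, so repeatedly squaring a ciphertext squares its distance to the code until it passes p over 2 and wraps around. A toy sketch, with sizes of my own choosing purely for illustration:

```python
p = 2**40 + 15                 # secret odd modulus (toy size)
noise0 = 601                   # odd starting noise, so this encrypts 1
c = p * 7 + noise0             # ciphertext at distance 601 from the code

def dec(c):
    d = c % p
    if d > p // 2:             # reduce into the symmetric interval
        d -= p
    return d % 2

for level in (1, 2, 3):
    c = c * c                              # homomorphic AND with itself
    true_noise = noise0 ** (2 ** level)    # exact distance after squaring
    within_budget = true_noise < p // 2
    print(level, within_budget, dec(c))
# Levels 1 and 2 stay below p/2 and still decrypt to 1 ("orange");
# at level 3 the noise exceeds p/2, we get a wraparound, and
# decryption is no longer guaranteed ("red").
```

Here the noise squares at every level (601, then 601^2, 601^4, 601^8), so a 40-bit p is exhausted after just two multiplication levels.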
And an encryption scheme that has this property, we're going to call a somewhat homomorphic encryption scheme. And then we're going to use bootstrapping in order to handle higher degrees. What we have here is an orange ciphertext, and it would be nice to be able to come up with another ciphertext, maybe also an evaluated one, but one that has less noise in it. If we somehow were able to take that encryption of y and come up with a different encryption of the same y that has less noise in it, that would be nice. Because then we can do a little bit more work on this yellow y until it turns orange, and then apply our noise reduction again. So this is what we may want to do, and we're going to use bootstrapping in order to do that. So think of one fixed ciphertext c for now. Think of it as an orange ciphertext. And consider the following function. It's a function that has this ciphertext hardwired in. And what it does is it takes its input, its argument (think of it as a secret key), and tries to use that secret key in order to decrypt this fixed ciphertext. So our input now is an alleged secret key; c is just hardwired inside the function. The first thing I want to say about this function is that it is well defined. This is a well-defined function. And since it's a well-defined function, you can write it out as a polynomial. And you can hope that this polynomial has low degree. So let's hold on to that hope for now. If this function is indeed a low-degree polynomial, then one thing that we can do is add to the public key an encryption of the secret key. This is not something that we usually do in encryption schemes. You usually don't want to encrypt the secret key under its own public key. But this is what I'm suggesting we do here. And it does require an extra assumption on the encryption scheme, specifically the assumption that doing something like that doesn't break your encryption scheme. It's called circular security. 
We usually don't have much to say about circular security. We have very few encryption schemes where we can prove that it works, that the scheme remains secure. But in most cryptosystems, we don't know how to break it either. So we might as well assume that it's hard. Now we have our ciphertext, and it's an orange ciphertext, and we want to decrease the noise in it. So what we're going to do is look at this as our fixed c and write the description of the function d sub c as a polynomial. Now that we have a polynomial, we can build this apparatus for it. And we have the encryptions of the secret-key bits. So we're just going to pass them through this apparatus and evaluate the function d sub c on the encryption of the secret key. What we get is an encryption of what you would get from d sub c of the secret key, which is just the decryption of your original ciphertext. So we now have a new ciphertext encrypting the same y. And if the degree of this d sub c is low enough, then maybe that ciphertext now has less noise than the orange ciphertext that you started from. So this is bootstrapping. The thing to stress here is that the homomorphic computation is applied only on the fresh encryptions of the secret-key bits. You never actually try to multiply or add these orange ciphertexts. The only things you're multiplying and adding are these green ciphertexts, the encryptions of the secret-key bits. Similarly, suppose you have two ciphertexts that are both orange and you want to multiply them. So define the function m sub c1 c2 that has c1 and c2 hardwired in. It takes as input an alleged secret key, tries to use it to decrypt c1, tries to use it to decrypt c2, and then multiplies the results. Again, a well-defined function. We can write it as a polynomial. We can come up with this apparatus for the polynomial. We take the secret-key bits, and we pass them through that polynomial. 
We're going to get an encryption of m sub c1 c2 applied to the secret key, which, lo and behold, is an encryption of the product of the two things that are encrypted inside the original orange ciphertexts. And we did all of that without ever having to compute on these orange ciphertexts. We just used them in order to write the description of the function m sub c1 c2. That's the only use that we made of them. So this is the bootstrapping transformation, probably the single most beautiful thing that I will cover in this tutorial. So before I go on, I'll pause for a second so you can read the slide again, and then I'll go through it once more. We have two orange ciphertexts encrypting y1 and y2. We're not multiplying them. We take them, and we use them to define the function m sub c1 c2. We build this machinery that can evaluate m sub c1 c2 on encryptions. We apply that to the encrypted secret-key bits that we have in our public key. We get out a new ciphertext that encrypts m sub c1 c2 applied to the secret key. And by definition, m sub c1 c2 applied to the secret key is the product of the two things that were hidden inside our original ciphertexts. And we did all of that without ever processing the orange ciphertexts. So that's bootstrapping. So then comes step four. And in step four, we discover, unfortunately, that none of the instances that we had before (the Gentry one, the van Dijk et al. one, the Brakerski and Vaikuntanathan one), none of them is actually able to evaluate its own decryption. I mean, they can evaluate low-degree polynomials, and decryption is a low-degree polynomial, but not low enough. So you use all kinds of tricks to squash the decryption circuit. You do all kinds of things in order to make decryption simpler. You want to have the same encryption scheme with the same ciphertexts, but you want to be able to compute decryption more simply. 
So you add more things to the public key that give some additional hints about the secret key, and you assume that that doesn't break your cryptosystem. And then you post-process the ciphertext. And once it's post-processed, decryption becomes a simpler operation; it's then of low enough degree and you can do bootstrapping. So this is essentially what you do. I'm not going to talk about how you do that. It's not trivial, it's very technical. The one thing that I will say about it is that it requires yet another assumption: in order to say that the things that you added to the public key don't hurt you, you need to assume that the sparse subset sum problem is hard. Yeah, I'm running way ahead of the time where I thought I'd be now. So, okay, I'm going to try to talk a little slower now. Let's talk about performance a little bit. So the underlying somewhat homomorphic encryption may be reasonable in terms of performance. I mean, depending on what you want to do, for several applications it takes you something like maybe a second to 10 seconds to do a single multiplication. That could be a reasonable price to pay, depending on what you're trying to do. So if all you wanted is to evaluate low-degree polynomials, then you can use the somewhat homomorphic encryption as is, and you may get reasonable performance. Bootstrapping, however, is different: there's something inherently slow about bootstrapping. Why? Every time you want to do a single operation on these orange ciphertexts, you have to run the entire decryption procedure on encrypted data. Now, for all kinds of reasons, the decryption cannot ever be very, very fast. So the minimum that you can expect, at least using the techniques that we knew up to a few months ago, is an overhead of something like the security parameter to the power 3.5, and then some polylog factors after that. 
And if you want to know in practical terms, then the best implementation, as far as I know, is the implementation of Craig's and mine that we reported at Eurocrypt this year, where we took a variant of the original Gentry cryptosystem and implemented it. The public key size is roughly two gigabytes, so it does fit on a single DVD, barely. And bootstrapping takes you somewhere between three and 30 minutes on a fairly strong machine. So it is implementable, which was sort of a surprise to us when we got there, but it's not something that you can actually use. It's very hard to think of an application that can wait half an hour to evaluate a single gate. There are similar performance numbers in a paper that you'll see, again, I don't remember if it's tomorrow or Thursday, by Coron et al., who implemented the scheme of van Dijk et al. from Eurocrypt last year. But it may be that their parameter choice was a little too aggressive; there is a new result that claims their parameters are not conservative enough, and if they need to ramp up their parameters, then their times would be even worse than this half an hour. So with that, I'm actually done covering the Gentry blueprint. I thought that would take an hour, but it took only 40 minutes. So what I'm going to do is one more slide, then take the break, and then do the rest. So here's one more thing that I want to talk about; I'm just gonna spend one slide on it. Chimeric homomorphic encryption. This is a new work with Craig. Essentially we still wanna do bootstrapping and everything, but we don't like the subset sum assumption, and we wanna know whether we can do without it. The way we handle it is we take a hybrid of a somewhat homomorphic encryption and a multiplicatively homomorphic encryption. A multiplicatively homomorphic encryption is something like ElGamal, where you can multiply ciphertexts as much as you want, but you can't add them.
A chimera, by the way, is a mythological creature made up of parts of different animals, and this cryptosystem sort of looks similar, because you take your somewhat homomorphic scheme and you stack on its back this multiplicative scheme. The very high-level idea is that you express decryption as a restricted arithmetic circuit. It's an arithmetic circuit with high fan-in that has a bottom level of addition gates, then a level of multiplication gates, and then one addition gate at the top. And it's restricted in the sense that, for our purposes, what we want to do is use the multiplicatively homomorphic encryption to evaluate the middle layer. In order to do that, we need to have ElGamal ciphertexts that encrypt the things that need to go on these wires, and the circuit is built in a particular way so that we can pre-process all of that. We can put a whole bunch of ElGamal ciphertexts in the public key, and then depending on the thing that we try to bootstrap, depending on our orange ciphertexts, we choose which of them we wanna put on these wires, and that's it. So that lets us switch to ElGamal encryption temporarily and do the multiplication. Now it's not low degree anymore, it's a fairly high degree, but ElGamal doesn't care; it can just multiply as much as you want. And then here we need to switch back to the additive homomorphism, and for that we use a bootstrapping-like approach: we're gonna evaluate the ElGamal decryption homomorphically. The difference is that now the somewhat homomorphic encryption needs to evaluate ElGamal decryption and not its own decryption. So the thing that you can do is start with whatever security parameter you want for ElGamal, and then make the parameters for your somewhat homomorphic encryption large enough to evaluate that.
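The multiplicative homomorphism of ElGamal that this construction leans on is easy to see in code. Below is a toy instance with made-up, insecure parameters: multiplying two ciphertexts componentwise yields an encryption of the product of the plaintexts, and the degree can grow as much as you like.

```python
import secrets

p = 2039                       # toy prime modulus, far too small to be secure
g = 2                          # toy generator

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)     # secret key x, public key h = g^x

def enc(h, m):
    y = secrets.randbelow(p - 2) + 1
    return pow(g, y, p), (m * pow(h, y, p)) % p

def dec(x, ct):
    a, b = ct
    return (b * pow(a, p - 1 - x, p)) % p   # b * a^(-x) mod p

def mult(c1, c2):
    # componentwise product of ciphertexts encrypts the product of plaintexts
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

x, h = keygen()
c = enc(h, 3)
for _ in range(4):             # ElGamal doesn't care how high the degree gets
    c = mult(c, enc(h, 3))
assert dec(x, c) == 243        # 3**5 = 243, well below p, so no wraparound
```

Of course, there is no way to add two ElGamal ciphertexts, which is exactly why the scheme has to switch back to the somewhat homomorphic side for the addition layers.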
So that saves you from the circularity of the scheme having to evaluate its own decryption, which is why you can get away without squashing and without the sparse subset sum assumption. I know that that's not a very descriptive explanation, but there is a paper on ePrint that you can read if you're interested, and I don't want to go into details on that because I want to go into many more details on other schemes. So with that, I'm actually gonna take an early break, so we'll come back here at three, and then I'll do the other part, which will not take two hours, I really hope.