Right? All set. So the last paper is by Pratyay Mukherjee and Daniel Wichs, and Daniel is giving the talk. All right, thank you. So I'm going to tell you about multi-key fully homomorphic encryption and an application of it to two-round multi-party computation. Probably most of you already know what fully homomorphic encryption is, but just a very brief review. We have some party that has a value x, and fully homomorphic encryption lets you encrypt this value x just like you would with a standard encryption scheme. So you choose a public key and secret key, encrypt x, and send this ciphertext over to, let's say, the cloud. Now you want the cloud to do some computation for you over the encrypted data, and fully homomorphic encryption enables this: the cloud can homomorphically compute any function f over the data by doing some corresponding homomorphic operation on the ciphertext, and output some ciphertext c* which decrypts to f(x). So this is fully homomorphic encryption, or FHE for short. Multi-key FHE is just the analog of this in the setting with multiple parties. Here we have n parties with inputs x1 up to xn, and they're going to choose independent public and secret keys individually; they're not going to coordinate with each other. Each party just encrypts its value xi under its own public key and sends these n ciphertexts to the cloud. Yet we still want the cloud to be able to compute on these ciphertexts even though they're encrypted by different parties under different keys. So there should still be some homomorphic procedure that allows the cloud to take these ciphertexts, evaluate a function f on them, and compute a ciphertext c* such that c* decrypts to f(x1, ..., xn). Okay, but under which key does it encrypt the output?
And if you start thinking about it, it shouldn't decrypt under the individual secret key of any one of these parties, because if it did, that party would learn information about the inputs of the other parties, which are encrypted under different keys; it shouldn't learn anything about them. Instead, what we want is that if all the parties get together and pool their secret keys, there should be some way for them to decrypt the ciphertext and learn f(x1, ..., xn). Actually, we want something a little more here. We want some nice distributed way of doing this decryption where the parties don't have to reveal their secret keys to each other. Rather, each party computes some partial decryption of the ciphertext, the partial decryptions are combined, and out comes f(x1, ..., xn). So you want to be able to do this decryption in a distributed manner. A little bit of background on this problem: multi-key FHE was originally proposed by López-Alt, Tromer, and Vaikuntanathan, and they gave a construction based on the NTRU assumption (this is something to do with ideal lattices), but they didn't have a nice way of doing the distributed decryption; really, the parties just had to come together and pool all their secret keys to decrypt. Last year at CRYPTO, there was a really nice construction from the learning with errors assumption that had a really good idea in it, but it was a very complicated construction. Essentially, the only way to understand it was to follow a couple of pages of equations; there really wasn't a nice framework for understanding what's going on, and they didn't discuss how to do distributed decryption. So what I'm going to show you in this talk is, first, a really simplified construction based on the learning with errors assumption. It's really an adaptation of the GSW (Gentry-Sahai-Waters '13) FHE scheme, and there's a nice framework for understanding this construction.
I think it's significantly simpler than what was done in the previous work. Moreover, there's a really nice one-round distributed decryption procedure for this construction, and this gives you an important application to multi-party computation. So let me tell you now about this application. First of all, what is multi-party computation? We have some parties with inputs x1, x2, x3. They want to compute some function f over these inputs by running a protocol together, in such a way that you have correctness (every party gets the output) and security (nothing else about the inputs is revealed). And I'm thinking of an arbitrary number of corruptions in this setting. Okay. So with multi-key FHE, we get a really simple two-round protocol for multi-party computation. What's the protocol? Each party chooses its own public key/secret key pair, uses the public key to encrypt its input xi, and just broadcasts the ciphertext. At the end of this first round, all the parties see all the ciphertexts, and each party can run the homomorphic evaluation procedure to get a ciphertext c* which encrypts the output of the computation under all the public keys. In the second round, the parties run the distributed decryption procedure to decrypt this ciphertext and get the output. This protocol is secure against semi-honest corruptions; if you want malicious security, you have to add non-interactive zero-knowledge proofs. It's actually the first two-round multi-party computation protocol, at least in the common random string model, that you can prove secure under a nice assumption like LWE; previously we only knew how to do this from indistinguishability obfuscation. So let me tell you now how to actually construct this multi-key FHE. It's a good tool, it lets us do MPC, but how do we actually get it?
First I'm going to tell you about the Gentry-Sahai-Waters fully homomorphic encryption scheme from LWE, and then we'll see how to add a few tricks to convert it into a multi-key FHE. Here we have to actually roll up our sleeves and do a little math. We start with the learning with errors assumption, which is the assumption these schemes are based on. Essentially, this assumption says that random noisy linear equations are indistinguishable from uniform. A little more precisely, we take a uniformly random n-by-m matrix, so think of it as a fat matrix B, over Z_q. Then we take a random linear combination of the rows of B and add a little error: we perturb each entry by some small error and get out this vector little b, and we just stack these two things together. Now this matrix is statistically far from uniform: for a uniformly random matrix, it's unlikely that the last row would be close to a linear combination of the previous rows. But the LWE assumption says that, nevertheless, computationally you cannot distinguish it from a uniformly random matrix. So this is the LWE assumption, and it's as hard as approximating the shortest vector in worst-case lattices. Now let's go to the fully homomorphic encryption scheme. In the FHE scheme, the public key is exactly the matrix you saw on the last slide, this LWE matrix; I'll call the whole matrix A. The secret key is a vector t, which is just negative s, where s is the linear combination, with a 1 appended at the end. And the main property to remember is that if you take t times A, you just get the error vector e, so you get something that's close to zero. I'm going to use this approximation notation to hide small things. Okay, so t times A is something close to zero. Main thing to remember. What about encryption?
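Before moving on, the key relation can be checked numerically. The following is my own toy sketch, not from the paper: the parameters are tiny and completely insecure, but it shows how stacking the noisy combination b under B gives a matrix A with t·A small.

```python
import random

random.seed(0)
q = 1 << 16                        # toy modulus (far too small to be secure)
n, m = 3, 8                        # A will be n x m; B is the top n-1 rows

# uniformly random "fat" matrix B over Z_q
B = [[random.randrange(q) for _ in range(m)] for _ in range(n - 1)]
# secret linear combination s and small error e
s = [random.randrange(q) for _ in range(n - 1)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]
# noisy combination of the rows of B: b = s*B + e
b = [(sum(s[i] * B[i][j] for i in range(n - 1)) + e[j]) % q for j in range(m)]

A = B + [b]                        # public key: B with b stacked underneath
t = [(-si) % q for si in s] + [1]  # secret key t = (-s, 1)

# t * A = -s*B + (s*B + e) = e, i.e. "close to zero" mod q
tA = [sum(t[i] * A[i][j] for i in range(n)) % q for j in range(m)]
print(all(v <= 1 or v >= q - 1 for v in tA))  # True: every entry is in {-1, 0, 1}
```

The point is just the algebra: the -s in t cancels s·B exactly, leaving only the small error vector.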
So encryption works like this. To encrypt a bit x under this public key A, I take A times R, where R is a square matrix with just zero-one entries (all entries are short), and add x times G, where G is some fixed public matrix. I'll call it a gadget matrix; I'll tell you what it is in a second, you don't need to know for now. The main property of the ciphertext is that if you take the secret key t times C: well, I told you t times A is short, and R is short, so t·A·R is all short, and you get something close to x·t·G. So those are the two things to remember: t times A is small, and t times C is close to x·t·G. I'll say a ciphertext C is a good encryption of a bit x under a key t if this property holds; this is what it means to have an encryption of x. And I should say this property also lets you decrypt: just take t times C, and if the bit x is zero you get something close to zero, while if it's one you get something large. So you can figure out whether the bit is zero or one. Security is actually really simple. This is the public key, and this is an encryption of x. First, I can use the LWE assumption to replace the matrix A by a uniform matrix. Then I can use the leftover hash lemma to say that a uniform matrix times R looks like another independent, uniformly random value. That's the whole proof of security. What about the homomorphic evaluation? I'm going to show you how to take two ciphertexts that encrypt bits x1 and x2, meaning this property holds for each, and operate on them to get ciphertexts that encrypt x1 + x2 and x1 times x2. If I can do that, I can evaluate any circuit I want. Addition is really easy: I just add the ciphertexts.
And you can see that the same property holds: t times this new ciphertext is just (x1 + x2) times t·G, so I got an encryption of x1 + x2. What about multiplication? Well, I'd like to multiply the ciphertexts, but they're not the right dimensions; they're not square, so I can't really do that. Here I have to tell you what this gadget matrix G is, and we're going to use some properties of it. Actually, I don't need to tell you what the gadget matrix is, just its property: G is a matrix such that there's an efficiently computable function G-inverse, such that G times G-inverse of Y, for any Y, is just Y. If you think of the equation G times X equals Y, there are many solutions X; G-inverse of Y gives you one of these solutions, and in particular a solution with short entries, just zeros and ones. So a short solution exists, and in fact you can find it efficiently. G-inverse is an abuse of notation: it's not a matrix inverse, it's a function, but it sort of acts like a matrix inverse because G times G-inverse of Y is just Y. And here's the implementation: the matrix G is the powers-of-two matrix, and the G-inverse function is just bit decomposition. I take the entries mod q and just write them down in their binary representation. That's the whole idea. Now that I have that, here's how to do multiplication: I take the ciphertext C1 times G-inverse of C2. Look at what happens when I take t times this new ciphertext: t times C1 is close to x1·t·G, so I get (x1·t·G plus some error) times G-inverse of C2. Both the error and G-inverse of C2 are short, so their product is some short thing I can ignore, and G times G-inverse of C2 cancels out to give C2. So I get x1 times t times C2, and t times C2 is close to x2·t·G.
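The whole single-key scheme just described (gadget matrix, encryption, decryption, addition, multiplication) fits in a few dozen lines of toy Python. This is an illustrative reconstruction with insecure toy parameters, not the authors' implementation; the names (q, G, R, t) follow the talk.

```python
import random

random.seed(1)
logq = 16
q = 1 << logq          # toy modulus
n = 2                  # key dimension (toy-sized, completely insecure)
m = n * logq           # ciphertext width

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[(X[i][j] + Y[i][j]) % q for j in range(len(X[0]))]
            for i in range(len(X))]

# gadget matrix G = I_n tensor (1, 2, 4, ..., 2^(logq-1))
G = [[(1 << (j - i * logq)) if i * logq <= j < (i + 1) * logq else 0
      for j in range(m)] for i in range(n)]

def G_inv(Y):
    # bit decomposition: a 0/1 matrix X with G * X = Y (mod q)
    cols = len(Y[0])
    X = [[0] * cols for _ in range(m)]
    for i in range(n):
        for j in range(cols):
            for k in range(logq):
                X[i * logq + k][j] = (Y[i][j] >> k) & 1
    return X

def keygen():
    B = [[random.randrange(q) for _ in range(m)] for _ in range(n - 1)]
    s = [random.randrange(q) for _ in range(n - 1)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]
    b = [(sum(s[i] * B[i][j] for i in range(n - 1)) + e[j]) % q
         for j in range(m)]
    A = B + [b]                          # public key
    t = [(-si) % q for si in s] + [1]    # secret key, t*A = e is small
    return A, t

def encrypt(A, x):
    # C = A*R + x*G with a random 0/1 matrix R
    R = [[random.randint(0, 1) for _ in range(m)] for _ in range(m)]
    xG = [[(x * g) % q for g in row] for row in G]
    return matadd(matmul(A, R), xG)

def decrypt(t, C):
    # column m-1 of t*G equals q/2, so (t*C)[m-1] is close to x*q/2
    v = sum(t[i] * C[i][m - 1] for i in range(n)) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

A, t = keygen()
C0, C1 = encrypt(A, 0), encrypt(A, 1)
print(decrypt(t, C0), decrypt(t, C1))        # 0 1
print(decrypt(t, matadd(C0, C1)))            # 0 + 1 = 1
print(decrypt(t, matmul(C1, G_inv(C1))))     # 1 * 1 = 1
```

With these parameters the noise after one multiplication is at most m*m + m = 1056, comfortably below the q/4 decryption threshold, which is the noise-growth issue the talk mentions next.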
Okay, so by doing this, I get a ciphertext that encrypts the product of the bits. That's it; that's the whole scheme, that's how to do fully homomorphic encryption. The noise grows as I do homomorphic evaluation, so the error in the ciphertext gets bigger and bigger as I compute a circuit. That's okay, and if you don't like it, you can get rid of it with bootstrapping. So that's fully homomorphic encryption; now let's see how to extend it to the multi-key setting. In this scenario we have many parties with different public key/secret key pairs. Each party i has, let's say, secret key ti. Let me define the expanded secret key as just the concatenation of these n secret keys, a bigger vector that concatenates all of them. My goal will be to take a ciphertext belonging to an individual party i and somehow expand it into a multi-key ciphertext which encrypts the same message, but under this expanded key. So I want to take an individual party's ciphertext C, which satisfies this equation (it encrypts some x under the secret key ti), and create a bigger ciphertext, call it C-hat, which satisfies the same equation but with respect to the expanded key. Everything grows; the gadget matrix here also grows, it's just a bunch of gadget matrices combined together. Now, if I can do this, then I can expand every party's ciphertext so that they're all ciphertexts under the same expanded key, then do the homomorphic operations just like before on the expanded ciphertexts, and then decrypt with the expanded key. So as long as I can do this expansion step, I'm completely done; all I need to show you is how to do it. And I'm going to show you how to do this for two parties; you'll have to trust me that everything extends naturally to more parties.
Okay, so here are two parties with two different public/secret key pairs, except I'm cheating a little bit: they're going to use the same matrix B. You can think of that as a common parameter, like everyone using the same group generator in an elliptic-curve group, something like that; that's the analog. So they're using the same matrix B, but they're choosing the actual public key and secret key independently. Now party one has some encryption of a bit x that looks like this, and in addition it's going to create some helper information that will allow us to do the expansion step; I'll tell you what that is on the next slide. But remember, this is the main equation that holds for the ciphertext; this is what it means to be a good encryption of x. Now let's try to do something odd and decrypt the ciphertext with party two's secret key. That shouldn't work, right? You shouldn't be able to take a ciphertext generated by party one and decrypt it with the secret key of a different party; that would be bad. But actually something interesting happens. If you follow the math, you get the right thing, except for an additional term over here which depends on the two public keys of parties one and two and the randomness R that was used to create this encryption. So you get the value you want, plus some blinding term. And this will allow us to create an expanded ciphertext C-hat which looks like this: I put C on the diagonal and a matrix D in the top right, where D is a mask term satisfying the property that t1 times D exactly cancels out this blinding term. If I do that, then the right equation holds, so this ciphertext C-hat encrypts the same bit under the expanded secret key. On the left-hand side you just get t1 times C, which is x·t1·G.
On the right-hand side you get the combination of this term and this term (maybe there should be a minus sign over here). Okay, so that's great; everything works out. I still haven't told you what the helper info is and how to create this matrix D; those are the two things left. So, to recall the goal: I have to tell you how party one can create some helper information that will later allow us to create this matrix D satisfying this property. You might ask, why can't the helper information just be the matrix D itself? That would be the easiest thing, right? Party one has t1, so it could create such a matrix D. Well, the problem is that party one doesn't know b2, the public key of party two; party two isn't even around yet. Party one knows nothing about party two; they're totally uncoordinated. So you have to create this helper information universally, in a way that later lets you expand party one's ciphertext into a multi-key ciphertext for any other party two that comes along. And here's the idea: the helper information is just going to be another GSW encryption of each bit of the randomness R used to create the matrix C. So we create a bunch of GSW encryptions, one for each component of this matrix R. That's okay: I'm encrypting some data and then encrypting the randomness used to encrypt that data, and that doesn't hurt security. Later on, when party two comes along and I do have the public key of party two, I take all of these ciphertexts and homomorphically combine them to get, essentially, a ciphertext D which encrypts (b2 - b1) times R.
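The linear algebra behind this cancellation can be checked with a toy computation. One big caveat: in the real scheme, D is computed homomorphically from the helper encryptions of R's bits; here, purely to verify that t1·D + t2·C comes out close to x·t2·G, we cheat and hand the expansion R and both public keys in the clear. Parameters are toy-sized and insecure, and the construction of D below is my own shortcut for illustration.

```python
import random

random.seed(2)
logq = 16
q = 1 << logq
n = 2
m = n * logq

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

# common random matrix B shared by both parties
B = [[random.randrange(q) for _ in range(m)] for _ in range(n - 1)]

def keygen():
    s = [random.randrange(q) for _ in range(n - 1)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]
    b = [(sum(s[i] * B[i][j] for i in range(n - 1)) + e[j]) % q
         for j in range(m)]
    return B + [b], [(-si) % q for si in s] + [1], b

A1, t1, b1 = keygen()   # party 1
A2, t2, b2 = keygen()   # party 2 (independent secret, same B)

G = [[(1 << (j - i * logq)) if i * logq <= j < (i + 1) * logq else 0
      for j in range(m)] for i in range(n)]

# party 1 encrypts x under its own key: C = A1*R + x*G
x = 1
R = [[random.randint(0, 1) for _ in range(m)] for _ in range(m)]
C = [[(v + x * G[i][j]) % q for j, v in enumerate(row)]
     for i, row in enumerate(matmul(A1, R))]

# blinding term (b1 - b2)*R shows up when t2 is applied to C;
# D must cancel it: here we cheat and build D from R, b1, b2 directly
mask = matmul([[(b2[j] - b1[j]) % q for j in range(m)]], R)[0]
D = [[0] * m for _ in range(n - 1)] + [mask]   # t1*D = mask exactly

# top-right block of the expanded ciphertext: check t1*D + t2*C ~ x*t2*G
v = [(sum(t1[i] * D[i][j] for i in range(n)) +
      sum(t2[i] * C[i][j] for i in range(n))) % q for j in range(m)]
t2G = [sum(t2[i] * G[i][j] for i in range(n)) % q for j in range(m)]
err = [min((v[j] - x * t2G[j]) % q, (x * t2G[j] - v[j]) % q) for j in range(m)]
print(max(err) <= m)   # True: only the small e2*R noise is left
```

Working it out: t2·C equals (b1 - b2 + e2)·R + x·t2·G, and t1·D contributes exactly (b2 - b1)·R, so everything cancels except the small e2·R noise. The hard part of the real construction, which this sketch skips, is producing D without ever seeing R in the clear.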
Now, this is not exactly the same as GSW homomorphic computation: this ciphertext D sort of encrypts a vector, whereas the GSW ciphertexts encrypt just bits, but essentially the same operations as GSW homomorphic computation let me do this. So that's really the main idea, and now everything works out: once I have this matrix D, I can expand the ciphertext of party one into a multi-key ciphertext for parties one and two. So that's the multi-key FHE; this is enough to do multi-key operations.

The one last thing I want to tell you is how to actually decrypt the ciphertexts you get at the end, in a distributed manner. Remember, at the end we have an expanded secret key t-hat, which is just the concatenation of the secret keys of all the parties, and an expanded ciphertext C-hat satisfying this equation, and we want to decrypt it. If I had t-hat in full, I could just multiply and recover x, but we want to do this computation in some distributed way where I only learn x and nothing else. And you can see that even if we could multiply t-hat times C-hat in a distributed way, that wouldn't be good enough, because you would learn all of x·t-hat·G; if x is one, that means you learn the secret keys of all the parties. So we don't actually want to do that computation; we just want to learn x. The first step is what I'll call a sanitization step: we take the ciphertext C-hat and shrink it down to a sanitized ciphertext, call it little c-hat, by taking C-hat times G-inverse of w, where w is the vector that's all zeros with q/2 at the end. What happens now if you take the inner product of t-hat with little c-hat? Well, t-hat times C-hat is close to x times t-hat times G, the G and G-inverse cancel out, and you get x times t-hat times w; w is all zeros with q/2 in the last position, and t-hat has a 1 as its last component, so you just get x times q/2, or something close to that, plus noise. But the nice thing is that t-hat·G dropped out of the equation, so it's now safe to compute the inner product of t-hat with little c-hat; it wasn't safe to do that before. So what do we want to do? We have this ciphertext c-hat that consists of n components, and we want its inner product with t-hat; that's just the sum of ti times ci. This gives us a really natural distributed decryption procedure: each party i computes the inner product of its secret key with its component of the expanded ciphertext, then we add them up and round. For security, each party is actually going to add some extra noise to this, and this ensures that the value pi, the partial decryption the party releases, doesn't reveal too much about its secret key. In fact, you can simulate pi given just the output, the value encrypted in the ciphertext, if you knew the secret keys of all the other parties. So even if all the other parties were malicious, your partial decryption doesn't reveal anything beyond the output. It's not immediately clear whether that's a strong enough security notion to do multi-party computation, but it turns out that it is: with a little more work and some additional tricks, you can use this security property to do secure multi-party computation.

Okay, so that's pretty much what I wanted to tell you, so let me conclude. We showed how to do multi-key fully homomorphic encryption with a nice distributed decryption procedure, and this lets us do two-round multi-party computation from LWE. Several open problems remain. One is to remove the public parameters in the multi-key FHE construction: right now we need to rely on some common randomness, and we'd like to remove that. This would also mean that we would be able to remove the common reference string
in our semi-honest two-round MPC. For malicious security you do need a common reference string; for semi-honest you might not need it. We have it, but hopefully, if you solve this problem, you can remove it. We'd also like to improve the efficiency: right now, when you expand the ciphertexts, all the dimensions blow up by a factor of the number of parties, so all the operations become more costly, and that doesn't seem inherent. So that's a good open question. And the last open question: can you do non-compact multi-key FHE from a simpler assumption, like DDH? You might think that multi-key FHE is more complicated than FHE, which is true, but FHE is trivial if you don't require compactness, that is, if you don't require the ciphertext to be small, whereas multi-key FHE is still interesting without compactness. And if you think about the application to multi-party computation, we didn't really need compactness: even without it you get a multi-party computation protocol. So maybe you can do this from a simpler assumption; maybe you don't really need learning with errors.

Questions? Can you elaborate a little more on what extra stuff you have to do to get MPC at the end? Yeah, so it turns out that without doing any extra stuff, you get a multi-party computation protocol that's secure when all but one of the parties are corrupted, so just one party is honest and everyone else is corrupted. It turns out that's not necessarily the strongest notion: it doesn't give you security when, say, two parties are honest and n minus two are corrupted. So we showed that there's a fairly generic transformation that lets you take a protocol that handles all-but-one corrupted parties (you have to change the function you're computing) and use it to handle any number of corrupted parties; that's a fairly generic transformation. Okay, is that because, if there are two honest parties, say, then you don't know how to simulate them? Yeah, exactly: if there's only one honest party, I know what its partial decryption is going to be, so I can simulate it. But if there are two honest parties, I know their sum, but I don't know what they are individually, so I don't know how to simulate two honest parties with all-but-two corrupted. I can simulate all-but-one corrupted, but I don't know how to simulate all-but-two; hence the general transformation. No more questions? Okay, let's thank the speakers again and conclude for the day.