Thanks again for introducing me. I'm going to talk about threshold cryptosystems from threshold fully homomorphic encryption. Let me first motivate the problem by introducing two characters. The first is Tony Stark, who plays the role of the good guy; the other is Thanos, who is adversarial and very powerful. Consider the following motivating example. Suppose Tony wants to send some messages over a channel, and suppose he maintains no private information whatsoever. In this setting it's of course pointless to ask for any kind of security, so some private information is essential. Now say he generates a key for an encryption scheme and uses it to encrypt the messages before sending them over the channel. Suddenly Thanos finds himself in trouble, because he can no longer understand what Tony is saying. But as we all know, key management is a hard problem, prone to all sorts of attacks such as side-channel leaks, social hacking, and human error. So if Thanos gets access to Tony's key, he can understand everything and is happy all over again, okay? This is the main question we ask in this paper: can we address this issue at a more basic and fundamental level? This area is that of threshold cryptography. Let's first consider a naive solution. Can we divide the key K into shares S1 through SN and store them separately? That is, take the key, apply a secret sharing scheme, and store the shares across servers. But note that such a naive way of secret sharing is not going to lead you anywhere, because Tony has to reconstruct the key from the shares in order to do anything meaningful with it.
As an example, if Tony wants to generate a signature, he would like to place partial signing keys on these servers, have the servers issue some sort of partial signatures, and then combine those into a full signature. More formally, we require a notion of correctness: each server can independently compute on its share to generate F of SI, and these evaluation shares can be publicly combined to recover the final computation F of K. For security, we want that it should be hard to compute the final result F of K without knowing all the key shares. But in cryptography we study more fine-grained access patterns, and the classical, historical example is the threshold, or T-out-of-N, access structure. Here correctness says that each server can independently compute F of SI on its share, and any T evaluation shares can later be publicly combined to recover the final computation. Security now says that it should still be hard to compute F of K without knowing T key shares. In this setting, if Thanos suddenly snaps and corrupts half of the servers on Earth, he should still be in trouble and unable to break security. Let's look at this concretely with threshold signatures. In this application Tony wants a signature on a message. Suppose he has already secret shared his signing key among N servers. He should be able to relay the message to the servers, get back partial signatures, and combine them to recover the final signature. As with all signature schemes, he would require various properties such as unforgeability, compactness, correctness, robustness, et cetera. Another example is threshold public-key encryption.
Say Tony has secret shared partial decryption keys among the servers and he receives a ciphertext. He should be able to relay the ciphertext to the servers, get back partial decryptions, and combine them to learn the message. As with all encryption schemes, one might ask for various requirements such as CCA security, compactness, correctness, robustness, and so on. Let me emphasize that there has been a huge amount of work in this area: work on RSA signatures, Schnorr signatures, ECDSA signatures, BLS signatures, Cramer-Shoup encryption, and many others. But most of these works have focused on specific primitives, and here we would like to focus on a general framework that captures all of them. So let me summarize our results. We construct threshold FHE; we formalize the notion of a universal thresholdizer; we show that the universal thresholdizer can be used as a general tool to construct threshold cryptosystems; and we construct the universal thresholdizer from threshold FHE. This immediately gives rise to all sorts of threshold cryptosystems, such as threshold signatures, CCA-secure threshold PKE, distributed PRFs, and function secret sharing, all from LWE. Many of these notions were not previously known from LWE. So let's recall what threshold fully homomorphic encryption is, and even before that, what fully homomorphic encryption is. In homomorphic encryption you have four algorithms. There's a setup algorithm which takes the security parameter and outputs a key pair: a public key and a secret key.
Using the public key you can encrypt any message to get a ciphertext. The evaluate algorithm takes the public key, a circuit, and a bunch of ciphertexts and outputs an evaluated ciphertext, which you can decrypt with the secret key to recover the evaluated message. We want this evaluated message to equal the circuit applied to M1 through MK, where CTI encrypts message MI. Now let's move on to threshold fully homomorphic encryption; I'll just tell you what modifications we make to the syntax. Setup now additionally takes N and T, where N is the number of parties and T is the threshold, and instead of a single secret key it outputs a public key and N partial decryption keys. You can encrypt any message M using the public key as before, and you can evaluate as before using the public key, a circuit C, and ciphertexts CT1 through CTK. But here decryption runs in two phases. The first phase is partial decryption, essentially a decentralized version of decryption: it takes the partial decryption key of some party I and the evaluated ciphertext and outputs a partial decryption. Then the final decryption algorithm combines these partial decryptions to recover the evaluated message. We require correctness as in FHE: if I take the partial decryptions of some ciphertext corresponding to any set of size T, I always recover the correct output, that is, C of M1 through MK. There is also an efficiency requirement called compactness, which says that the sizes of the public key, the ciphertext, and the partial decryptions should not grow too much; they should stay small. Now let's look at the security notions for this primitive.
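To make the two-phase decryption syntax concrete, here is a toy N-out-of-N threshold decryption of a scalar LWE-style ciphertext. Everything here (the parameters, the additive sharing, the rounding) is illustrative only and is not the scheme from the paper; it just shows partial decryptions being computed independently and then combined.

```python
# Toy N-out-of-N threshold decryption for a scalar LWE-style ciphertext.
# All names and parameters are illustrative, not from the paper.
import random

q = 2**16 + 1          # modulus
N = 5                  # number of parties

def keygen():
    s = random.randrange(q)                       # "secret key"
    shares = [random.randrange(q) for _ in range(N - 1)]
    shares.append((s - sum(shares)) % q)          # additive N-out-of-N sharing
    return s, shares

def encrypt(s, m):                                # m in {0, 1}
    a = random.randrange(q)
    e = random.randrange(-4, 5)                   # small noise
    b = (a * s + e + m * (q // 2)) % q
    return (a, b)

def partial_dec(share, ct):
    a, _ = ct
    return (a * share) % q                        # linear in the key share

def final_dec(ct, partials):
    _, b = ct
    v = (b - sum(partials)) % q                   # = m*(q//2) + e mod q
    return round(v / (q / 2)) % 2                 # round away the noise

s, shares = keygen()
ct = encrypt(s, 1)
partials = [partial_dec(sh, ct) for sh in shares]
print(final_dec(ct, partials))   # 1
```

The key point the sketch illustrates is that each party touches only its own share, and combination is public.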
There are two notions. The first, more intuitive one is semantic security, which says that if I'm given fewer than T secret keys, then an encryption of any message M0 should be indistinguishable from an encryption of any message M1. The second is more technical and says that partial decryptions should not leak too much information about the ciphertext. The way to capture this is by simulation: the partial decryptions can be simulated knowing any set of T minus one secret keys and the message encrypted by the ciphertext. How do we construct this? Our starting point is the GSW FHE scheme, and let me recall the properties that we use. In GSW, a ciphertext is an L-by-L matrix with entries zero and one, and the secret key S is a vector in ZQ to the L, where Q is some modulus. The secret key has the following structure: the first L minus one components are random, whereas the last component is the floor of Q over two. The GSW scheme satisfies what is called the approximate eigenvector property: for any ciphertext CT, when you multiply S with CT, what you get is the message times the secret key plus a noise term, where the message is what was encrypted inside the ciphertext and the noise is just a small vector. Now observe that when you compute CT1 plus CT2 and multiply it with S, you get M1 plus M2 times S plus noise, so CT1 plus CT2 acts as homomorphic addition. Similarly, if you compute CT1 times CT2, you observe that S times CT1 times CT2 is just M1 times M2 times S plus noise, so CT1 times CT2 can be treated as homomorphic multiplication of the two ciphertexts, okay?
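The approximate eigenvector relation and homomorphic addition can be seen in a few lines of code. This is a toy model only: it builds a matrix C satisfying S times C equals M times S plus noise by solving for the last row (possible because the last key component, floor of Q over two, is invertible mod a prime Q), and it ignores the 0/1 gadget structure real GSW needs for multiplication, so only addition is demonstrated.

```python
import random

q = 2**31 - 1          # prime modulus, so q//2 is invertible mod q
L = 4

# Secret key: first L-1 entries random, last entry floor(q/2), as in GSW
s = [random.randrange(q) for _ in range(L - 1)] + [q // 2]
inv_last = pow(q // 2, -1, q)

def encrypt(m):
    """Toy ciphertext with s @ C = m*s + e (mod q). Entries of C are NOT
    restricted to {0,1} as in real GSW; this only models the
    approximate-eigenvector relation."""
    C = [[random.randrange(q) for _ in range(L)] for _ in range(L - 1)]
    e = [random.randrange(-8, 9) for _ in range(L)]     # small noise
    last = []
    for j in range(L):
        partial = sum(s[i] * C[i][j] for i in range(L - 1)) % q
        target = (m * s[j] + e[j]) % q
        last.append(((target - partial) * inv_last) % q)
    return C + [last]

def apply_key(C):                       # the row vector s @ C (mod q)
    return [sum(s[i] * C[i][j] for i in range(L)) % q for j in range(L)]

def centered(x):                        # representative in (-q/2, q/2]
    return x - q if x > q // 2 else x

C1, C2 = encrypt(2), encrypt(3)
Cadd = [[(C1[i][j] + C2[i][j]) % q for j in range(L)] for i in range(L)]
noise = [centered((v - 5 * sj) % q) for v, sj in zip(apply_key(Cadd), s)]
print(max(abs(n) for n in noise))   # small: s @ (C1+C2) = 5*s + noise
```

The noise of the sum is just the sum of the two noise vectors, which is why addition is "free" and only multiplication forces the 0/1 ciphertext entries in the real scheme.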
Given this, we can define decryption as follows: compute S times CT times the public vector (0, ..., 0, 1) transpose, which gives M times Q over two plus noise. Note that decryption is linear in S. This immediately suggests an initial idea: maybe we can secret share the secret key S into N shares and use those to compute partial decryptions. So we share the secret key into S1 through SN, and recall that in Shamir secret sharing, for any set of size T there exist Lagrange coefficients such that the secret S is a lambda-linear combination of the shares. The initial idea is to define the partial decryption of party I as SI times CT times (0, ..., 0, 1) transpose, and the final decryption as the lambda-linear combination of these partial decryptions. When you simplify, you get S times CT times (0, ..., 0, 1) transpose, which is M times Q over two plus noise. It seems like we are fine; we get correctness. But we know that FHE decryption should reveal the message and nothing more, and in fact revealing this noise turns out to be fatal; it leads to concrete attacks. So how do we hide the FHE noise? The idea is to perturb the partial decryptions with some additional noise, in the hope that it will smudge out the FHE noise. The final decryption is again the lambda-linear combination of the partial decryptions, but when you evaluate it, what you get is S times CT times (0, ..., 0, 1) transpose plus a lambda-linear combination of the added noise terms. Now we run into correctness issues, because this lambda combination of noise is just too big: the Lagrange coefficients can be huge, so correctness is lost. So we ask: how do we fix this noise blow-up? We proceed by defining new linear secret sharing schemes with low-norm reconstruction coefficients.
There are two ways in which we address this issue. The first is a more general-purpose secret sharing scheme supporting broader access patterns; the second is a more direct modification of Shamir secret sharing, which leads to a more efficient scheme with shorter keys but slightly larger ciphertexts. Let me tell you more about the first approach: constructing a new linear secret sharing scheme with low-norm reconstruction coefficients. Recall what a linear secret sharing scheme is. It consists of the following algorithms: you can share any secret K under any access structure to get N shares, and you can combine a set of these shares as follows: if the set S is qualified, then there exist efficiently computable coefficients CI such that the secret is the C-linear combination of the shares. Now we define {0,1}-LSS as the class of linear secret sharing schemes where the reconstruction coefficients are always binary. One might ask how expressive {0,1}-LSS is. It turns out it captures at least the class of monotone Boolean formulas. Let me walk through an example circuit. Suppose I want to secret share a secret K. The first gate is an AND gate, and I want K to be reconstructable only if you have the shares on both of its input wires, so I assign a random value R1 to one wire and K minus R1 to the other. The next gate we encounter is an OR gate, which carries the value R1; here I want R1 to be reconstructable from either input wire, so I copy R1 onto both wires. The third gate is another AND gate, carrying K minus R1; as before, I assign a fresh random R2 to one wire and K minus R1 minus R2 to the other. This way we end up with four shares, and note that the secret is now just the sum of some subset of the shares.
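The gate-by-gate sharing above can be written as a short recursion; this sketch (variable names and the formula encoding are mine, not the paper's) walks a monotone formula, splitting the value additively at AND gates and copying it at OR gates, so reconstruction is a plain subset sum, i.e. coefficients in {0, 1}.

```python
# {0,1}-LSS sketch: additively share a secret down a monotone formula.
import random

q = 2**31 - 1

def share(node, v, out):
    kind = node[0]
    if kind == "leaf":
        out.setdefault(node[1], []).append(v)
    elif kind == "and":                      # split v additively
        r = random.randrange(q)
        share(node[1], r, out)
        share(node[2], (v - r) % q, out)
    elif kind == "or":                       # copy v to both children
        share(node[1], v, out)
        share(node[2], v, out)

# The circuit from the example: AND(OR(w1, w2), AND(w3, w4))
f = ("and", ("or", ("leaf", 1), ("leaf", 2)),
            ("and", ("leaf", 3), ("leaf", 4)))
secret = 1234
shares = {}
share(f, secret, shares)
# {w1, w3, w4} satisfies the formula; a plain subset sum recovers the secret
rec = (sum(shares[1]) + sum(shares[3]) + sum(shares[4])) % q
print(rec == secret)   # True
```

Because every reconstruction coefficient is 0 or 1, the smudging noise is only ever summed, never scaled, which is what restores correctness in the threshold FHE decryption.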
So to answer the question: {0,1}-LSS contains at least the class of monotone Boolean formulas, and Valiant showed in '84 that threshold functions can be expressed by polynomial-size monotone Boolean formulas. Again, in {0,1}-LSS the reconstruction coefficients are either zero or one. Let's now use this idea to fix our problem. We had defined the partial decryptions as SI times CT times (0, ..., 0, 1) transpose plus noise, and the final decryption as their lambda-linear combination. When you evaluate this, the error term is the noise times the coefficients lambda I, but since these lambdas are now binary, you get M times Q over two plus noise plus a small term. So this does fix the correctness issue. However, it needs a careful security analysis which I'm not going to talk about; I refer you to the paper for the details. Now let me quickly tell you about the more direct approach that we pursue, known as the clearing-the-denominators trick. The basic idea is the following. Any Lagrange coefficient lambda is a ratio of products of bounded integers, computed over ZQ. When you multiply this Lagrange coefficient by N factorial, the denominators cancel and what you get is a bounded integer, and you can show the same thing for lambda inverse. That is the basic idea; we use it to build the scheme, and I will again refer you to the paper for details. Here is a short comparison of the two schemes. The first scheme is based on {0,1}-LSS. There the ciphertext size is poly in lambda and D, where lambda is the security parameter and D is the depth. The size of a key share and of a partial decryption is N to the 4.2 times poly of lambda and D, where N is the number of parties; that bound is for threshold access structures, and since the scheme is more general, for other, simpler access structures the shares might be even smaller. In the second scheme, the size of the ciphertext and the public key is N times poly of lambda and D.
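The clearing-the-denominators observation is easy to check with exact rational arithmetic: over the rationals, each Lagrange coefficient has a denominator dividing N factorial, so N factorial times the coefficient is always a bounded integer. A quick check (the subset S is an arbitrary example of mine):

```python
# N! times any Lagrange coefficient is an integer of bounded size.
from fractions import Fraction
from math import factorial

n = 10
S = [1, 4, 6, 7, 9]                 # any qualified subset of {1..n}

def lagrange_rational(i, S):
    """lambda_i = prod_{j != i} (0 - j) / (i - j), computed exactly."""
    lam = Fraction(1)
    for j in S:
        if j != i:
            lam *= Fraction(-j, i - j)
    return lam

scaled = {i: factorial(n) * lagrange_rational(i, S) for i in S}
print(all(v.denominator == 1 for v in scaled.values()))   # True
```

So instead of working with the huge mod-Q reductions, the scheme can work with these integer multiples and absorb the fixed N factorial factor elsewhere.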
The size of a key share and of a partial decryption is also N times poly of lambda and D; however, this scheme works only for threshold access structures. Now, given this threshold FHE, how can we build threshold signatures, for example? Suppose Tony wants to sign. He runs the setup for the threshold FHE, gives the partial decryption keys to the servers, and gives an encryption of his signing key to all of them. Using the encryption of the signing key, the servers can homomorphically compute an encryption of the signature on the message he wants signed. Once they have this encryption, the servers provide partial decryptions of it, which Tony can combine to get the signature. This is exactly the idea; in fact, we build the notion of a universal thresholdizer by capturing it. There are more details, about robustness and so on, which I am not going to cover in this talk; let me just say that the scheme directly uses threshold FHE along the lines I just described, and I will skip the details. Let me summarize our results again. We construct threshold FHE; we formalize the notion of a universal thresholdizer; we show that a universal thresholdizer is a general tool for constructing threshold cryptosystems; and we construct a universal thresholdizer from threshold FHE. This gives rise to various threshold cryptosystems, many of which were not known from lattices before our work. Also, although we use very simple techniques, these techniques have found use in later follow-up works. First I will talk about the notion of lazy MPC. This is an MPC model where honest parties can simply go to sleep: they might be limited in computing power, they might lose their connections, and for all sorts of reasons they can just go to sleep, and they are treated differently from corrupted parties.
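The threshold-signature flow above can be sketched as data flow. Below, a trivially insecure mock (identity "encryption", a hash in place of a real signature) stands in for the real threshold FHE purely to show the pipeline: servers homomorphically turn Enc(signing key) into Enc(signature), then contribute partial decryptions that combine to the signature. Nothing here is the paper's construction.

```python
# Data-flow sketch of threshold signatures from threshold FHE.
# MockTFHE is a placeholder with no security whatsoever.
import hashlib

def sign(sk_sig, m):            # stand-in deterministic "signature"
    return hashlib.sha256((sk_sig + m).encode()).hexdigest()

class MockTFHE:
    def encrypt(self, x): return ("ct", x)
    def eval(self, f, ct): return ("ct", f(ct[1]))     # homomorphic eval
    def partial_dec(self, i, ct): return (i, ct[1])    # party i's share
    def combine(self, parts): return parts[0][1]       # public combination

tfhe = MockTFHE()
ct_key = tfhe.encrypt("tonys-signing-key")             # sent to all servers
m = "avengers assemble"
ct_sig = tfhe.eval(lambda sk: sign(sk, m), ct_key)     # each server computes
parts = [tfhe.partial_dec(i, ct_sig) for i in range(3)]
print(tfhe.combine(parts) == sign("tonys-signing-key", m))   # True
```

In the real scheme the partial decryptions reveal nothing individually and only a qualified set combines to the signature; the mock only illustrates which party computes what.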
In this setting we want certain correctness as well as security requirements. It turns out that this model has a nice theoretical outcome: it leads to the first MPC with guaranteed output delivery in the standard model in three rounds. This work is concurrent with a work of Ananth et al.; however, their focus is completely different and their assumptions are also completely different. Threshold FHE as such has also found use in things like amplifying security. In particular, it turns out that using threshold FHE, given any IO candidate with weak security, you can obtain a fully secure candidate; this idea appeared in recent work. Okay, let me end with some open problems. We use FHE as a tool to build the universal thresholdizer, but can we build a universal thresholdizer without relying on the heavy machinery of FHE? Put another way, can we give a more efficient construction, maybe for simpler classes of functionalities? Are there more applications of threshold FHE and the universal thresholdizer? And another interesting question: can we get better assumptions? In particular, can we get this from LWE with a polynomial approximation factor? With this I would like to conclude, and if you want to ask questions, please feel free.