Hello, and thank you for clicking on this video, where I will try to explain, in the next 20 to 25 minutes, how to achieve CCA2 security under adaptive corruptions in the standard model, without pairings, in the case of non-interactive threshold cryptography. This is joint work with Benoît Libert, Khoa Nguyen, Thomas Peters, and Moti Yung. I will first give you a brief overview of what we did, then more formal definitions of threshold cryptography, our building blocks, and the hardness assumptions. For the first construction, based on the DCR assumption, I will give the full construction; for the second, based on learning with errors, which can be seen as a thresholdized version of dual Regev, I will only give the intuition of how we did it, because of the time constraint. First, what is threshold cryptography? In this talk we will mainly focus on threshold public-key encryption (TPKE), because this is what we did. The idea of TPKE is to distribute the decryption algorithm. The sender of the message does not change her behavior at all: she has a message, encrypts it with the public key, and sends it to Bob. But Bob, instead of decrypting the ciphertext himself, sends it to n servers, which each compute a partial decryption μ_i. And this is important: they do it alone, they do not cooperate with each other; that is where the non-interactive part comes from. The combiner that Bob runs, when it is fed at least t out of n partial decryptions, is able to recover the message, hence the name threshold cryptography. Our goal was to build a threshold public-key encryption scheme satisfying four constraints. The first one is compactness: the size of the public key and ciphertexts must be independent of the number of servers. 
The second is CCA2 security, which is very similar to the security notion for non-threshold PKE, and it must be satisfied under adaptive corruptions, where the adversary can obtain any secret key share sk_i at any time. And this should be done without using pairings. If we survey what was done over the previous 20 years, which is when non-interactive threshold cryptography started to emerge, we see that this was previously not achieved, except maybe for the construction of Libert and Yung, which achieved CCA2 security under adaptive corruptions and also has compactness, but it uses pairings, which is something we would like to avoid. To give you more insight on our constructions: the ciphertext size of the first construction is about three times the size of a Camenisch–Shoup encryption, which is a verifiable encryption, not threshold at all, so that's pretty good. The second construction needs a superpolynomial modulus for its security, which is less interesting, but it is still worth mentioning because the learning-with-errors assumption is quantum-safe. Formally, a TPKE is comprised of five algorithms, which interact as in the following diagram. When we compare this to standard public-key encryption, we notice that the key generation algorithm takes the threshold t as input and generates n secret key shares instead of one secret key, but the sender's side is unchanged: Alice gets the public key, encrypts the message, and outputs the ciphertext. The receiver's side is where everything changes, because Bob now forwards the ciphertext to all servers, which each compute a partial decryption, except if they are offline or something happened to them. Bob is then able to check, with the partial verification algorithm, whether the partial decryptions are valid, and when he has enough valid partial decryptions, he can compute m′, which hopefully will be equal to m. 
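The five-algorithm interface just described can be sketched as follows, using a toy n-out-of-n threshold ElGamal purely to illustrate the data flow between sender, servers, and combiner; the prime and all parameters are illustrative and completely insecure, and the partial verification algorithm is omitted.

```python
import random

# Toy n-out-of-n threshold ElGamal over Z_p*, illustrating the five TPKE
# algorithms (key generation, encryption, partial decryption, combination;
# partial verification is omitted). Insecure toy parameters.
p = 1000003            # small prime, illustration only
g = 2

def keygen(n):
    x = random.randrange(p - 1)                     # secret key
    h = pow(g, x, p)                                # public key
    shares = [random.randrange(p - 1) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % (p - 1))      # n-out-of-n additive sharing
    return h, shares

def enc(h, m):                                      # the sender is unchanged
    r = random.randrange(p - 1)
    return pow(g, r, p), m * pow(h, r, p) % p

def partial_dec(c0, x_i):
    return pow(c0, x_i, p)                          # mu_i, computed alone

def combine(c, mus):
    c0, c1 = c
    d = 1
    for mu in mus:
        d = d * mu % p                              # product = c0^x
    return c1 * pow(d, -1, p) % p                   # c1 / c0^x = m

h, shares = keygen(5)
c = enc(h, 424242)
mus = [partial_dec(c[0], x_i) for x_i in shares]
assert combine(c, mus) == 424242
```

Note that each server exponentiates with its own share only, so no interaction between servers is needed, matching the non-interactive setting.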
It is compact if the public key and ciphertexts have size polynomial in λ, and it is correct if, as long as I have at least t valid partial decryptions, then m′ is equal to m with overwhelming probability. In this talk, I will be ignoring the partial verification algorithm. This is something that is taken care of in our full paper, but its role is somewhat apart, and due to time constraints I won't be able to explain it in detail. I can, however, explain what "adaptive" means in CCA2 security for TPKE. The idea is to take the CCA2 security game for PKE and adapt it to threshold public-key encryption: instead of having a decryption oracle, the adversary has access to a partial decryption oracle for any server. Another difference is that the adversary A, at the beginning of the game, chooses the threshold on which the key generation algorithm is run, and afterwards everything plays out as usual: it gets a public key, it chooses two messages, one of which is encrypted, and it has to guess which one was encrypted. It also has the following additional power: it can obtain any secret key share at any time. This is the adaptive corruption setting, and we also allow the adversary to make partial decryption queries for the challenge ciphertext, of course, as long as it does not trivially break the game. The definition of the advantage is pretty standard. From this, we move to the building blocks used in our constructions, the most important of which is linear integer secret sharing (LISS). At a very high level, the idea is that you have a secret s, which is an integer, and it is shared into n key shares among n parties; when t of them work together, they can recover the secret s, and if only t − 1 of them work together, they cannot. 
More formally, a LISS is given by a matrix M, parameters d and e, and a surjective map ψ from {1, …, d} to the set of servers, such that when I want to share an integer s, I choose random elements to complete s into a vector of size e, multiply it on the left by M to get a vector of length d, and give coordinate i of the resulting vector to party ψ(i). This can be very quickly generalized to the sharing of a vector s instead of just an integer: we share it into matrices whose sizes depend on the number of rows assigned by ψ to each server. And it has the following nice property: if I take any subset with at least t servers out of n, then there exists a reconstruction vector whose coefficients are small — so small that they actually lie in {−1, 0, 1} — such that if I multiply the key shares by the coordinates of the reconstruction vector and sum everything up, I find the secret back. Also important are the hardness assumptions used in our constructions. For the first one, we need the decisional composite residuosity (DCR) assumption. It states that a uniformly sampled N^ζ-th residue mod N^{ζ+1} is computationally indistinguishable from a uniform element of the invertible elements of Z_{N^{ζ+1}}. The second problem, the learning with errors (LWE) problem, is used in both constructions, and it states that if I take a random matrix A, a random secret vector s, and a small random error vector e, the distribution of (A, As + e) is indistinguishable from (A, b), where b is just a uniform vector. With all of that, we can now move to the constructions; let me first give you a bit of intuition on our first construction. 
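As a concrete, deliberately trivial LISS instance, here is a minimal sketch assuming a 3-out-of-3 additive sharing: the matrix M completes s with two random values, ψ sends row i to server i, and the reconstruction vector for the full set has all coefficients in {−1, 0, 1}.

```python
import random

# Minimal LISS instance (illustrative): 3-out-of-3 additive sharing of an
# integer s. Row i of the share-generating matrix M produces the share
# handed to server psi(i) = i.
M = [[1, -1, -1],     # share 1 = s - rho1 - rho2
     [0,  1,  0],     # share 2 = rho1
     [0,  0,  1]]     # share 3 = rho2

def share(s):
    rho = [random.randrange(-10**6, 10**6) for _ in range(2)]
    vec = [s] + rho                                   # complete s into a vector
    return [sum(a * b for a, b in zip(row, vec)) for row in M]

# Reconstruction vector for the authorized set {1,2,3}: entries in {-1,0,1}.
lam = [1, 1, 1]

shares = share(42)
assert sum(l * sh for l, sh in zip(lam, shares)) == 42
```

A real threshold LISS uses a larger matrix M so that every set of t rows spans the target vector, but the share/reconstruct mechanics are exactly these.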
It is a pairing-free adaptation of an already existing scheme, and our idea is similar to the one used in the Camenisch–Shoup verifiable encryption scheme, where they exploit the entropy of shared secret keys and build a DCR-based hash proof system. To get security, we will have ciphertexts comprised of two elements, and we need to prove that the first one is an N^ζ-th residue in Z*_{N^{ζ+1}}. This proof system is a NIZK instantiated from the Fiat–Shamir transform and correlation-intractable hash functions, and this is where the learning-with-errors assumption comes into play, because the correlation-intractable hash functions are built from learning with errors. However, the Σ-protocol that is turned by Fiat–Shamir into a NIZK is something we provide ourselves: it is a new construction of an argument satisfying the one-time simulation soundness property, which I will come back to later. To give you the full construction: the idea is to set up an RSA modulus N = pq with p, q safe primes, and to generate a common reference string for the NIZK, for the language of N^ζ-th residues mod N^{ζ+1}. Next, we set the public key, which is the pair (g_0, h), and the secret is the small x, which is used to build h. Now that we have a secret, we simply share it using the LISS, as we have learned to do, and we are done: we can output the public key and the key shares. To encrypt, one simply samples a small r, computes c_0 = g_0^{2N^ζ r} mod N^{ζ+1}, and embeds the message in the second part c_1 of the ciphertext. For decryption, it is interesting to notice that c_0^{2x} is actually h^r; it can then be removed from c_1, and we can recover the message. But before returning the ciphertext, we also compute a proof that we encrypted honestly, by proving that c_0 is indeed an N^ζ-th residue mod N^{ζ+1}, and we return the ciphertext together with the proof. 
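To make the algebra concrete, here is a minimal non-threshold sketch of the scheme's core with ζ = 1, i.e. working mod N²; the NIZK, the safe-prime structure, and the factor-2 exponent details used for threshold reconstruction are all omitted, and the toy primes are of course insecure.

```python
import random

# Simplified (non-threshold, zeta = 1) core of the DCR-based scheme:
# ElGamal/Paillier-style encryption mod N^2, toy-sized primes.
p, q = 1000003, 1000033
N = p * q
N2 = N * N

g0 = pow(12345, 2 * N, N2)                    # a 2N-th residue mod N^2
x = random.randrange(N2)                      # secret key
h = pow(g0, x, N2)                            # public key (g0, h)

def enc(m):                                   # requires m < N
    r = random.randrange(N)
    c0 = pow(g0, r, N2)
    c1 = pow(h, r, N2) * (1 + m * N) % N2     # (1+N)^m = 1 + m*N  mod N^2
    return c0, c1

def dec(c0, c1):
    w = c1 * pow(pow(c0, x, N2), -1, N2) % N2  # remove h^r = c0^x
    return (w - 1) // N                        # read m off 1 + m*N

c0, c1 = enc(123456789)
assert dec(c0, c1) == 123456789
```

The key identity is that `(1+N)^m mod N^2 = 1 + m*N`, so once `h^r` is stripped away, the message can be read off by integer division, exactly as in Paillier-style decryption.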
To compute a partial decryption, we first check the proof. If the proof is accepted, then we compute c_0 raised to 2 times each coordinate that was given to us by the LISS, and we return the set of all these partial decryptions. Finally, when we have enough of these partial decryptions, we can find a reconstruction vector and compute a big product. This product turns into a sum when you put it in the exponent, and when you do so, it is actually c_0 raised to the sum of the reconstruction-vector coefficients times the coordinates of the key shares; in the end, you should get c_0^{2x}. From this, you compute ĉ_1 = c_1 / μ̂ mod N^{ζ+1}, and you have now removed the h^r part. So, if everything was done correctly, this is just (1 + N)^m. You check that this is the case, and if everything is all right, you can recover the message. And this is it for the construction. It is of course correct and compact, because the public key is independent of the number of servers, and a ciphertext is also independent of the number of servers. It is then CCA2-secure under adaptive corruptions, under the assumptions that DCR holds and that the NIZK we use is one-time simulation-sound. One-time simulation soundness means that if an adversary sees one simulated proof for any statement of its choice, this does not help the adversary forge a proof for a non-trivial false statement. As I said, we give such an argument system under the DCR and learning-with-errors assumptions; it is actually an improvement of an already existing unbounded simulation-sound construction, but we get shorter public parameters, because instead of unbounded simulation soundness we only need one-time simulation soundness. Finally, the security proof is inspired by previous work on distributed pseudorandom functions. 
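The combination step can be sketched as follows, again with ζ = 1, a 3-out-of-3 additive sharing standing in for the LISS, and reconstruction vector λ = (1, 1, 1); the proof check and the factor-2/squaring details are omitted, and all names and parameters are illustrative.

```python
import random

# Threshold combination for the DCR-style scheme (zeta = 1): additive shares
# of x, partial decryptions mu_i = c0^{x_i}, reconstruction lam = (1,1,1).
p, q = 1000003, 1000033
N = p * q
N2 = N * N

g0 = pow(12345, 2 * N, N2)
x = random.randrange(N2)
h = pow(g0, x, N2)

x1 = random.randrange(N2)
x2 = random.randrange(N2)
shares = [x1, x2, x - x1 - x2]                 # integer shares summing to x

m = 987654321
r = random.randrange(N)
c0 = pow(g0, r, N2)
c1 = pow(h, r, N2) * (1 + m * N) % N2

mus = [pow(c0, x_i, N2) for x_i in shares]     # each server works alone
lam = [1, 1, 1]                                # reconstruction vector

mu_hat = 1
for l, mu in zip(lam, mus):
    mu_hat = mu_hat * pow(mu, l, N2) % N2      # = c0^{sum lam_i * x_i} = c0^x
c1_hat = c1 * pow(mu_hat, -1, N2) % N2         # removes h^r; equals 1 + m*N
assert (c1_hat - 1) % N == 0                   # well-formedness check
assert (c1_hat - 1) // N == m
```

The product over the μ_i is exactly the "product turned into a sum in the exponent" from the talk: multiplying the partial decryptions multiplies powers of c_0, so the exponents add up to x.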
And it exploits the fact that the secret keys still have a lot of entropy, even after every query made by the adversary, and it uses properties of the LISS — I did not give you all the properties of a LISS, only the ones useful for the constructions. To give you an idea of the proof: the DCR assumption allows you to move to a game where the challenge encryption is made using the secret key, so the message is now hidden by x mod N^ζ. However, conditioned on the adversary's view, x is Gaussian over some shift of p′q′·Z, and thanks to a lemma from Gentry, Peikert and Vaikuntanathan from 2008, the distribution of x mod N^ζ is statistically close to uniform. So it is impossible to tell which message was encrypted by looking at the challenge ciphertext. Finally, to give you a quick idea of how we build threshold public-key encryption from dual Regev, we have the following remarks. First, if you remember how dual Regev works, the secret key is a random matrix R, and the public key is two matrices A and U, where U = A·R. Even conditioned on U and A, the secret key R still has a lot of entropy, which is important for us. As you may have guessed, we share each column of the secret key using a LISS scheme. Then, when someone wants to encrypt, they encrypt as in the non-threshold dual Regev. However, when you compute a partial decryption, you do not have the full secret, only a share of it; you still use your share as you would use the secret key, but at the end of your partial decryption you need to add some noise — to do some noise flooding — to avoid leaking too much information on your secret key share. And this is where the superpolynomial modulus comes from. Once more, our security proof follows ideas from the distributed PRFs paper. And this is almost done. 
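Here is a toy sketch of this thresholdized dual Regev for a single bit, assuming a 3-out-of-3 additive sharing of one secret column and illustrative parameters; a real instantiation needs a superpolynomial modulus for the flooding argument to go through, plus the simulation-sound argument discussed next.

```python
import numpy as np

# Toy thresholdized dual Regev bit encryption with noise flooding.
# Parameters are illustrative only, not secure.
rng = np.random.default_rng(0)
q, n, m = 2**20, 16, 64

A = rng.integers(0, q, size=(n, m))           # public matrix
r = rng.integers(0, 2, size=m)                # one secret column of R (binary)
u = A @ r % q                                 # public syndrome u = A r

r1 = rng.integers(-5, 6, size=m)              # additive integer sharing of r
r2 = rng.integers(-5, 6, size=m)
key_shares = [r1, r2, r - r1 - r2]

def enc(bit):                                 # standard dual Regev encryption
    s = rng.integers(0, q, size=n)
    e = rng.integers(-1, 2, size=m)           # small LWE error
    c0 = (A.T @ s + e) % q
    c1 = (int(u @ s) + int(rng.integers(-1, 2)) + bit * (q // 2)) % q
    return c0, c1

def partial_dec(c0, r_i):
    flood = int(rng.integers(-2**10, 2**10))  # noise flooding hides r_i
    return (int(r_i @ c0) + flood) % q

def combine(c1, mus):
    v = (c1 - sum(mus)) % q                   # = bit*(q/2) + small error mod q
    return round(v / (q / 2)) % 2

c0, c1 = enc(1)
assert combine(c1, [partial_dec(c0, sh) for sh in key_shares]) == 1
c0, c1 = enc(0)
assert combine(c1, [partial_dec(c0, sh) for sh in key_shares]) == 0
```

Each server uses only its share r_i exactly as it would use the full secret, and the flooding noise dominates the share-dependent rounding error; correctness survives because the total noise stays well below q/4.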
We are almost done, except that we need CCA2 security, which is something that dual Regev does not satisfy, even in the non-threshold case. So we add a simulation-sound argument that the ciphertexts are well formed, meaning that they are of the following form; this is an argument system which already exists, and we build on top of it. To quickly conclude on this construction, we still have an open problem: how do we avoid noise flooding and get to use a polynomial modulus, while keeping compact ciphertexts? This is it for this video presentation. I hope you liked it. Thank you for listening, and see you later.