Hello, my name is Thomas Prest, and this is joint work with Shuichi Katsumata, Kris Kwiatkowski, and Federico Pintore. The main question we are concerned with in this work is: how efficiently can we share one session key between n+1 users? The use case that originally motivated this work is secure group messaging. If we consider n+1 users who wish to communicate together in one group, a simple way to realize that is for all of them to share a session key that they can use to encrypt messages to each other with symmetric-key encryption. But first, they need to agree on this key. One naive solution is that, if all of them have a public key, one of the users generates a session key, encrypts it to each of the other users using a public-key encryption scheme, and broadcasts all of the ciphertexts. If we do that with ElGamal, since each ciphertext consists of two group elements, the sender broadcasts 2n elements. In 2002, Kurosawa made a nice algorithmic observation: in ElGamal you can do a form of randomness reuse, which was the term used at the time. If you consider the left part of the ciphertext, g^r, you can see that it depends neither on the public key nor on the key being encrypted. So we can reuse the same left part for all the users, and it is only the right part that scales with the number of users. In this example, that amounts to sending g^r, then pk_i^r · k for each user i. Instead of sending 2n elements, the sender only needs to broadcast n+1 elements, which asymptotically saves a factor of 2. In terms of terminology, besides randomness reuse we have also seen the term ciphertext compression, as well as names at a more syntactic level that describe the security notion we are trying to realize.
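The randomness-reuse trick can be sketched in a few lines of Python. This is a toy multiplicative group with illustrative parameters far too small for real security; the function names are mine, not from the talk.

```python
import secrets

# Toy group: a small prime p with generator g (illustrative only, not secure).
p = 2579          # prime modulus
g = 2             # generator

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

def mseal(pks, k):
    """Kurosawa-style randomness reuse: one shared left part g^r,
    plus one right part pk_i^r * k per recipient -> n+1 group elements
    instead of the 2n elements of independent ElGamal ciphertexts."""
    r = secrets.randbelow(p - 2) + 1
    ct0 = pow(g, r, p)                           # independent of pk and k
    cts = [pow(pk, r, p) * k % p for pk in pks]  # scales with the users
    return ct0, cts

def mopen(ct0, ct_i, sk):
    # k = ct_i / ct0^sk  (division = multiplication by the modular inverse)
    return ct_i * pow(pow(ct0, sk, p), -1, p) % p

keys = [keygen() for _ in range(5)]
k = 1234                                          # session key as a group element
ct0, cts = mseal([pk for _, pk in keys], k)
assert all(mopen(ct0, c, sk) == k for (sk, _), c in zip(keys, cts))
```

The point of the sketch is simply that `ct0` is computed once, while only the list `cts` grows with the number of recipients.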
We have seen the terms multi-recipient KEM and multi-recipient PKE, aka mKEM and mPKE. Since then, there have been a lot of papers considering this problem, but to the best of our knowledge there is almost no post-quantum proposal. We have seen one paper proposing a construction from the LPN problem, but we also found that this construction is broken. So as far as we know, there is no post-quantum secure proposal. This work revisits the notions of mPKE and mKEM. A first contribution is to propose a more natural definition. It is more natural than the previous ones in the sense that it captures the classical assumptions captured by the previous definitions, but it also captures post-quantum assumptions. In addition, this new definition is amenable to an efficient security proof in the quantum random oracle model (QROM), and we give this QROM security proof. In the second part of this work, we instantiate our constructions from post-quantum assumptions: from lattices and from isogenies. Something that was a nice surprise to us is the efficiency gain we obtain: a communication cost that is asymptotically one to two orders of magnitude smaller than what we had previously. Finally, we apply our techniques to TreeKEM. TreeKEM is part of the MLS protocol, a protocol for secure group messaging, and it is the bottleneck of MLS. We observed that there is a simple but powerful interplay between the notion of mKEM and TreeKEM, and that combining the two allows us to divide the communication cost by two. All right. So first, I will talk about revisiting mPKEs and mKEMs, starting with the abstract assumption which, once realized, allows us to construct mKEMs and mPKEs.
If you look at previous work, for example the work by Bellare, Boldyreva, and Staddon, the notion they put forward is full reproducibility, at the top of this slide. Consider a ciphertext that encrypts a message m under the public key pk1. Full reproducibility assumes there is a polynomial-time algorithm that takes as input this ciphertext, the public key pk1, a second public key pk2, the private key sk2 associated to pk2, and a second message m', and transforms the ciphertext into a ciphertext of m' under pk2. In this work, we put forward an assumption that, at least in our eyes, is simpler and more natural. We say that an encryption scheme is decomposable if its ciphertexts can be split into two parts: the first part, ct0, depends only on a random coin r0, so it depends neither on the public key nor on the message; the second part, which we call ct_i-hat, depends on the public key pk_i, on the message, on the randomness r0, and on a second chunk of randomness r_i. If we look at ElGamal, it satisfies full reproducibility as well as decomposability. Given a ciphertext (g^r, pk1^r · m), in the example I have shown, this ciphertext can be transformed to encrypt any message under any public key for which we know the private key; and it is also decomposable, because the left part depends neither on the public key nor on the message, while the right part does. How do we exploit that to achieve an mKEM or mPKE? A ciphertext for n recipients will be ct = (ct0, (ct_i-hat)_i), containing ct0 and then one ct_i-hat for each i. So the right part of the ciphertext scales with the number of users, while the left part remains constant. Key generation and decryption remain the same.
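In code, decomposability just says that multi-recipient encryption factors through two functions. This is a hypothetical interface sketch of my own; `enc0`, `enc1`, and `sample_coins` are stand-ins for the two halves of a decomposable scheme, not names from the talk.

```python
from typing import Any, Callable

def multi_encrypt(enc0: Callable, enc1: Callable, sample_coins: Callable,
                  pks: list, m: Any):
    """Multi-recipient encryption from a decomposable PKE:
    ct0 = Enc0(r0) is independent of pk and m; each recipient i gets
    ct_i_hat = Enc1(pk_i, m, r0, r_i)."""
    r0 = sample_coins()
    ct0 = enc0(r0)                                         # shared part
    cts = [enc1(pk, m, r0, sample_coins()) for pk in pks]  # per-recipient parts
    return ct0, cts
```

Any scheme that fits this shape, like the ElGamal split above, immediately yields a multi-recipient ciphertext whose shared part is constant-size.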
The second contribution is a generic transformation which takes as input an IND-CPA-secure mPKE and transforms it into an IND-CCA-secure mKEM. At the technical level we don't propose any new technique; rather, what made our life simpler is that previous works took as input an IND-CPA PKE that was neither assumed to be decomposable nor assumed to be an mPKE. So we start from a somewhat stronger assumption, but something that is really nice for us is that this assumption is actually satisfied by several PKEs, as we will show in the rest of this talk. It is a very natural assumption to start with. Our transform is simply a variation, or a generalization, of the Fujisaki-Okamoto transform: we generate a random message m and use it to derive the random coins for ct0. Each ct_i-hat is then computed from pk_i, the message m, G1(m), which is a random coin derived from m, and G2(pk_i, m), which is a random coin derived from pk_i and m. The session key is the hash of m. For decapsulation, it does the same as the Fujisaki-Okamoto transform: you decrypt, then re-encrypt, and you check that the ciphertext you received is equal to the ciphertext you recomputed; if not, you return bottom. Something to take into account is that the time of this re-encryption should not be linear in the number of users, because then you would lose a lot of efficiency. A simple way to achieve that is that, in the same way that we have a decomposable IND-CPA mPKE, we have a decomposable transform: here you can see that ct0 depends only on the message, and it is only ct_i-hat that also depends on the public key.
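A minimal sketch of this FO-style transform, instantiated with the toy decomposable ElGamal split from earlier. In this toy, ct_i-hat is deterministic once r0 is fixed, so the per-recipient coins G2(pk_i, m) play no role here; the hash labels and tiny parameters are illustrative assumptions, not the paper's construction.

```python
import hashlib, secrets

p, g = 2579, 2   # toy group (not secure)
H = lambda *xs: int.from_bytes(hashlib.sha256(repr(xs).encode()).digest(), "big")

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

def encaps(pks):
    m  = secrets.randbelow(p - 1) + 1             # random message
    r0 = H("G1", m) % (p - 2) + 1                 # coins for ct0 derived from m
    ct0 = pow(g, r0, p)                           # shared part, computed once
    cts = [pow(pk, r0, p) * m % p for pk in pks]  # one ct_i-hat per recipient
    return H("key", m), (ct0, cts)                # session key = hash of m

def decaps(sk, pk, ct0, ct_i):
    m  = ct_i * pow(pow(ct0, sk, p), -1, p) % p   # decrypt
    r0 = H("G1", m) % (p - 2) + 1
    # Re-encrypt ONLY our own component: cost independent of the user count.
    if pow(g, r0, p) != ct0 or pow(pk, r0, p) * m % p != ct_i:
        return None                               # explicit rejection (bottom)
    return H("key", m)
```

Note that `decaps` touches only `ct0` and the single `ct_i` relevant to this recipient, which is exactly why the re-encryption check stays constant-time in the number of users.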
This is nice because when we do decapsulation and re-encryption, ct0 does not depend on the public key, and ct_i-hat depends only on the public key pk_i, not on the other public keys. This means that when you receive the ciphertext, you only need the part of the ciphertext that is relevant to your public key, which makes the whole algorithm faster. In terms of the security proof, we use classical existing techniques, in particular the compressed oracles introduced by Zhandry. In this example we achieve explicit rejection, but we can achieve implicit rejection as well. All right, so now I am going to show how we can instantiate a decomposable mPKE from post-quantum assumptions. First, let's look at the Lindner-Peikert framework; I am going to assume that the audience is familiar with LWE. In key generation, we generate LWE samples. Then, in the encryption procedure, we generate an LWE sample u — or rather, (A, u) is an LWE sample — and we generate v, which will contain the message m. This framework is very generic and encompasses several schemes, such as the NIST Round 3 finalists and alternate candidates FrodoKEM, NTRU LPRime, Kyber, and Saber. What is interesting for us is that the Lindner-Peikert framework is actually decomposable. One mild restriction we have to make in order to get this is that the public matrix A should be the same for all public keys. This is not what is done in the schemes I mentioned, but you can do it and the security remains the same. When you do that, u becomes independent of pk and m: see, here u is equal to rA + e'. That means the scheme is decomposable, and you can apply the generic framework described previously to obtain a multi-recipient encryption scheme.
So here you can see that you compute u only once, and it is only the part v that you repeat n times for your n users. In terms of efficiency, a nice surprise is that each v_i is much smaller and faster to compute than u, for at least two reasons: first, it has smaller dimensions; and second, you do some bit dropping on v_i — you drop the least significant bits of each coefficient of v_i — which again is nice for efficiency. In terms of security, the security of this variant reduces to the original variant of LWE, in the sense that it is LWE with many samples. This differs from the variant of LWE that the original schemes I described rely on, because they rely on LWE with a limited number of samples, but it is still a very standard version of LWE. We also considered the SIDH scheme and its successor SIKE. Again, I will assume some familiarity with isogeny-based cryptography. E is an elliptic curve, and we consider torsion subgroups generated respectively by (P_A, Q_A) and (P_B, Q_B). The encryption procedure is what is most interesting for us. We sample an isogeny φ with kernel generated by R_A, and we generate ct0, which consists of E/⟨R_A⟩ together with φ(P_B) and φ(Q_B). For the second part of the ciphertext, we compute the j-invariant of E/⟨R_A, R_B⟩ and call it j. This j can actually be computed from the public key and from the ephemeral secret isogeny φ, and ct-hat is then j ⊕ m. Again, what is nice here is the decomposable flavor: ct0 does not depend on the public key, only on some random coin. So we can apply the same framework as before and obtain this scheme.
We show in our paper that the security of this scheme reduces to a decisional variant of the SIDH problem, which was introduced in a paper by De Feo, Jao, and Plût. Finally, we also applied our framework to CSIDH, but that is in the paper and not in this presentation. All right, so now in terms of impact: we studied the communication cost of using the initial schemes versus our mKEM variants, and we found that our mKEM variants are significantly and consistently more efficient than the regular schemes. In the case of CSIDH, if you use it for a group key exchange as I described in the beginning, with a large number of users, the cost per user is 80 bytes, whereas with the mKEM variant the amortized communication cost per user is 16 bytes. For SIKE the gain is even more impressive, because the mKEM is 20 times more efficient in the compressed setting. And the most impressive case, in our opinion, is FrodoKEM, where we get a gain of a factor of more than 60. All of this is for NIST level 1, which is 128 bits of security; in the case of FrodoKEM, for example, the gains are even higher at higher security levels. All right, as a final application, I will show you how to apply mKEMs to TreeKEM. TreeKEM is a sub-protocol of MLS, an IETF draft protocol for group messaging, and it is a particularly important sub-protocol because it is the bottleneck of MLS in terms of communication and computation. The reason is that when you use TreeKEM, you send and receive a lot of public-key material. Let me show you why. Consider N users of a group who wish to communicate together. These N users are arranged as the leaves of a binary tree, as you can see.
Every single node of this tree, including the leaves and the root, has associated to it a public/private key pair. All the users know the public keys, but the TreeKEM invariant, which is very important for the security properties of TreeKEM, states that a user knows the private key of a node if and only if that node is on its path to the root. For example, the user at the bottom left knows only the private keys shown in orange on the figure. An important operation in the context of TreeKEM is the update: a user whose private state has been compromised refreshes all its private key material and then broadcasts the relevant information in order to maintain the TreeKEM invariant. What it broadcasts is an update package that contains, for each level of the tree, one public key and one ciphertext. More precisely, it contains one public key for each node on its path except the root — everyone knows the private key of the root, so it makes no sense to broadcast a public key for the root — and one ciphertext for each node on its copath. The ciphertext for a node encapsulates the seed that allows one to generate the key pair of its parent. Note also that when a user refreshes, the private key of a refreshed node is used as input to a PRG, and the output of the PRG gives the private key of its parent. This means that once you know the private key of a refreshed node, you know the private keys of all its ancestor nodes. Something we tried is to use an m-ary tree instead of a binary tree. If we do that naively, we send log_m(N) public keys and (m−1)·log_m(N) ciphertexts, so at first sight it is not obvious that this gives us a gain.
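The path/copath bookkeeping behind an update package can be sketched with heap indexing on a complete binary tree. This is a hypothetical helper of my own (indices only, no cryptography): root is node 1, and the children of node v are 2v and 2v+1.

```python
def update_package(n_leaves, leaf):
    """For a complete binary tree with n_leaves leaves (a power of two),
    return the nodes of the update package for the given leaf:
    - path:   nodes getting a fresh public key (leaf up to, but excluding, the root)
    - copath: sibling of each path node, each receiving one ciphertext
      that encapsulates the seed for the parent's new key pair."""
    v = n_leaves + leaf          # heap index of the leaf
    path, copath = [], []
    while v > 1:                 # stop before the root: no pk is sent for it
        path.append(v)
        copath.append(v ^ 1)     # XOR with 1 flips to the sibling
        v //= 2
    return path, copath
```

Both lists have length log2(N), which is exactly why an update package contains one public key and one ciphertext per level.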
However, something very interesting is that all ciphertexts at the same level encapsulate the same key. For example, the ciphertexts at the bottom left all encapsulate the private key of their parent. This is interesting because it means we can use one single mKEM at each level to encapsulate the same key under all the public keys concerned. If you do the math, you find that compared to standard TreeKEM we have a gain. If you look at the formulas for standard TreeKEM and for m-ary TreeKEM — the m-ary tree combined with an mKEM — then instead of log_2(N) levels we have log_m(N) levels, and at each level we still have one public key and one ct0; but in m-ary TreeKEM we also have (m−1) ct_i-hat per level. This means you cannot make m too large, because otherwise the size of your update package becomes linear in m. But we now have some leverage, in the sense that we can play with the arity of the tree and try to gain efficiency compared to binary TreeKEM. And in practice we do have a gain. On this slide, we plot the size of an update package in kilobytes as a function of the number of users. If you consider, for example, 65,000 users, then with TreeKEM instantiated with SIKE, in the binary case, each update package is about 10 kilobytes, whereas with m-ary TreeKEM each update package is less than 4 kilobytes. Similarly, in the case of FrodoKEM, instead of 300 kilobytes we get an update package of less than 70 kilobytes, so more than four times smaller. All right, that concludes my talk. If you want to see our paper, it is at this link, and if you want the slides, they are at this link. Thank you for your attention.
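The size formulas above can be turned into a quick model: binary TreeKEM sends one public key and one ciphertext per level, while an m-ary tree with an mKEM sends one public key, one ct0, and (m−1) ct_i-hat per level. All byte sizes in the usage below are illustrative placeholders, not any scheme's real parameters.

```python
def levels(N, m):
    """Smallest L such that m**L >= N, computed exactly with integers."""
    L, cap = 0, 1
    while cap < N:
        L, cap = L + 1, cap * m
    return L

def treekem_size(N, pk, ct):
    """Binary TreeKEM update package: one pk and one ciphertext per level."""
    return levels(N, 2) * (pk + ct)

def m_treekem_size(N, m, pk, ct0, ct_hat):
    """m-ary TreeKEM with an mKEM: one pk, one ct0, (m-1) ct_i-hat per level."""
    return levels(N, m) * (pk + ct0 + (m - 1) * ct_hat)
```

For instance, with placeholder sizes pk = ct = 100 bytes, `treekem_size(65536, 100, 100)` gives 3200 bytes over 16 levels, and an arity-4 tree with a small shared ct0 and tiny per-user ct_i-hat shrinks both the level count and the per-level ciphertext material, which is the trade-off the slide's plot illustrates.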