Hi, my name is Valerio, and I will be talking about how to obtain CCA-secure KEMs from encryption with non-negligible decryption error. This is joint work with Sebastian Ramacher, Daniel Slamanig and Christoph Striecks. I will start by briefly recalling the main definitions and security models used in public-key cryptography, and discussing the role played by the decryption error in this context. I will then present the compiler that we have constructed and the main ideas behind it. Finally, I will show how it performs when evaluated against the NIST submissions, and briefly mention further results you can find in the paper.

One of the most important goals of cryptography is to achieve confidentiality of communication. For example, we would like to allow two parties, Alice and Bob, to communicate over an insecure channel in a way that prevents a malicious entity like Eve from gaining any meaningful information. Moreover, we would like to do so even when no prior secret information is shared between the two parties. This is one of the greatest achievements of public-key cryptography, and the concept is used in our daily life. For example, when we browse a website, the TLS protocol is used to share a secret key between us and the web server, which is then used to communicate securely.

The main construction that enables such functionality is the public-key encryption (PKE) primitive. Such a scheme is defined by three efficient algorithms: key generation, encryption and decryption. Since public-key primitives are less efficient than their symmetric counterparts, in practice public-key cryptography is only used at the beginning of a communication, to exchange a secret key between the involved parties via a so-called key encapsulation mechanism, or KEM. Public-key encryption techniques, and in particular KEMs, are highly relevant for the secure communication protocols of the future, especially when targeting post-quantum security.

Clearly, the first property one wants from a public-key encryption scheme is correctness. In its strongest variant, we would like that for every possible message M, the decryption of the encryption of M gives us M back. Of course, correctness alone is not sufficient: we want such schemes to be secure too. What do we mean by that? Security is usually expressed in terms of indistinguishability in various forms, depending on whether security is formulated against a passive or an active adversary. The two most common models are, respectively, CPA and CCA security. These models take the form of a game between a malicious adversary trying to break the primitive and a challenger.

To briefly recall these definitions, the CPA game, where CPA stands for chosen-plaintext attack, goes as follows. The public and secret keys are generated by the challenger, and the public key is given to the adversary. The adversary chooses two messages, M0 and M1, which are sent to the challenger. The challenger samples a random bit and encrypts the corresponding message. The resulting challenge ciphertext c* is sent to the adversary, who then has to guess which bit was sampled by the challenger, and wins the game if the guess was correct.

A stronger model is given by the CCA security game, where CCA stands for chosen-ciphertext attack. This game is quite similar to the previous one; the only difference is that this time the adversary is given access to a decryption oracle.
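To make these two games concrete, here is a minimal Python sketch of both (my illustration, not code from the talk or the paper; `keygen`, `encrypt` and `decrypt` stand for the algorithms of an arbitrary PKE, and the `adversary` object with its `choose_messages` and `guess` methods is a hypothetical interface):

```python
import secrets

def ind_cpa_game(keygen, encrypt, adversary):
    """One run of the IND-CPA game; returns True iff the adversary wins."""
    pk, sk = keygen()
    m0, m1 = adversary.choose_messages(pk)   # two messages of equal length
    b = secrets.randbits(1)                  # challenger's secret bit
    c_star = encrypt(pk, m1 if b else m0)    # challenge ciphertext c*
    return adversary.guess(c_star) == b

def ind_cca_game(keygen, encrypt, decrypt, adversary):
    """Same game, but the adversary also gets a decryption oracle that
    answers every query except the challenge ciphertext c*."""
    pk, sk = keygen()
    state = {"c_star": None}

    def dec_oracle(c):
        if c == state["c_star"]:
            raise ValueError("decrypting the challenge is forbidden")
        return decrypt(sk, c)

    m0, m1 = adversary.choose_messages(pk, dec_oracle)
    b = secrets.randbits(1)
    state["c_star"] = encrypt(pk, m1 if b else m0)
    return adversary.guess(state["c_star"], dec_oracle) == b
```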
The adversary can call the decryption oracle on any ciphertext except the challenge ciphertext c* to obtain the corresponding plaintext. We say that a PKE scheme is CPA- or CCA-secure if any efficient adversary wins the corresponding game with probability negligibly close to one half, where one half is the probability of winning the game by guessing at random.

A natural question is whether this last game represents a realistic scenario. In some real-world applications, it actually does. For example, as Bleichenbacher has shown, if the attacker has access to a server that accepts encrypted messages and returns an error depending on whether the decrypted message had correctly formed padding, then an attack based on this information can be mounted which even allows the adversary to recover the original message for any ciphertext of his choice. Since such server responses can be used to simulate a decryption oracle, CCA security is nowadays considered the security standard that PKE primitives should aim at.

Most of the problems on which post-quantum cryptography is built, for example lattices or coding theory, involve some kind of error correction. Therefore, most of the PKE schemes built from these problems do not satisfy the perfect-correctness definition that I stated previously. Moreover, since in such schemes the ability to correctly decrypt a given ciphertext depends on the secret key, decryption failures can also leak information about the secret key itself, as shown in various works. This problem can be naively addressed by increasing the parameters of the scheme, which reduces the decryption error at the cost of bigger public-key and ciphertext sizes. Another option is to reduce the error involved in the underlying mathematical problem; in this case, we end up with a public-key encryption scheme with lower security guarantees. Hence, we see that one faces different trade-offs between security, efficiency and decryption error.

A new definition of correctness is therefore necessary to capture how far such schemes are from perfect correctness. A first natural candidate is the probability, over all randomness involved in the different algorithms, that the decryption of the encryption of a random message does not equal the original message. Unfortunately, such a definition cannot be used in a scenario like that of CCA security, where the adversary can actively search for ciphertexts that have a higher-than-average probability of triggering a decryption error. One solution is to change the definition accordingly and take the maximum over all possible messages in the message space. In this way, we obtain an upper bound on the decryption error of any single message (a possible formalization of both notions is sketched at the end of this part).

Now that all necessary definitions have been recalled, I can describe the compiler that we have designed. The main ideas behind the construction can be summarized as follows. Instead of trying to tweak or tune the parameters of a CCA-secure PKE to achieve efficiency, security and a low decryption error all at the same time, one can do the following. We start from a PKE which is only CPA-secure and whose decryption error may be non-negligible. We then improve the error rate by means of a compiler which we constructed, and once the error is negligible, we apply a standard transformation to obtain CCA security. So we see that there are two main ingredients in this construction.
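As promised, here is one possible formalization of the two correctness notions just discussed (the notation is mine, in the spirit of the definitions of Hofheinz et al.; M denotes the message space):

```latex
% Average-case notion: decryption error for a random message,
% over the randomness of key generation and encryption
\delta_{\mathrm{avg}} = \Pr_{(pk,sk),\,m,\,r}\!\left[\,\mathsf{Dec}\big(sk,\mathsf{Enc}(pk,m;r)\big) \neq m\,\right]

% Worst-case notion: maximum over all messages, so that the
% adversarially chosen ciphertexts of the CCA setting are covered
\delta = \mathop{\mathbb{E}}_{(pk,sk)}\!\left[\,\max_{m \in \mathcal{M}}\;\Pr_{r}\!\left[\,\mathsf{Dec}\big(sk,\mathsf{Enc}(pk,m;r)\big) \neq m\,\right]\right]
```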
First, reducing the error to a negligible level, and then improving the security of the scheme.

Regarding the first part, we took inspiration from the work of Dwork et al., who designed a compiler where the error is reduced by simply encrypting the same message multiple times under the same public key. Since the encryption is probabilistic, we get different ciphertexts. During decryption, we decrypt all the different ciphertexts and apply a majority vote to decide which of the resulting plaintexts to return. Unfortunately, this compiler has two drawbacks. First of all, it does not preserve CCA security: even if the underlying encryption was CCA-secure, the scheme obtained by means of this transformation is not, as mix-and-match attacks apply. This issue is also discussed in the original work of Dwork et al., where a possible way to achieve CCA security is presented, but only in a very theoretical sense. A second drawback is that the decryption error does not scale optimally, as one has to rely on the majority vote to decide which message to return.

In this table, one can see the estimated decryption error of the direct product compiler of Dwork et al. for different decryption errors of the underlying PKE and different numbers of parallel repetitions L. As one can see, the compiler with L equal to 2 does not offer a better decryption error than the underlying PKE itself. This is because, as soon as one ciphertext does not decrypt correctly, we have no way of checking the correctness of the obtained plaintexts, and the only thing we can do is rely on the majority vote.

The second tool we need is a generic transformation to boost the security of the scheme from CPA to CCA. One such transformation is the so-called Fujisaki-Okamoto transform, or simply FO transform. This transformation, as shown by Hofheinz et al. at TCC 2017, can take as input a CPA-secure PKE with negligible correctness error and output a CCA-secure KEM. One nice characteristic of this transformation, also shown in the work of Hofheinz et al., is its modularity. Indeed, it can be seen as the composition of two more compact transformations: a first one, usually denoted by T, which derandomizes the PKE by means of a random oracle, and a second one, which converts the PKE into a KEM by encrypting a uniformly sampled message.

In our work, we modify the first of these two transformations to integrate the ideas from the Dwork et al. compiler on how to reduce the decryption error. More explicitly, given a CPA-secure PKE with a certain decryption error, we first find the number of ciphertexts L necessary to achieve a negligible correctness error. We then encrypt the message L times in parallel, derandomizing each encryption by using as randomness the output of a random oracle on input the message and the position of that ciphertext. Thanks to this derandomization, it is possible to check the validity of the obtained plaintexts by simply re-encrypting the different messages obtained during the decryption of the different ciphertexts. This allows us to no longer rely on the majority vote, and in this way we lower the decryption error from δ to δ^L. Here you can see, in a more compact form, how the encryption and decryption algorithms of this new PKE work.
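Since the slide itself is not reproduced in this transcript, the following Python sketch shows the idea (my illustration, under the assumption that the underlying PKE exposes an `enc(pk, m, r)` taking explicit randomness and a `dec(sk, c)` returning `None` on failure; the random oracle G is modeled here with SHA3):

```python
import hashlib

def G(m: bytes, i: int) -> bytes:
    """Random oracle deriving the randomness for the i-th encryption."""
    return hashlib.sha3_256(m + i.to_bytes(4, "big")).digest()

def compiled_encrypt(enc, pk, m: bytes, L: int) -> list:
    """Encrypt m L times in parallel, the i-th copy with randomness
    G(m, i); the whole vector is the new ciphertext."""
    return [enc(pk, m, G(m, i)) for i in range(L)]

def compiled_decrypt(enc, dec, pk, sk, cts: list):
    """Decrypt every copy, and accept a candidate plaintext only if
    re-encrypting it reproduces the entire ciphertext vector; no
    majority vote is needed."""
    L = len(cts)
    for c in cts:
        m = dec(sk, c)
        if m is not None and compiled_encrypt(enc, pk, m, L) == cts:
            return m
    return None  # every candidate failed the re-encryption check
```

Decryption now fails only if all L copies decrypt incorrectly, which is exactly where the improvement from δ to δ^L comes from; the majority vote, in contrast, already fails once roughly half of the copies do.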
Once we have that, and once we are able to show that this new PKE satisfies certain security requirements needed by the FO transform, we can apply the second transformation of the Fujisaki-Okamoto transform to obtain a CCA-secure KEM, as desired (a small sketch of this encapsulation step is given at the end of this part).

In this table, one can see a comparison of the runtime and bandwidth overheads of three different compilers: the direct product compiler of Dwork et al., the modified deterministic direct product compiler that we have denoted C*_{p,d}, where one substitutes the majority vote with a check by re-encryption, and the T* compiler that we have just constructed. As can be seen, our compiler outperforms the other transformations in all categories except the runtime overhead of the decryption algorithm, as we have to re-encrypt the obtained plaintexts during decryption. As already discussed, this strategy requires at least one fewer ciphertext to obtain the same decryption error, and thus reduces the overall ciphertext size of the obtained KEM.

After designing this new compiler, we tested its performance against the different candidates in the NIST post-quantum competition. NIST is the US National Institute of Standards and Technology. In the past, it has run important competitions on different cryptographic schemes, for example one on block ciphers, which ended with the adoption of AES as the new standard, and one concerning the development of new hash functions, which led to the design of SHA-3. It is now running a competition to standardize post-quantum signatures and PKEs/KEMs. We tried to measure how our compiler performs with respect to the submissions still in this competition at the time we were working on the paper. The question we tried to answer was the following: how does increasing from one to L ciphertexts compare to increasing the parameters to achieve a comparable decryption error, for existing round-two submissions in the NIST post-quantum competition? In other words, we looked for submissions where both a CPA and a CCA version of the same scheme were available, and compared the efficiency of the KEM obtained by applying our compiler to the CPA version against the CCA version of the same submission.

At the time we were working on this project, the NIST post-quantum competition was still in round two, and there were still 17 KEMs in the competition. Of those, seven were based on error-correcting codes of different types, and you can see them listed here. Of those seven, three had both a CPA and a CCA version; we applied our compiler and analyzed the results. Even though we had two different versions of the LEDAcrypt submission, we could not test our compiler on it, as LEDAcrypt is based on a deterministic PKE, and so our compiler cannot be applied.

As one can see from this table, for almost all versions of the ROLLO submission, our compiler outperformed the CCA version of the same submission. For example, we see that for the security levels L1 and L3, the sum of the sizes of ciphertext and public key of the compiled KEM is smaller than that of the submitted CCA version. This is not true at the security level L5, where, however, our compiler achieves a much lower decryption error. Regarding the other submission, the results are slightly different: for the BIKE submission, we obtain better runtime performance at the cost of a bigger public key plus ciphertext size.
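Coming back for a moment to the encapsulation step mentioned at the beginning of this part, here is a minimal sketch of how the second FO transformation turns the compiled, derandomized PKE into a KEM (again my illustration; the key-derivation hash plays the role of a random oracle, `sample_message` is a hypothetical uniform sampler over the message space, and `decrypt` is the compiled decryption, which already performs the re-encryption check):

```python
import hashlib

def encaps(encrypt, pk, sample_message):
    """Encapsulation: encrypt a uniformly random message and derive
    the shared key from the message and the ciphertext vector."""
    m = sample_message()                 # uniformly random message
    cts = encrypt(pk, m)                 # compiled, derandomized encryption
    key = hashlib.sha3_256(m + b"".join(cts)).digest()
    return cts, key

def decaps(decrypt, sk, cts):
    """Decapsulation: recover m and re-derive the same shared key;
    a decryption failure leads to rejection."""
    m = decrypt(sk, cts)
    if m is None:
        return None                      # reject on decryption failure
    return hashlib.sha3_256(m + b"".join(cts)).digest()
```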
This scenario is quite different if we look at the NIST submissions which rely on the hardness of lattice-related problems. In round two, there were nine submissions based on lattices. Of those nine, three had both a CPA and a CCA version. However, one of these improves the correctness error by simply narrowing the underlying error distribution, which, as I have already said, reduces both the decryption error and the security of the PKE. We therefore did not test our compiler on this submission. As far as the other two submissions are concerned, we can see from this table that all the relevant sizes get bigger, and the only advantage offered by our compiler in this setting is a smaller runtime for the different algorithms of the Round5 KEM.

In the paper, we also showed how the same ideas can be used to obtain the first post-quantum CCA-secure Bloom filter KEM (BFKEM). The BFKEM is a primitive recently proposed by Derler et al. at EUROCRYPT 2018, and it was shown to be a building block for constructing fully forward-secret zero round-trip-time (0-RTT) key exchange protocols. However, that work required perfect correctness of the underlying building block, hence preventing post-quantum instantiations. We extended that line of work and showed how one can generically construct BFKEMs from any IBE, even from ones with non-negligible correctness error. This leads to the first instantiations of post-quantum CCA-secure BFKEMs.

To wrap up, we can summarize our findings as follows. We designed a generic compiler to deal with the decryption error of weaker schemes, which are easier to design. All involved algorithms are easily parallelizable, which allows improving the runtime. Our approach gives very good results in the context of code-based schemes. Moreover, a similar approach can be used to construct the first post-quantum secure BFKEM. Regarding further research directions, we are thinking about extending our analysis to other constructions, to see on which kinds of primitives our compiler offers the best performance. Moreover, it would be interesting to understand why there is such a strong difference in performance depending on the underlying problem on which the primitive is based, and whether this is something peculiar to the schemes we analyzed or there is an underlying common reason for such a result. Thank you for your attention.