Thank you. So let me start with some context. As you know, quantum computers will be able to solve hard problems such as factorization and the discrete logarithm. This means that standard public-key cryptosystems will be broken. As a result, NIST, the standards agency in the US, launched a process to standardize one or several post-quantum encryption schemes, and signature schemes as well. Roughly 60 proposals were received by the first deadline in winter 2017, and most of them fall into three categories: lattice-based, code-based, and multivariate-based. Today we will focus mainly on the lattice-based ones.

In turn, each of these proposals comes in two flavors, an IND-CPA one and an IND-CCA one, where the IND-CCA scheme is usually the Fujisaki-Okamoto transform of the IND-CPA one. Now, the thing is that the IND-CPA variant does not allow for key reuse: for each new message, we must pick a new secret key. But it is also simpler and more efficient, so we think people will try to use it, and they won't necessarily pay attention to technical details like not reusing the key. In addition, if the Fujisaki-Okamoto transform is badly implemented in the IND-CCA scheme, it can leak some information about the underlying IND-CPA scheme. So the question is: what happens if a key is reused? After how many reuses can we recover the secret key, or at least a large part of it?

We noticed that most lattice-based schemes share a similar structure, so we designed an abstraction of this structure that we call the meta-PKC framework. What is this framework? We work with six additive abelian groups — S_A, S_B, S_sk, S_T, S_U, S_V — and four bilinear mappings between some of these groups, which we denote with this cross here; they can be polynomial multiplication or matrix multiplication, for instance. Now, the public key is a pair of two values, A and B, with B being A times the secret key plus D, where the secret key and D are picked randomly but small — we will see what that means.
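To make this structure concrete, here is a toy sketch — my own illustration with hypothetical parameters (q = 3329, n = 8, far below real scheme sizes) and a made-up choice of bilinear map, not taken from any actual submission — of meta-PKC-style key generation where the map is polynomial multiplication:

```python
import random

# Hypothetical toy parameters, far below real scheme sizes.
q, n = 3329, 8

def polymul(f, g):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1): one possible
    instance of the bilinear map denoted by the cross in the framework."""
    out = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                out[i + j] = (out[i + j] + fi * gj) % q
            else:  # reduction by x^n + 1 flips the sign
                out[i + j - n] = (out[i + j - n] - fi * gj) % q
    return out

def small():
    """A 'small' element: every coefficient in {-1, 0, 1}."""
    return [random.randint(-1, 1) for _ in range(n)]

def keygen():
    """Meta-PKC-style key generation: pk = (A, B) with B = A x sk + D."""
    a = [random.randrange(q) for _ in range(n)]
    sk, d = small(), small()
    b = [(c + e) % q for c, e in zip(polymul(a, sk), d)]
    return (a, b), sk
```

The same shape covers matrix instances as well: only `polymul` and the sampling of "small" elements change from scheme to scheme.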
The encryption of the plaintext follows this complicated expression here. It is not so important for us today, but note that the plaintext is encoded before encryption. In the decryption process, we receive the two ciphertext values U and V. First we compute V minus U times the secret key, which is actually equal to the encoding of the plaintext plus some noise delta, and then we apply the decoding function to this value. So it follows that decryption is correct if and only if the noise delta is small. Small means here that the norm of delta is smaller than or equal to some threshold rho, and the norm itself can be a parameter.

Now let's consider the model for our attacks. This is a real-life setting where a client wants to communicate securely with a server. If the client is honest, he requests the public key from the server and generates some key material k. He encrypts the key material, which results in two ciphertext values U and V, and sends U and V to the server, which decrypts them; after some steps, they may derive a symmetric key and establish a secure channel. Now, if the client is malicious, instead of sending U and V, he can send U and V plus some value X to the server, and after some steps they will try to communicate. If the communication attempt is not successful, it means that the decryption on the server side did not give the right result — it did not give the key material k. That is what we call a plaintext-checking attack. More formally, at decryption the server computes the decoding of delta plus X plus the encoding of the key material. This means that if the communication attempt is successful, the norm of delta plus X is smaller than or equal to rho; otherwise, the norm of delta plus X is bigger than rho.
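The information the malicious client extracts from one communication attempt can be sketched as follows — a minimal simulation of my own, assuming a single coefficient in Z_q, the L-infinity norm on centered representatives, and the toy parameters q = 3329, rho = q/8 (hypothetical, matching the example used later in the talk):

```python
# Hypothetical toy parameters: one coefficient delta in Z_q, rho = q/8.
q = 3329
rho = q // 8

def center(v):
    """Representative of v mod q in the centered range around 0."""
    v %= q
    return v - q if v >= (q + 1) // 2 else v

def pca_oracle(delta, x):
    """What the adversary learns from one communication attempt:
    True  <=> the server's decryption of (U, V + X) still succeeds,
    i.e. |delta + x| <= rho (L-infinity norm, single coefficient)."""
    return abs(center(delta + x)) <= rho
```

Each failed or successful handshake is thus one bit of information about delta, which is exactly what the learning game in the next part exploits.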
So for the adversary, this is the same as having access to an oracle which returns whether the norm of delta plus X is smaller than or equal to rho or not. From there, we can define a learning game where the goal of the adversary is to find this noise delta, given access to the oracle on inputs X. Now, we designed several learning algorithms for different norms — the Hamming weight, the L-infinity norm, the L1 norm, for example — where we assume we work in Z_q^n or some isomorphic group; we can work, for example, with polynomials from a polynomial ring.

Here is a small example where we consider delta in Z_q and rho equals q over 8. Delta is in Z_q, but we take its representative between minus q over 2 and q over 2. After some computation, we find that if we query the oracle on minus X minus rho and obtain 1, this is actually equivalent to delta being greater than or equal to X. So by varying this value X, we can run, say, a binary search or a cut-and-choose algorithm to find the noise delta in at most log rho oracle queries. We designed several algorithms based on the same idea, and for all the norms and all the schemes we considered, we can recover delta in at most O(n log q) queries.

Now, given the noise delta, how can we recover the secret key? Actually, the noise depends on the secret key by this expression here, where the only unknowns for the adversary are the value of the secret key, of course, and the value D. But we can eliminate D by using the second equation here, B equals A times the secret key plus D. This means that we obtain a system of linear equations in the secret key. So we can learn delta, and if delta is in Z_q^{n_v}, we get n_v equations in the secret key. We can repeat this process k times, say, until we have enough equations to solve for all the components of the secret key.
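The binary-search example above can be sketched directly in code — my own toy instantiation, with the same hypothetical parameters q = 3329 and rho = q/8, a simulated server standing in for the real oracle, and the identity "oracle(-X - rho) = 1 iff delta >= X" from the talk:

```python
# Hypothetical toy parameters matching the talk's example: rho = q/8.
q = 3329
rho = q // 8

def center(v):
    """Representative of v mod q between -q/2 and q/2."""
    v %= q
    return v - q if v >= (q + 1) // 2 else v

def make_oracle(delta):
    """Simulates the server: True iff |delta + x| <= rho, i.e. the
    manipulated ciphertext (U, V + X) still decrypts correctly."""
    return lambda x: abs(center(delta + x)) <= rho

def learn_delta(oracle):
    """Binary search for delta using oracle(-x - rho) == (delta >= x).
    Since the honest ciphertext decrypts correctly, |delta| <= rho."""
    lo, hi = -rho, rho
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if oracle(-mid - rho):  # True  <=>  delta >= mid
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Each loop iteration costs one oracle query, i.e. one communication attempt with the server, so the total is logarithmic as claimed.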
So then we can solve this system here, where we assume we work in some algebra in which we can solve this type of equation — but that is always the case for the cryptosystems we considered. This means that learning the noise delta is sufficient to recover the key in this KR-PCA (key recovery under plaintext-checking attack) model.

Now let's consider quantum key recovery. We wanted to see how well the schemes resist the power of quantum computation, and more particularly quantum chosen-ciphertext attacks. Obviously, post-quantum cryptosystems should resist the power of quantum computers. Usually we consider the model where the adversary and the parties are quantum, but the communication between them is classical. But what about a fully quantum setting? In this setting, everything is quantum, including the communication, which means that we can, for example, send a superposition over the channel. In this case, in a quantum chosen-ciphertext attack, instead of querying one ciphertext and getting the corresponding plaintext, we can query a superposition of ciphertexts together with a second register for the output, and we get back the superposition of ciphertexts with the corresponding decryptions in the second register. That is pretty much the model we consider.

As you know, a classical learning-with-errors (LWE) sample looks like this: a pair of values A and A times some secret plus noise, and the goal of the problem is to find the secret given several of these samples. Now we can consider a quantum LWE superposition, where we get a superposition of LWE samples, for example one for every possible value of A. It turns out that there is an efficient algorithm designed by Grilo, Kerenidis, and Zijlstra that can recover the secret with good probability given such an input. But the problem is that in this chosen-ciphertext attack model, we get this other type of superposition here.
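To make the contrast concrete — the notation here is my own reconstruction from the talk, not copied from the slides — a classical LWE sample versus the quantum LWE superposition taken as input by the Grilo-Kerenidis-Zijlstra algorithm can be written as:

```latex
% Classical LWE sample: a uniform, e a small noise term
(a,\; b) \quad \text{with} \quad b = a \cdot s + e \bmod q

% Quantum LWE superposition (one sample for every possible value of a):
\frac{1}{\sqrt{q^{\,n}}} \sum_{a \in \mathbb{Z}_q^{n}}
  |a\rangle \, |a \cdot s + e_a \bmod q\rangle
```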
So the challenge is to convert this quantum state into the quantum LWE superposition given in equation 1. That is pretty much what our attack does. Again, we assume we work with vectors with components in Z_q; if we work with polynomials, we can take the trivial representation of the polynomial in Z_q^n. I will go quickly through the attack — you can find all the details in the paper.

First, we prepare the ciphertext superposition as on the previous slide, and then we call the decryption oracle, so in the third register we get the corresponding plaintext. Then, in a fourth register, we compute V minus the encoding of the plaintext and take some subset of it — usually only one component. If we do the computation, it is actually equal to the same subset of U times the secret key plus some noise psi, so it looks like a noisy LWE sample. Then we call the decryption oracle again, so we can clear the third register; this step is important to improve the probability of measurement at the end of the algorithm. We end up with a last quantum state which is exactly the kind of quantum LWE superposition we wanted. Then we can apply the GKZ step, which means applying a quantum Fourier transform on the first and fourth registers, and then we measure, as in all quantum algorithms. It turns out that after measurement, we obtain the coefficients of n_v equations in the secret key with probability roughly 1 over q, for all the proposals we considered — except for NewHope, where the probability is 1 over q squared.

In another independent and concurrent work, Alagic et al. presented another quantum key-recovery CCA attack based on the Bernstein-Vazirani algorithm, but they require stronger assumptions. First, the quantum oracle must compute an addition in the second register — it adds the decryption to the second register instead of just XORing it. Second, the decryption of one element must be of the following form.
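Schematically — again my own reconstruction of the registers, with encode, Dec, and psi as placeholder names rather than the slides' exact notation — the sequence of states in the attack looks like:

```latex
% 1) Ciphertext superposition, then one decryption call:
\sum_{u,v} |u\rangle \, |v\rangle \, |\mathrm{Dec}_{sk}(u, v)\rangle

% 2) Compute v - \mathrm{encode}(\mathrm{pt}) into a fourth register; a
%    subset of it equals the same subset of u \cdot sk plus a noise \psi:
\sum_{u,v} |u\rangle \, |v\rangle \, |\mathrm{Dec}_{sk}(u, v)\rangle
           \, |u \cdot sk + \psi\rangle

% 3) A second decryption call clears the third register, leaving a
%    quantum LWE superposition ready for the GKZ Fourier-transform step:
\sum_{u,v} |u\rangle \, |v\rangle \, |0\rangle \, |u \cdot sk + \psi\rangle
```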
It is the inner product of a subset of the secret key with a subset of the ciphertext. Then, in the decoding phase, we cut Z_q into c intervals, for some parameter c, and we output the corresponding interval. For example, if the inner product is between q over c and 2q over c minus 1, we output 1. So these are the strong assumptions. We adapted this attack to the NIST round-one submissions fitting into our meta-PKC framework, as we did for the other attacks. I am just giving the results here: for most proposals, the probability of recovering the secret key with one quantum oracle query is at least 0.4. And we designed a variant of this algorithm for NewHope, where we get an even better probability of 0.6 of recovering the secret key. So we see that the assumptions are stronger, but the results are better.

These are the schemes we considered; Frodo and NewHope are the two that passed to the second round of the NIST process. U is the number of unknowns — typically, if the secret key is a vector, it is the number of components in the vector. O is the number of oracle calls we need to obtain E equations with probability P, and T is the expected number of queries needed to recover the secret key.

Here are the results for the classical attack. First, we see that there is no result for Lizard and NewHope. The reason is that they use a compression function at encryption, so the components are modified before decryption, and that mitigates our attack because we lose the fine-grained control we had over this value X. But other attacks are still possible, as shown by Bauer et al. and Qin et al. in some recent papers, where they can recover, I think, the secret key with good probability with a few thousand queries. So we see that with our attack, we can recover the secret key with a few thousand queries, and the efficiency of the attack depends mainly on the value U, the number of unknowns, which makes sense.
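The interval-decoding assumption described above can be sketched in a few lines — a toy version of my own, with hypothetical parameters q = 4096 and c = 4 chosen so that q divides evenly into c intervals, not taken from any particular submission:

```python
# Hypothetical toy parameters: c equal intervals cutting Z_q.
q, c = 4096, 4

def interval_decode(v):
    """Cut Z_q into c equal intervals and output the interval index:
    values in [i*q/c, (i+1)*q/c) decode to i. For example, the whole
    range [q/c, 2q/c - 1] decodes to 1, as in the talk's example."""
    return (v % q) * c // q
```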
Now here are the results for our first quantum attack, the GKZ-based one. There is no result for KINDI because it apparently does not fit into our meta-PKC framework, and none for Lepton because it uses an error-correcting code. Intuitively, for a random pair of ciphertext values U and V, the probability of an error at decoding is extremely high, so in a lot of the states of the superposition we will have only decoding errors and no information about the secret key — we actually lose the point of quantum superposition. Now, the probability of success is proportional to 1 over q, with q often quite large, and that explains why the expected total number of queries is quite big, actually. The results in this column are not much better than in the classical attack, but the interpretation is a bit different: here, with two oracle calls, we have a non-negligible probability of recovering the secret key, whereas in the classical attack we always need to do thousands of queries to recover the key.

And as we expected, the Bernstein-Vazirani-based attack works extremely well for most of the schemes; only for Lizard and LOTUS do we need more than one or two expected queries. Now, as a final remark, I want to say that the total cost — the last column T — is not necessarily representative of the resilience of a scheme. For example, if we take the GKZ-based attack against Frodo, the secret key is a matrix, and we can recover a column of it with an expected number of queries of only 2^14. With this column, we can already decrypt part of the ciphertext and maybe recover the entire secret key by other means.

So let me conclude. We have seen that learning the noise delta is sufficient to recover the key in this key-recovery-under-plaintext-checking-attack model, and only a few thousand queries are needed to do so.
We also applied two quantum key-recovery CCA attacks, and we have seen that with one or two quantum oracle calls, we can recover the secret key with non-negligible probability. Finally, some design choices can mitigate the attacks, but at the expense of efficiency: for example, we can increase the number of unknowns, or increase the value q, or maybe use an error-correcting code as in Lepton, which mitigates the quantum attacks — at least the quantum attacks we considered. That's all for me. I have time for questions.

Q: These few thousand queries — that's like the n log q value you mentioned? You need n log q queries. So what is n in that?

A: What is what?

Q: What is n? Because you said you need n log q oracle queries. I mean, these thousands of queries come from n log q, right?

A: Yes. So usually it is n_v log q queries: if delta is in Z_q^{n_v}, we do n_v log q queries.

Q: So you said one mitigation would be to increase q. Couldn't you also increase n, maybe? Or is n something that...

A: n is n_v, actually, on this slide.

Q: Okay, then n_v. Well, anyway — when you do these oracle queries, is there a heuristic to choose them, the values of X, or are they just random? Or does it depend on the norm? Because you said it works for any norm.

A: So actually there are different algorithms for different norms. But of course, if we know, for example, the distribution of delta, we can optimize which values of X we take, and the expected number of queries will be smaller.

Q: Thanks. Any more questions?

Q: You said that to recover the secret key, you need to solve these equations in some particular algebras. But you hinted that for some algebras this may not be solvable. Is this the case?

A: Yeah, I think so. I have no example, but for polynomials and matrix multiplication, it works.
So I don't have any example, actually. No more questions? Then let's thank all the speakers of this session again.