OK, good. So I wanted to introduce to you today CRYSTALS, a cryptographic suite for algebraic lattices. It's joint work with Shi Bai, Joppe Bos, Léo Ducas, Eike Kiltz, Vadim Lyubashevsky, John Schanck, Peter Schwabe, and Damien Stehlé. I will go through the motivation quickly; in a sense, the previous talk was motivation enough. Then I will talk about lattices, and especially module lattices, which this suite is actually based on. Then I will present the KEM, and finally some performance numbers in the Open Quantum Safe project. OK, so motivation. We just heard a 20-minute talk about what NIST is doing: a project to standardize something over the next seven years. This talk will be about a lattice candidate, and we plan on submitting something to this NIST call. So, lattice cryptography is actually already out there and already used. For example, in strongSwan, which is an open-source IPsec VPN solution: NTRU encryption has been in there since February 2014, the BLISS signature scheme since 2015, and the NewHope key exchange mechanism since last October. Also, we were speaking about impact assessment. Google just released a study about how NewHope would behave if we were doing a key exchange combining NewHope and elliptic curves. They let it run for a few months, collected a lot of data, and concluded that they didn't find any unexpected impediment to deploying something like NewHope. But then they said, OK, we'll leave the standardization bodies to do their work. So in this talk I will talk a little about key exchange, as we have it in TLS. What happens is that you have a client and a server, and the client says, hello, I want to speak to you. The server says hello back: here is my chain of certificates, please verify. I stripped away a lot of TLS here; of course, everything should be done correctly, and certificates should be checked.
And there are a lot of other issues. What I'm concerned with here is this element: the server does some computation, the server key exchange part; then the client does some computation, the client key exchange, and computes the key; and then the server also computes the key. At the end they have a shared key, and they can exchange all the application data under this shared key. So the first part is a public-key part that transmits a secret key, which will then be used for the application data. If we want to replace, for example, RSA or elliptic curves, the question is: what should we put here? What post-quantum primitive, post-quantum encryption for example, should we use in these boxes? The first step here is the setup of the KEM, the key encapsulation mechanism; then there is the encapsulation itself and the decapsulation. If we look at what exists based on lattices, and here I'm focusing on lattices, we have basically two main families. The first one is based on LWE, for example the candidate Frodo; you will hear a little more about these things in the next talk. For Frodo, the communication is 22 kilobytes, and it's based on a reconciliation mechanism that is a little more complicated to implement than basic encryption. Then you have NewHope, which has received a lot of publicity, especially since Google experimented with it. There's also the BCNS15 scheme, and recently the authors of NewHope did an encryption version of NewHope where they don't use reconciliation, but they still end up with similar communication. So why do people use a ring? Here, RLWE: the R is for "ring". The reason, as you can see, is that it allows you to decrease the communication a lot. In LWE you are working with plain matrices of integers, so one element in LWE will be a lot of elements of Z_q, whereas in RLWE you are working over polynomials.
So you're working with one polynomial, and you can write it as a matrix where the other columns are obtained by, for example, an anticyclic rotation. So you only need to give the first column, and the others can be recovered easily. That's one way to do the multiplication: you can expand to a matrix, but you can also just do multiplication over polynomials. So that's a really nice saving in size: in particular, instead of transmitting data with all these numbers, you're transmitting n times fewer numbers. Usually we work with this ring, the polynomials with coefficients modulo q, modulo x^n + 1. But there are some other possibilities, for example x^n − 1 or x^p − x − 1. The second one gives a different algebraic object, and you can get good performance, but implementing the multiplication is a little more complicated. OK, so what is CRYSTALS? Here we were speaking about LWE and ring-LWE; in CRYSTALS, we're actually considering module-LWE. A module will be something that is more or less in between. And we will focus on two main things for this cryptographic suite: simplicity and modularity. So we want to avoid reconciliation. We were speaking just before about avoiding implementation complexity, so that people are not likely to make a lot of mistakes when they implement. So we want to try to avoid Gaussian sampling; we want to try to avoid the NTRU assumption, for example. We want to provide CCA security from the start. And we want something where, if you want another level of security, it should be easy to get. It shouldn't be: OK, you need to re-implement everything because you had to change the modulus, so your whole implementation doesn't work anymore. We want something where, if you want to increase the security, you just increase one parameter and it will work right away. And we can get that from modules.
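The equivalence described above, between multiplying polynomials in Z_q[x]/(x^n + 1) and multiplying an "anticyclic" matrix by a coefficient vector, can be sketched as follows. This is a minimal illustration, not any CRYSTALS code; the parameters are toy values.

```python
# Sketch: multiplication in the ring Z_q[x]/(x^n + 1).
# The anticyclic matrix of a polynomial a has x^i * a (mod x^n + 1) as its
# i-th column, so only the first column needs to be stored or transmitted.

def negacyclic_matrix(a, q):
    """n x n matrix whose i-th column is x^i * a mod (x^n + 1)."""
    n = len(a)
    cols = []
    col = list(a)
    for _ in range(n):
        cols.append(list(col))
        # multiply by x: shift up; the wrapped coefficient picks up a minus
        # sign because x^n = -1 in this ring
        col = [(-col[-1]) % q] + col[:-1]
    # transpose the list of columns into rows
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def ring_mul(a, s, q):
    """Schoolbook product of a and s in Z_q[x]/(x^n + 1)."""
    n = len(a)
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * s[j]) % q
            else:  # wrap around with a sign flip, since x^n = -1
                res[k - n] = (res[k - n] - a[i] * s[j]) % q
    return res
```

Multiplying `negacyclic_matrix(a, q)` by the coefficient vector of `s` gives the same result as `ring_mul(a, s, q)`, which is the saving the talk describes: one polynomial stands in for a whole structured matrix.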
Also, the KEM can be used for encryption, in a KEM-DEM fashion, or for key exchange or authenticated key exchange. So here I will explain a little about module lattices, then I will present Kyber, which is the KEM — so thank you for the name, Isis. It's CCA-secure and it's encryption-based. And I don't have time to speak about the digital signature, which is called Dilithium, but the idea is that we want to do something like GLP12 — a paper published at CHES in 2012 — where we stick to noise distributions that are, for example, uniform. OK, so what about module lattices? When we're working with LWE, the lattices we're working on come from full matrices over Z_q. When we're working over ring lattices, we only have one polynomial, and the other columns of the matrix are rotations, anticyclic rotations — they can be deduced from the first column. In module lattices, an example would be to use a d × d matrix where each element is a small ring element — smaller than for the ring lattice — but they're all completely independent. So, for example, you could work with d-dimensional vectors and matrices of polynomials in Z_q[X] quotiented by X^256 + 1. This dimension is fixed at 256, and we choose 256 because we'll encrypt 256 bits at the end. The nice thing is that you can consider many of these, and the random part is only the black part here; the rest can be derived. So it allows you to reach any dimension 256 times d, whereas if you were focusing on ring lattices of the same form, you had to go from 256 to 512 to 1024 to 2048, and the gap between 1000 and 2000 is huge. It's huge in security too: at 1000 you get good security, at 2000 you get overkill security, really a lot, when you would maybe want to increase just a little more.
It also allows you to reduce the modulus: since we're working with smaller things, we can actually reduce the modulus, which means we can work with smaller numbers. And it's more flexible, because if I want to increase security, instead of considering a 3 × 3 matrix I can consider a 4 × 4 matrix, and in the implementation it will be easy to change. So what is the assumption? The assumption is Learning With Errors, but over these modules. We have a matrix A — a matrix of polynomials — and you multiply it by a vector of polynomials, you add a noise that is small, so small polynomials, and it gives you a value. That's Learning With Errors, and if we restrict to small secrets, we can actually take a square matrix here, and we get this form — and we'll need small secrets in order to do a key exchange. And I want to stress something: this is not a revolutionary technique. This scheme is Regev's scheme from 2005. The fact that we can take it with square matrices comes from an equivalence from 2009. The ring version was considered in 2010. The module version is just a generalization of the ring version; it was only written down in 2014, but it was already considered before. We are not reinventing the wheel here, and I think that's important if we want standardization to be successful. So, module Learning With Errors: the decisional version of the problem is, can you distinguish something completely uniform from something where the last component is actually obtained as a module-LWE sample? An interesting point I want to make here is that, when you think about it, it will not be less efficient than ring-LWE. First, if you want to derive the matrix A, you can actually derive it from one seed.
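A module-LWE sample, as described above, can be sketched like this: a d × d matrix A of ring elements, a vector s of small polynomials, and b = A·s + e. This is a toy illustration with assumed parameters, not Kyber itself; for simplicity, the small coefficients are drawn uniformly from [−eta, eta] here, standing in for the binomial distribution used in the real scheme.

```python
import random

def ring_mul(a, s, q):
    """Product of two polynomials in Z_q[x]/(x^n + 1)."""
    n = len(a)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * sj) % q
            else:  # x^n = -1
                res[k - n] = (res[k - n] - ai * sj) % q
    return res

def poly_add(a, b, q):
    return [(x + y) % q for x, y in zip(a, b)]

def mlwe_sample(d, n, q, eta, rng):
    """Return (A, s, b) with b = A*s + e over R_q^d, R_q = Z_q[x]/(x^n + 1)."""
    A = [[[rng.randrange(q) for _ in range(n)] for _ in range(d)]
         for _ in range(d)]
    small = lambda: [rng.randint(-eta, eta) % q for _ in range(n)]
    s = [small() for _ in range(d)]
    e = [small() for _ in range(d)]
    b = []
    for i in range(d):
        acc = [0] * n
        for j in range(d):
            acc = poly_add(acc, ring_mul(A[i][j], s[j], q), q)
        b.append(poly_add(acc, e[i], q))
    return A, s, b
```

Note how the flexibility claim shows up directly: moving from d = 3 to d = 4 changes only the `d` argument, not the ring arithmetic.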
OK, you have, for example, 3 × 3 polynomials to derive, so you just take one seed, feed it into an extendable-output function, and it gives you a lot of values and fills your matrix. For example, you run SHAKE-128 on the seed, and you recover the whole matrix. But the key point here is that we're computing this thing here for the public key: we compute this matrix product times the secret, plus a small error, and you get the value b. And when we transmit the key, we'll actually transmit the seed — so only 256 bits — and then this part here. But as you can see, this part will be exactly the same number of elements as one ring element. So compared to ring-LWE, on the communication side we're not making things heavier; it will not cost more to use modules in that respect. But it will be easier for modularity, and we actually have less structure. We do have a few more multiplications of polynomials, but they are multiplications with smaller polynomials, so in a certain way it balances out. And the resulting element is the same size as a ring-LWE element of dimension 256 times d. What that also means is that, at the end, we'll do the inner product of this with the secret, and it will give you one polynomial. This polynomial will be of dimension 256, so we're optimal for encrypting these 256 bits. So, in general, module-LWE is not more efficient than ring-LWE, but here we're encrypting 256 bits, so it will be. OK, so what about the implementation? I said it was easy. For example, here we store the vectors in this polyvec struct. How do we compute the NTT of this vector? We just do a loop up to KYBER_D — KYBER_D is the defined parameter d — and we do the NTT on the small elements. And it's an NTT of 256 elements.
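The seed expansion described above can be sketched with Python's `hashlib` SHAKE-128: one 32-byte seed is expanded into a d × d matrix of degree-255 polynomials with coefficients uniform mod q. This is an illustrative sketch, not Kyber's actual encoding; q = 7681 is used as an example 13-bit prime, and rejection sampling on 13-bit chunks keeps the coefficients uniform.

```python
import hashlib

def gen_matrix_from_seed(seed, d, n, q):
    """Expand a seed into a d x d matrix of degree-(n-1) polynomials with
    coefficients uniform mod q, via SHAKE-128 and rejection sampling."""
    # Generous output length: more than enough 16-bit draws for these
    # parameters, even after rejections.
    stream = hashlib.shake_128(seed).digest(4 * d * d * n)
    coeffs, pos, need = [], 0, d * d * n
    while len(coeffs) < need:
        val = stream[pos] | (stream[pos + 1] << 8)
        pos += 2
        val &= 0x1FFF            # keep 13 bits
        if val < q:              # reject values >= q to stay uniform mod q
            coeffs.append(val)
    return [[coeffs[(i * d + j) * n:(i * d + j + 1) * n] for j in range(d)]
            for i in range(d)]
```

Because the expansion is deterministic, both parties recover the same A from the 256-bit seed, which is why only the seed (plus the b vector) needs to be transmitted.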
It will also be easy to increase security, with very little implementation work: if I don't modify any parameter but d, and I increase d by one, you will see that I gain a lot of security. I lose a little in that I get a slightly larger decryption error, so if I want to keep the same decryption error rate, I will have to adjust the noise a little. But otherwise, if I just increase d by one, I gain a lot of security. That's really nice for reaching different security targets, as NIST has asked for: you can say, OK, I recommend the version with 3 × 3, but you can also consider 4 × 4 and you'll get much more security. So what is the key encapsulation mechanism? It's more or less what's on this slide. The server does the setup: it creates a public key and a secret key, and sends the seed of this public key along with this element here — the same size as a ring element — as the public key. The user does the encapsulation: they draw a new secret and a new noise, do this multiplication, draw a key completely at random, encrypt the key into a ciphertext, and send the ciphertext back to the server. The server can use its secret key to decrypt and recover a noisy version of the key, and then round things correctly: here this value is actually q divided by 2 times the key, so if you compute the rounding of 2 divided by q times the value, you will get the same thing as here, because here it's just with a little more noise, and when you divide by q and round, the noise disappears. And this is exactly Regev's scheme that we know from 2005. And this will be easier to implement than a reconciliation mechanism.
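The rounding step just described — encode a key bit as roughly q/2, then recover it by rounding 2v/q — can be sketched in a few lines. This is the generic Regev-style decoding, with q = 7681 used below only as an example; any noise of magnitude below q/4 is absorbed.

```python
def encode_bit(bit, q):
    # place the message bit in the high-order part: 0 -> 0, 1 -> about q/2
    return (q // 2) * bit

def decode_bit(v, q):
    # round(2v/q) mod 2: noise of magnitude below q/4 is rounded away
    return round(2 * v / q) % 2
```

This is why no reconciliation mechanism is needed: the receiver's noisy value decodes to the same bit as long as the accumulated noise stays under the q/4 threshold.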
So if we look at the encryption scheme, what is it? I draw a seed, and I use SHAKE to generate my matrix of polynomials. I draw some vectors of small polynomials — the coefficients of the small polynomials are drawn according to a centered binomial distribution, so you're just adding a few bits: you draw some bits at random, you add them up, and that gives you the coefficients. Then you compute b as A times s plus e, and the public key is the seed and b; the secret key is s. So then, how do you encrypt? You encrypt, using the public key, a message that is 256 bits long, and you use some random coins. I'm specifying the coins here because we're using a CCA transformation, and at the end the server will recompute the ciphertext and verify that it has not been modified. So you take the public key — the seed and b — and you reconstruct A. Then you generate s′ and e′ as new vectors of polynomials with small coefficients, using these coins, and you compute the transpose of s′ times A plus e′ — that's one element, corresponding to the first elements here. Then you compute the last element: using b, you compute the inner product of b and s′, you add a new noise, and you actually encrypt by adding the message here, in the coefficients. You put all that in the ciphertext, and to decrypt, you just compute v minus the inner product of u and s. This removes s′-transpose times A times s, and you recover this thing here plus a noise. Then you have this simple decryption procedure, and you will be able to recover the message.
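The "just adding a few bits" sampler mentioned above is the centered binomial distribution: sum eta random bits, subtract the sum of eta other random bits. A minimal sketch (eta is a generic parameter here, not a claim about Kyber's exact value):

```python
import random

def centered_binomial(eta, rng):
    """Centered binomial sample: sum of eta bits minus sum of eta bits.
    Takes values in [-eta, eta], centered at 0, with variance eta/2.
    This is the easy-to-implement replacement for Gaussian sampling."""
    a = sum(rng.getrandbits(1) for _ in range(eta))
    b = sum(rng.getrandbits(1) for _ in range(eta))
    return a - b

def small_poly(n, eta, rng):
    """A 'small' polynomial: n coefficients from the centered binomial."""
    return [centered_binomial(eta, rng) for _ in range(n)]
```

The appeal is that this sampler needs only unbiased random bits and additions, with no floating point and no precomputed tables, which is much harder to get wrong (or to leak through timing) than a discrete Gaussian sampler.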
So if you look at the KEM, what it does is: the generation by the server just generates a public key and a secret key, and sends the public key over. For the encapsulation, you draw some randomness and you apply SHA-256 first — because you don't want to reveal the randomness of your computer — and then you apply SHA-512, and the first part will be the key and the second part will be the coins. Then you encrypt the value x with these coins, which means that during decryption you'll be able to recover x, and to recover the key later, you reapply SHA-512. Then you send your ciphertext — so you're sending u and v; sorry, this c here shouldn't be on the slide. On the other side, you recover x′, you recover k′ and coins′, and then you re-encrypt and verify that the re-encryption is the same as the ciphertext you received. So here we're working in dimension 256, and we're considering 3 × 3 matrices. The polynomials have small coefficients, drawn according to this binomial distribution — it's really easy to sample from this distribution — and the modulus is smaller than in NewHope, for example: it's only 13 bits long. If you think about the implementation aspects: the NTT is in dimension 256, and the really nice thing is that if we want to increase security, we just change d and keep the same NTT. For the primitives, we're using SHAKE, SHA3-256, and SHA3-512 — we're trying to be consistent and use only one big family here. There's this binomial error distribution; it's essentially the same as in NewHope, actually smaller, actually simpler. We do a lot of compression in order to transmit as little as possible, and what I want to stress is that the code is actually similar, and the performance will be really similar, to NewHope and NewHope-Simple.
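The hash-and-re-encrypt flow just described can be sketched as follows. The `toy_enc`/`toy_dec` pair below is only a deterministic placeholder so the mechanics are runnable — it is not an encryption scheme, let alone Kyber's — but the surrounding structure (SHA-256 on the raw randomness, SHA-512 split into key and coins, re-encryption check on decapsulation) follows the description in the talk.

```python
import hashlib, os

def toy_enc(pk, msg32, coins32):
    # Placeholder deterministic "encryption" (NOT secure, NOT Kyber):
    # a hash-derived pad keyed by pk and coins, with the coins attached.
    pad = hashlib.shake_128(pk + coins32).digest(32)
    return coins32 + bytes(m ^ p for m, p in zip(msg32, pad))

def toy_dec(pk, ct):
    coins32, body = ct[:32], ct[32:]
    pad = hashlib.shake_128(pk + coins32).digest(32)
    return bytes(c ^ p for c, p in zip(body, pad))

def encaps(pk):
    # Hash the system randomness first so it is never exposed directly.
    x = hashlib.sha256(os.urandom(32)).digest()
    h = hashlib.sha512(x).digest()
    key, coins = h[:32], h[32:64]      # first half: key, second half: coins
    ct = toy_enc(pk, x, coins)         # encrypt x deterministically with coins
    return ct, key

def decaps(pk, ct):
    x_prime = toy_dec(pk, ct)
    h = hashlib.sha512(x_prime).digest()
    key_prime, coins_prime = h[:32], h[32:64]
    # Re-encrypt and compare: any modified ciphertext is rejected.
    if toy_enc(pk, x_prime, coins_prime) != ct:
        return None
    return key_prime
```

In the real KEM, decapsulation of course uses the secret key; the placeholder here only exists to make the re-encryption check demonstrable.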
So this scheme can be used like NewHope, but it's actually much more general: you can use it for encryption, as a KEM-DEM, or you can use it in an authenticated key exchange. So, can you see the code soon? The reason it's not out yet is that we were groups that were not working together at first, and then we came together. We still have a couple of things to agree on with respect to the quantum random oracle model and the CCA transformation, but we expect really similar performance — I will talk about the performance very soon. We already have this GitHub organization; it doesn't lead to anything yet, but pq-crystals is created, and as soon as the implementation is available, we'll put it there. Last, if I have some time left — yes — I wanted to mention the Open Quantum Safe project, because when I wanted to assess the performance, it was really nice to have this project. The Open Quantum Safe project, whose leaders are Michele Mosca and Douglas Stebila, provides an open quantum-safe library that is a wrapper, an API, over existing implementations of some primitives, and there's also an integration of this library into OpenSSL 1.0.2. In particular, if I build everything right now and I run openssl speed, I get the different post-quantum key exchanges that are already in the library. Here it's isogeny-based, here code-based, and here lattice-based, and you can see that each offers different trade-offs between communication and timings. So what about our scheme? I just put our implementation directly into this project. When I compare, we have really similar timings to NewHope right now. The only difference is that in the decapsulation there's a little more work, because we're re-encrypting in order to verify, so this is a little more costly.
But the very nice thing is the communication requirement compared to NewHope: it's actually two times smaller, and the reason is that, at the end, we're transmitting something that really encrypts only 256 bits, and we're using a lot of compression. So here, just a comment about security. There is this discussion about security, as René told us in the previous talk. This does not mean that there is an attack on the schemes in exactly this number of operations. These are security estimates where you apply the best known classical and quantum algorithms; those are really complicated, so you take one of the smallest sub-problems inside them that is really hard, and you say, OK, let's assess the security of that — and the security here is what the best algorithms we know today can reach on it. But this is actually really pessimistic: if we really wanted the security in number of cycles, it would be much, much more than that. One nice thing about the Open Quantum Safe project is that you can do pull requests, and I really hope a lot of you will contribute. It's a really nice thing for comparisons: you can just write a wrapper around your implementation and it will work right away. So as soon as we put the code out, we'll do a pull request on this Open Quantum Safe project. To conclude: we're using module lattices because they are modular, and we really focus on ease of implementation and simplicity. Kyber is a key encapsulation mechanism that is nearly as fast as NewHope and halves the communication, and we also want CCA security by default. That means it can be used in more cases than NewHope: in authenticated key exchange, in a KEM-DEM, or also in a long-term-key setting, where there can be some key reuse. Also we have the signature, which will come a little later.
So the idea there is that we want to try to avoid complexity: let's take the same modules, let's take the same elements, and let's try to build something similar, with uniform noise and things like that. And just as a conclusion, in order not to take a slot in the lightning-talk session: we have some internships with the people in our group, so please feel free to contact us. Thank you.

Great, very nice. I guess we have time for one or two questions and then we have a break.

Can you go to slide three or four, I think, on the key exchanges, please.

This one?

The one before, please.

Oh, okay.

So here you mention the paper by Jintai Ding, and what does that paper have to do with all these key exchanges? Can you explain that to us?

So that paper is actually a key exchange scheme where they speak a little about reconciliation. I wanted to explain that this reconciliation is described a little there; it's also described a little in the NewHope paper. So it's actually a big...

But here I disagree. I think these constructions are using techniques invented by me, Jintai Ding, in 2011. I don't think they invented them, and Chris Peikert's paper is much later than mine. It's 2014.

It's later, yes.

Yes, so in my opinion, all those key exchanges here are variants of what I did.

So I just want to show you something else. This scheme here is the encryption scheme that is based on learning with errors. I'm referring to a lot of papers, not only the first one, and the reason is...

I'm not talking about...

No, but you're saying... It's a good comment. I'm sure you're going to take it into account. Thanks for the comment.

Okay, thank you. So, just a small thing: the server is now generating the seed that generates A.

Yes.

And conceivably, they could be compelled to use a particular A or something.
So is there a reason that, when you generate the matrix A from this seed, it's hard to find one that can be written as a product of two smaller things? There was some attack along those lines.

So we're feeding that into SHAKE, right? And SHAKE is expanding the seed. And actually, if you believe in the security of SHAKE, it will be really hard to make it come out as such a product.

It has nothing to do with SHAKE. The question is just about the properties of these matrices A — it's a much more basic question: for one of these matrices to be writable as a product of two smaller things is unlikely, yes?

Yes.

Okay. I have a question about the computation of your security estimates, on the very last slide, or second-to-last slide. The number we're looking at there — is that the estimated cost in terms of an SVP problem of some dimension?

Yes.

How do you compare ring-LWE to LWE in particular? How do you take into account the fact that you're using a cyclotomic ring instead of a more general ring?

Okay. So what we're doing here: in order to attack this ring-LWE, you often come back to the full matrix, and you look at reducing it, looking for small vectors in this matrix. So you're actually expanding this ring-LWE into a bigger matrix, and module-LWE also expands into a bigger matrix. You look at this and try to reduce it, applying BKZ with block size b. In order for the attack to succeed, we need at least this block size, and the SVP in this block size — the smallest operation that you'll need to do — is what gives this cost. But you need to do many more SVP calls than that, over several rounds. And this is the smallest dimension possible, but these dimensions are already huge.
And the reason is that in these schemes, most of the time, the lattices are already quite reduced, and if you want to reduce them more, you actually have to work a lot. So this is just looking at the smallest block size for the best known algorithms in each of the cases and estimating from that.

So you mean in the module version?

No, I mean that's how you got those numbers there for the security estimates: it's SVP over the matrix.

Hi, I just wanted to state the obvious, which is that we are interested in these schemes for things that need to be secure in 20 or 30 years. So the security estimates should consider security in 20 or 30 years, which of course we have no idea about, but these ones are probably too non-conservative, right?

I actually think they're conservative, in the sense... The ones we will claim in the full version will be conservative. The reason is that we consider the best known attacks, and then we say: this is now the best plausible attack, and this cost is actually a really small element of a much larger attack that you would need to perform. So it means you actually gain nearly 40 bits from doing all the SVP calls that you need in order to reduce the lattice. But I agree that we'll need conservative estimates here.