was useless. Thank you, Leo. So NTRU is a cryptosystem published in 1998. It uses rings, essentially cyclotomic rings, and it was quickly attacked using lattice reduction. After those attacks were finally understood, NTRU was reused with a much larger modulus for building more exotic crypto, but in the previous years subfield attacks were published which massively decreased the security of this overstretched NTRU. In the paper we give another subfield attack, but we also discuss the actual behavior of straightforward lattice reduction, which is just as fast, so that is the one I will cover here.

First, the toy cryptosystem. It's NTRU without the number theory, so it's "RU". We sample small random invertible matrices F and G of dimension n; they form the secret key. The public key is H = F^{-1} G mod q, for some small integer q. To encrypt, you sample small vectors s and e and send H s + e, so this is essentially an LWE cryptosystem. To decrypt, you multiply on the left by F: by the definition of H this equals G s + F e, and since G and s are small matrices and vectors, and the same for F and e, the result is small. In this way we get a Gaussian channel over which we can communicate reliably; there are a few details here that I will hide because they don't matter for the rest of the talk.

Now I will present the Coppersmith-Shamir attack. We have that F H, by our definition, equals G plus q times an integer matrix K. So we can consider this lattice: it is built so that it contains a sublattice of high rank but low volume. Why is this a sublattice? Just multiply by this vector: the bottom row is obvious, and the upper row is given by the equation above, transposed.
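The toy scheme just described can be sketched in a few lines. This is an illustrative reconstruction with made-up toy parameters (n = 8, q = 257, ternary secrets), not the parameters of the talk:

```python
# Illustrative sketch of the "RU" toy scheme; parameters are made up.
import numpy as np
from sympy import Matrix

n, q = 8, 257
rng = np.random.default_rng(1)

def small(shape):
    return rng.integers(-1, 2, size=shape)  # entries in {-1, 0, 1}

# Secret key: small F invertible mod q, and small G.
while True:
    F = small((n, n))
    try:
        F_inv = np.array(Matrix(F.tolist()).inv_mod(q).tolist(), dtype=int)
        break
    except ValueError:        # F not invertible mod q: resample
        pass
G = small((n, n))

H = (F_inv @ G) % q           # public key: H = F^{-1} G mod q

s, e = small(n), small(n)     # encrypt: c = H s + e mod q
c = (H @ s + e) % q

v = (F @ c) % q               # decrypt: F c = G s + F e (mod q)
v = ((v + q // 2) % q) - q // 2   # center into (-q/2, q/2]
assert np.array_equal(v, G @ s + F @ e)   # small, recovered exactly
```

The decryption works because G s + F e has entries far below q/2, so the centered reduction mod q recovers it exactly, as in the talk.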
We also have that any short vector of this lattice allows us to distinguish a message from uniform. So we can run lattice reduction on this basis, such as DBKZ, and the question is, of course: when does it work? We can make a simple analysis of the algorithm: the vector found with complexity 2^β has norm bounded by this, where √q is the determinant to the power 1 over the dimension and the second factor is the approximation factor. If we want a polynomial-time algorithm we must have a small β, which forces an almost-exponential modulus. But whether this lower bound is tight is unclear, so to answer the question we follow what lattice reduction actually does. In all the experiments you can make, lattice reduction either finds the hidden small-volume sublattice, or it behaves as if the lattice were random; therefore, if lattice reduction fails, you can reliably predict all the Gram-Schmidt norms of the output basis.

How do we distinguish the two cases, then? Well, the property that after lattice reduction the basis is as random as possible, but not random, gives us for example this theorem: the shortest non-zero vector of the lattice must be at least as large as the minimum of the Gram-Schmidt norms. The output basis must obey this theorem, so if it does not, we conclude that lattice reduction did not behave as on a random lattice, and therefore we can find the secret. But is the converse true? The answer is no. So here is the main lattice property that we use for our algorithm, due to Pataki and Tural: if you have a matrix A and an integer matrix U with r columns, then the volume of the product A U must be at least the product of the r smallest Gram-Schmidt norms of A. Taking r = 1 gives the previous theorem. The proof is as follows. We can assume A is triangular, thanks to Gram-Schmidt orthogonalization.
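For concreteness, here is a small numeric reconstruction of the attack lattice (my own sketch, row convention, with made-up toy parameters n = 6, q = 127): the row basis is B = [[qI, 0], [H, I]], and since F H ≡ G (mod q), the integer coefficient rows (K_i | F_i) yield the short vectors (G_i | F_i):

```python
# Toy reconstruction of the attack lattice; parameters are made up.
import numpy as np
from sympy import Matrix

n, q = 6, 127
rng = np.random.default_rng(2)

def small(shape):
    return rng.integers(-1, 2, size=shape)   # entries in {-1, 0, 1}

while True:                                  # small F invertible mod q
    F = small((n, n))
    try:
        F_inv = np.array(Matrix(F.tolist()).inv_mod(q).tolist(), dtype=int)
        break
    except ValueError:
        pass
G = small((n, n))
H = (F_inv @ G) % q                          # public key

# Row basis  B = [ q*I  0 ]
#                [  H   I ]
Z = np.zeros((n, n), dtype=int)
I = np.eye(n, dtype=int)
B = np.block([[q * I, Z], [H, I]])

# G = F H (mod q), so integer coefficients (K_i | F_i) produce
# the short row vectors (G_i | F_i): a rank-n sublattice.
K = (G - F @ H) // q                         # exact division
X = np.hstack([K, F])
S = X @ B
assert np.array_equal(S, np.hstack([G, F]))

# Gram-Schmidt profile of the raw basis: q, ..., q, 1, ..., 1
R = np.linalg.qr(B.T.astype(float), mode='r')
gs = np.abs(np.diag(R))
assert np.allclose(gs[:n], q) and np.allclose(gs[n:], 1.0)

# High rank (n) but low volume, far below the q^(n/2) scale
# one would expect of a generic rank-n section.
vol = np.sqrt(np.linalg.det((S @ S.T).astype(float)))
assert 0 < vol < q ** (n / 2)
```

The final assertion is exactly the "high rank but low volume" point from the talk: the dense sublattice has volume bounded by the product of its short row norms.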
We can also take U in Hermite normal form, because we can multiply it on the right by any invertible integer matrix without changing the statement. So U has this shape, with unit pivots and some rows of stars. Now consider the Gram-Schmidt norms of A U. The first one is at least the Gram-Schmidt norm of A at the index v_0 of the first pivot. For the second vector, the orthogonalization changes the entries corresponding to the top of U but not the bottom, so its Gram-Schmidt norm is at least the Gram-Schmidt norm of A at index v_1, and the same for the rest. Finally, the volume of the lattice A U is the product of these Gram-Schmidt norms, which proves the bound.

Now, in our case, we can show that the volume of the rank-n sublattice is at most n^n: this is simply the Hadamard bound. And due to LLL reduction, Gram-Schmidt norms decrease by at most a constant factor from one index to the next. So in case of failure we expect the Gram-Schmidt profile to be constant equal to q, followed by a geometric decrease, and then ones; the last n values, which are the smallest, consist of a geometric decrease starting at √q followed by some ones. It is very simple to compute their product, and this is the formula. The remark is that log q appears squared in it, so the product exceeds n^n, and failure becomes impossible, once q is around 2^{√n}. This is much smaller than what we had before. It is then very easy to determine the minimum block size for breaking a given NTRU instance. We make the same derivation in the paper for LWE and the ring variants, using the DBKZ algorithm, which is the fastest today. It appears that NTRU is weaker than LWE for q larger than n^{2.78}; what we have for large q is that it is weaker when the noise is up to about √q divided by n, which is not optimal.
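The Pataki-Tural bound at the heart of the argument is easy to sanity-check numerically. Here is a small sketch (column convention, random made-up instance): for a basis A and an integer U of full column rank r, the volume of A U is at least the product of the r smallest Gram-Schmidt norms of A.

```python
# Numeric check of the Pataki-Tural bound on a random instance.
import numpy as np

rng = np.random.default_rng(3)
n, r = 8, 3

# random nonsingular integer basis A (columns are basis vectors)
while True:
    A = rng.integers(-5, 6, size=(n, n)).astype(float)
    if abs(np.linalg.det(A)) > 0.5:
        break

# integer U with an identity block on top, so it surely has rank r
U = np.vstack([np.eye(r), rng.integers(-3, 4, size=(n - r, r))]).astype(float)

R = np.linalg.qr(A, mode='r')
gs = np.sort(np.abs(np.diag(R)))       # Gram-Schmidt norms, ascending

V = A @ U
vol = np.sqrt(np.linalg.det(V.T @ V))  # volume of the rank-r sublattice
assert vol >= np.prod(gs[:r]) - 1e-6   # the Pataki-Tural bound

# the r = 1 case is the familiar theorem: any nonzero lattice
# vector is at least as long as the smallest Gram-Schmidt norm
assert np.linalg.norm(A @ U[:, 0]) >= gs[0] - 1e-9
```

The identity block in U is just a convenient way to guarantee full column rank; any integer U of rank r would do.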
So can we prove this behavior, since the previous claim was heuristic? It turns out that, to a large extent, yes. If the first vector has norm smaller than √q, this is a success: we can efficiently decode, or efficiently distinguish. So assume otherwise. Now, among the n smallest Gram-Schmidt norms, there are K which are less than √q. We let M = log_α(√q), which is just the length of the geometric decrease, and we consider the first L = min(K, M) of these. But Gram-Schmidt norms cannot decrease too quickly, so these are at least √q times α^{-i}, and their product is at least q^{L/2} times α^{-L²/2}. The other ones, which we did not consider, are larger than √q, and the total product is at least this value, because all Gram-Schmidt norms are at least one, a property due to LLL. But then we have a contradiction with our property applied to the product of the n smallest norms and the small-volume sublattice. However, the condition that the Gram-Schmidt norms are at least one does not hold for the DBKZ algorithm, so we do not prove that we recover the full sublattice: we can prove that we recover a constant fraction of the n sublattice vectors, but it seems we need a much larger q to prove that we recover all of them. So this is a bit incomplete.

So, can we prove that we recover the full sublattice? This would be quite interesting for theoretical purposes; it seems to be quite a nice problem, and apparently it was not studied before. An alternative way to get an equivalent theorem is to prove some kind of Johnson-Lindenstrauss lemma for lattices: if you want to find the smallest A x mod q for some very tall matrix A, then you can just consider some of the rows, and hopefully this still works. If we had this kind of theorem, then we could easily show that our algorithm works. But we don't know how to prove it.
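As I understand it, the inequality chain of this proof sketch can be written out as follows. This is my reconstruction (α is the per-index decrease factor guaranteed by the reduction), so the exact constants may differ from the paper:

```latex
Assume failure: $\|b_1\| > \sqrt{Q}$. Among the $n$ smallest
Gram--Schmidt norms, $K$ are below $\sqrt{Q}$; set
$M = \log_\alpha \sqrt{Q}$ and $L = \min(K, M)$.
Since norms decrease by at most a factor $\alpha$ per index, the $L$
largest of the sub-$\sqrt{Q}$ norms satisfy
$\|b_i^*\| \ge \sqrt{Q}\,\alpha^{-i}$, hence (the remaining norms
being $\ge \sqrt{Q}$ or $\ge 1$)
\[
  \prod_{i \,\in\, n\ \text{smallest}} \|b_i^*\|
  \;\ge\; Q^{L/2}\,\alpha^{-L^2/2}.
\]
Combining the Pataki--Tural bound with the Hadamard bound on the
dense rank-$n$ sublattice gives
\[
  n^{n} \;\ge\; \operatorname{vol}(\Lambda_{\mathrm{dense}})
        \;\ge\; Q^{L/2}\,\alpha^{-L^2/2},
\]
which fails for $Q$ large enough, a contradiction: the reduction
must in fact have found a vector of norm below $\sqrt{Q}$.
```

This is where the "all Gram-Schmidt norms at least one" assumption enters, and why the argument breaks down for DBKZ as noted above.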
Another possibility: up to now, all lattice reduction algorithms were designed to minimize the norm of the shortest vector found. Recovering a dense sublattice is a new problem, and there was one paper that is somewhat on this subject. There is hope that we can do better than just applying a well-known shortest-vector algorithm, and solve this other problem efficiently. Another open problem is: what is the set of possible Gram-Schmidt norms of all other bases, given the Gram-Schmidt norms of one basis? The whole point of the algorithm is that this new constraint forces LLL to work, so it would be nice to show that this constraint describes the whole set. In practice these constraints seem to be tight, as far as our experiments on NTRU go, but maybe that is not the case in general.

In conclusion: lattice reduction is not some kind of dark magic; it is an experimental science. Cryptosystems that can be quickly broken with LLL were proposed: there was one example of a homomorphic encryption scheme where the proposed computation took around one day, and it also took us around one day to compute the secret key. This speaks volumes about the lack of experiments made in this area, I think. Also, there is no reason to prefer NTRU to ring-LWE or LWE for public-key encryption or key exchange, so don't use NTRU: ring-LWE and LWE have more security and no extra structure, so you should prefer them, and everything carries over nicely; all techniques work in both cases. As for more exotic crypto such as IBE, signatures and so on, they are not yet threatened, because our limit exponent was around 2.78 and we would need to go much lower, like 1.5 I think, to threaten anything. But in general, adding structure is bad with respect to the longevity of cryptosystems, so if you absolutely want to do it, you should compare with alternatives based on ring-LWE, I think. Also, you should cryptanalyze NTRU more.
And as far as cryptanalysis is concerned, you can almost forget about ring-LWE, in particular because we have a reduction between the two. Also, do not use the root Hermite factor: I did not use it, and if you use it, you will arrive at a very low root Hermite factor, way below what you can expect even with LLL. If you want to express the security of some lattice cryptosystem, you can just give the block size that you need in order to break the scheme. So, thanks for your attention; this is joint work with Pierre-Alain Fouque. Thank you.

Do we have any questions for the speaker? Okay, maybe I have one. Is there any impact of this attack on the LWE or ring-LWE problems, as far as you can tell?

No, not at all. The reason is that if you use the equivalent lattice for ring-LWE, then either you consider a CVP or BDD problem, but then you cannot use this technique; or you can embed one vector, but then you just have a sublattice of rank one with low volume, and here you need rank n for it to work nicely. Finally, you can always embed the whole ring, but then you have a matrix of dimension 3n whose last 2n Gram-Schmidt norms are equal to one, and requiring the product of the n smallest to be larger than one is much more difficult than requiring just one of them to be larger than one. So there is no impact whatsoever on any ring-LWE or LWE cryptosystem.

Okay, so let's thank the speaker. Oh, sorry, Martin, we do have a lot of time. Go ahead.

Okay, a question: what dimension of LLL? You said some schemes are quickly broken by LLL; can you expand on that a little bit? What kind of LLL instances did you look at to break the particular instances? Go back to your last slide. My last slide? The very last one.
Yeah, you say cryptosystems which can be quickly broken with LLL were proposed. All I'm asking is: how big were these LLL instances?

So the biggest one, I think, we needed to run in dimension 1000, with around 1000 bits in the entries; the others were a bit smaller. It takes quite some time, but it's one day of computation, so quite reasonable.

More questions? Nicolas.

Yes, I have one small question. You said do not use the root Hermite factor, but overall you still use the root Hermite factor, just not in the full lattice but in sublattices?

No. What you can do is simulate the behavior of your lattice reduction algorithm, so we can compute exactly what the Gram-Schmidt norms will be. And this is not a slope, a simple line: what you get is actually a line followed by a parabola if you use the DBKZ algorithm. So if you use the root Hermite factor, you are actually assuming that it will be a line, and it is not the case, in particular when you have a large block size. So if you want to attack NTRU or ring-LWE, it is not at all a good approximation.