So I'm not going through that. Okay, so let me show you one picture of this talk. What we are going to discuss is trade-offs between the time and the approximation factor we can get for finding short vectors in ideal lattices. I'm going to define everything later; let me just present the big picture. If you don't care about ideal lattices, if you want an algorithm for any lattice, you have the BKZ algorithm on the left, which gives you some trade-offs. If you restrict yourself to ideal lattices, you have the CDW algorithm, which improves upon the BKZ algorithm in the quantum setting. What I want to discuss in this talk is an extension of the CDW algorithm, where we obtain all the trade-offs in the quantum setting, and some improvement also in the classical setting. I should mention that the picture on the right is only for prime-power cyclotomic fields; we can extend it to other fields, but the picture is slightly different.

Okay, so let me start by defining what I'm going to talk about. A lattice is a kind of discrete vector space: instead of real-linear combinations we take integer-linear combinations, and it's represented by a basis. So here is a lattice in dimension 2, and the points of the lattice are the integer-linear combinations of the vectors of the basis.
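To make the integer-linear-combination picture concrete, here is a minimal sketch (mine, not from the talk): a brute-force search for a shortest non-zero vector in a tiny lattice, also illustrating that two different bases can generate the same lattice. The bases B1 and B2 are arbitrary toy choices.

```python
import itertools
import numpy as np

def shortest_nonzero(basis, coeff_bound=3):
    """Brute-force lambda_1 for a tiny lattice: try every integer-linear
    combination with coefficients in [-coeff_bound, coeff_bound]."""
    best = None
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                    repeat=len(basis)):
        if not any(coeffs):
            continue                       # skip the zero vector
        v = np.array(coeffs) @ np.array(basis)
        nrm = np.linalg.norm(v)
        if best is None or nrm < best:
            best = nrm
    return best

B1 = [[1, 0], [0, 1]]
B2 = [[2, 1], [1, 1]]   # unimodular transform of B1: same lattice Z^2
print(shortest_nonzero(B1), shortest_nonzero(B2))   # both print 1.0
```

This only works in tiny dimensions, of course; the whole point of the talk is that in large dimensions we must settle for approximation factors.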
So you have an example of a lattice with two different bases. Okay, SVP stands for shortest vector problem, and it's the problem, given a basis of a lattice, of finding a shortest non-zero vector of the lattice. I'm going to write lambda_1 for the Euclidean norm of a shortest vector. I'm going to be interested in the approximation variant of this problem, where I just want to find a vector whose norm is at most, let's say, two times lambda_1, or more generally some approximation factor times the length of the shortest vector. I'm also going to discuss CVP. CVP stands for closest vector problem: given a target in the real space spanned by the lattice, we want to find the point of the lattice which is the closest possible to the target. Again, we have an approximation variant of this problem, where we just want to find a lattice point within some distance of the target.

Okay, so why do we care about SVP and CVP in lattices? It's because for small approximation factors they are conjectured to be hard even in a quantum setting, and so they can be used to build cryptographic primitives. So let me recall what I've shown on the first slide. If we want an algorithm that works for an arbitrary lattice, then the best asymptotic algorithm is the BKZ algorithm, and what you can have is a 2^n approximation factor in polynomial time, or a polynomial approximation factor in 2^n time, and all the trade-offs in between. Now, the problem is that the schemes built on SVP and CVP are usually not very efficient, in particular because you have to store a matrix and things like that, and so in order to improve efficiency we can use structured lattices. An example of such structured lattices are ideal lattices. I'm going to define more precisely what they are in a few slides, but for the moment let's just think of ideal lattices as structured lattices. So now the question is: if we want to do
cryptography on ideal lattices, then is the problem of finding short vectors still hard when we restrict ourselves to ideal lattices? This question was partially answered by Cramer, Ducas and Wesolowski at Eurocrypt 2017. They showed that in a quantum setting we can do better than the BKZ algorithm: we can find a 2^(sqrt n) approximation of the smallest non-zero vector of the lattice in polynomial quantum time. This is something we don't know how to do for general lattices. The algorithm is heuristic, and it works for prime-power cyclotomic fields. What I'm going to present in this talk is an extension of the CDW algorithm where we achieve all the trade-offs in the quantum setting, so there is not just a drop at the 2^(sqrt n) approximation factor, and we also have some improvement in the classical setting. Again, the algorithm is heuristic. The picture here is for prime-power cyclotomic fields, but it works for any number field, where the picture is slightly different. The main difference is that we require a preprocessing which is exponentially large: we first pre-compute something that depends only on the ring, and then there is a query phase for finding short vectors in ideals of this ring.

Okay, so let me already tell you about the impact, what you should remember of the result. It's just a theoretical result, meaning that it seems it might be easier to find short vectors in ideal lattices than in arbitrary lattices: even with this preprocessing we don't know how to do better than the BKZ algorithm for arbitrary lattices, but for ideal lattices we have some improvement. However, it is absolutely not a practical algorithm.
There are two reasons why it cannot be used to attack any concrete scheme. The first one is the exponential preprocessing, which, even if you have to do it only once, is still exponential. The second reason is that schemes are usually shown to be harder to break than the Ring-LWE problem, which is itself harder to solve than finding short vectors in ideal lattices, but we don't have the reverse direction. So it could be that finding short vectors in ideal lattices is easy while the Ring-LWE problem is still hard to solve. So even if we had a classical polynomial-time algorithm for ideal SVP, this would impact very few cryptographic schemes.

Okay, so let me give you some details about the algorithm. Because I'm speaking of ideal lattices, I'm first going to define what they are, and then I will go into the details. For simplicity, in this talk I'm going to restrict myself to a power-of-two cyclotomic ring.
So the ring is Z[x] mod x^n + 1. I need two algebraic definitions before going further. Units in the ring are elements that can be multiplied by another element of the ring so that the product is one. If you think of Z, maybe the simplest ring, you have two units: 1 and -1. I'm also going to present the algorithm only for principal ideals, so in this talk, when I say ideal, I mean principal ideal: the set of all multiples of some element of the ring. Again, if you look at the set of all even numbers, that's an ideal, generated by 2. We know that the generators of an ideal are exactly the multiples of one of the generators by all the units; so the generators of the set of even numbers are 2 and -2, and that's all. That's all we are going to need as algebraic definitions.

So I've defined what an ideal is; now, what does it mean to find short vectors in an ideal? It means we are going to see the ideal as a lattice. For those of you who know, this is usually done with the Minkowski embedding, but for simplicity here I'm going to present it with the coefficient embedding. We can see the ring R as the set of polynomials with integer coefficients of degree at most n - 1, and we can see such a polynomial as a vector of dimension n with integer coefficients. This gives us a mapping, an isomorphism, between the ring and Z^n, and Z^n is a lattice, a simple lattice. So here's an example in dimension two.
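As an aside (not part of the talk), the coefficient embedding and a lattice basis of a principal ideal can be sketched in a few lines. The ring size n = 4 and the generator g = 2 + x below are arbitrary choices for illustration.

```python
def ring_mul(a, b, n):
    """Multiply two elements of Z[x]/(x^n + 1), each given as the length-n
    list of its polynomial coefficients (negacyclic convolution)."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] += ai * bj
            else:                  # x^n = -1, so wrap around with a sign flip
                c[i + j - n] -= ai * bj
    return c

n = 4                              # the ring Z[x]/(x^4 + 1)
g = [2, 1, 0, 0]                   # generator g = 2 + x
# The rows g, x*g, x^2*g, x^3*g form a basis of the ideal (g), seen in Z^4:
basis = [ring_mul([int(i == k) for i in range(n)], g, n) for k in range(n)]
print(basis[1])                    # x*g = 2x + x^2, i.e. [0, 2, 1, 0]
```

Each multiple of g is an integer combination of these rows, so the ideal really is a sublattice of Z^n, which is the next point of the talk.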
So R is Z^2. Now, an ideal is a subset of the ring which is stable by addition and subtraction, so it's a sublattice of the lattice R; we can find a basis of the ideal and see it as a lattice. So now I can give you the precise definition of what I want to do. I'm given a basis of an ideal, seen as a lattice, and I have some parameter alpha between 0 and 1, and what I want is to find a non-zero element of my ideal whose Euclidean norm is at most 2^(n^alpha) times larger than the smallest element of the ideal. And again, we want to do better than the BKZ algorithm; we want to use the structure of the lattice to do better.

Okay, so let me first present the CDPR algorithm, by Cramer, Ducas, Peikert and Regev. I've told you at the beginning that the CDW algorithm achieves this 2^(sqrt n) approximation factor in quantum polynomial time; here, because I'm focusing only on principal ideals, this is really the CDPR algorithm, and CDW is just its extension to arbitrary ideals. I'm going to use a very useful tool for describing this algorithm and the next one, which is the Log function. Log, with a capital L, is a function that goes from my ring to R^n, and you can think of it as just taking the logarithm of each coordinate of my vector. I'm also going to write 1 for the all-ones vector, and H for the hyperplane orthogonal to 1. Now I'm going to give you some properties of this Log; you will have to trust me, some are more natural than others. Whenever I take an element of my ring and consider its Log, I'm going to decompose it as a vector collinear to 1 plus a vector in the space H. Now, if I look at the Logs of the elements of my ring, they all lie in a half-space, here the top-right part of the line.
This is the first property. The second property concerns the elements whose Log lies on H: we know them, those are exactly the units. All the units map to H, and all the elements mapping to H are units. And it's not just a random set, it's a lattice, called the log-unit lattice: when I take the Logs of the units, I get a lattice inside H. Another property, which may be more natural, is that the Log is additive: the Log of a product is the sum of the Logs. And because I'm interested in short vectors in the real space and not in the log space, I need a link between the two, and again it's natural: the Euclidean norm of my vector in the real space is roughly 2 to the infinity norm of its Log.

So now I'm going to use this Log function to describe the CDPR algorithm. First observation: let's look at the Logs of the elements of my ideal. I have Log g, for g one of the generators, and because the Log is additive and all the points lie in a half-space, they are all above the line Log g + H. And the elements that lie exactly on the line Log g + H are exactly Log g plus the Log of a unit, that is, Log of g times a unit, which is the Log of a generator of the ideal.
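These Log properties are easy to check numerically. Here is a hedged sketch (mine, not from the talk) for the power-of-two cyclotomic ring, computing the Log via the canonical embedding: evaluate the polynomial at the primitive 2n-th roots of unity and take the log of the absolute values. The choices n = 4, a = 2 + x, b = 1 + x^2 are arbitrary.

```python
import numpy as np

def Log(a, n):
    """Log embedding of a in Z[x]/(x^n + 1): evaluate the coefficient
    vector a at the primitive 2n-th roots of unity, take log|.| of each."""
    roots = np.exp(1j * np.pi * np.arange(1, 2 * n, 2) / n)  # odd powers
    vals = np.polyval(list(reversed(a)), roots)
    return np.log(np.abs(vals))

def ring_mul(a, b, n):
    """Negacyclic product in Z[x]/(x^n + 1)."""
    c = np.convolve(a, b)
    out = np.zeros(n)
    for k, ck in enumerate(c):
        out[k % n] += ck if k < n else -ck   # x^n = -1
    return out

n = 4
a, b = [2, 1, 0, 0], [1, 0, 1, 0]
# Additivity: Log(a*b) = Log(a) + Log(b)
print(np.allclose(Log(ring_mul(a, b, n), n), Log(a, n) + Log(b, n)))
# x is a unit (x * (-x^3) = 1), and its Log lies in H (here it is exactly 0)
print(np.allclose(Log([0, 1, 0, 0], n), 0))
```

Both checks print True: evaluation at a root of x^n + 1 is a ring homomorphism, which is exactly why the Log is additive, and every root has absolute value 1, which is why the unit x maps to the zero vector of H.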
So on this line, Log g + H, I get all the Logs of the generators of the ideal. Okay, so I have the Logs of the points of the ideal, and now I would like to find the smallest one. The smallest I could hope for is this star, which is at the intersection between Log g + H and the 1-axis: it's the smallest point on the right of Log g + H, and this is the one I would like to reach. Maybe this point is not in the ideal, but I want to find the point of the ideal which is the closest to the star. A good candidate for this is a generator, because the generators are already on the right line; we just want to find the one which is the closest possible to the 1-axis.

So that's how the CDPR algorithm works. The first step is to compute a generator, and then we want to find the best generator. So let's first compute a generator; this is previous work, and it can be done in quantum polynomial time, or in 2^(sqrt n) classical time. We get a generator g_1, which will usually be large, and now we would like to find the generator which is the closest possible to the 1-axis. How do we do that? We project Log g_1 on H, then we solve a closest vector problem in the log-unit lattice Lambda, finding the closest unit, and we just remove it from Log g_1, and we get the best generator of the ideal. So now we have two questions here. The first one: I've told you that solving the closest vector problem is a hard problem, so how do we do that? And the second one: how good is the approximation factor we get? The answer to both questions was given in the CDPR article: what they did is exhibit a good basis of the log-unit lattice.
The basis they exhibit is somewhat orthogonal, and solving the CVP problem with an orthogonal basis is just rounding each coordinate, so this can be done in polynomial time. They also showed that the distance between two consecutive red points is roughly sqrt(n), so the distance between my best generator and the star will be at most sqrt(n), and because I am in the log space, this means that when I go back to the real space, I get an approximation factor of 2^(sqrt n). And so that's how we can obtain a 2^(sqrt n) approximation factor in polynomial quantum time.

So now let me explain what we did in order to try to obtain all the trade-offs. The first thing to observe is that, usually, when we take the Log of a generator at random, it will fall at distance roughly sqrt(n)/2 from any unit, so most of the time the best generator will only be a 2^(sqrt(n)/2) approximation of the smallest element of the ideal. So we cannot do better than a 2^(sqrt(n)/2) approximation factor by just looking at generators of the ideal. But if we look at all the points of the ideal, one of them should be close to the star; this is something we know, it's just that this point is usually not a generator. So what we want is to find this point which is close to the star. How do we do that? Again, we start the same way: we compute a generator and we project it on H, and here, in the bad situation, we are in the middle of two red points.
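The CDPR rounding step described above, solving CVP by rounding coordinates against a (near-)orthogonal basis, can be sketched as follows. This is the classic Babai rounding technique; the basis and target below are made up for illustration.

```python
import numpy as np

def babai_round(B, t):
    """CVP by coordinate rounding: write t in the basis B (given as rows),
    round each coordinate to the nearest integer, map back to the lattice.
    Exact when B is orthogonal; a good approximation when B is nearly so."""
    coords = np.linalg.solve(np.array(B, float).T, np.array(t, float))
    return np.rint(coords).astype(int) @ np.array(B)

B = [[3, 0], [0, 2]]               # toy orthogonal basis
t = [4.2, -0.9]                    # target point
print(babai_round(B, t))           # closest lattice point: [3 0]
```

With a bad (skewed) basis the same rounding can land far from the target, which is exactly why CDPR needed to exhibit a good basis of the log-unit lattice.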
So here the problem is that the log-unit lattice is not dense enough: there are points of the space which are at distance sqrt(n) from every point of the log-unit lattice. So what we do is increase the density of the log-unit lattice, by taking elements of small algebraic norm, which means they have a small coordinate on the 1-axis, and projecting them on H. When we get such a new point, we can also add the Log of any unit to it, so we get a shifted log-unit lattice, and we can do that again and again until we have created sufficiently many points that the density is such that, for any target, I know that I will have a point at distance at most a constant from my target. And so that's what we do. Now, I'm calling L this new lattice.

Well, I'm lying, it's not a lattice yet, because it's just a union of shifted lattices. So how do we make it a lattice? This is the lattice we're going to consider, written in columns. The first vectors are the units, then we have the projections of the Logs of all the elements of small algebraic norm, and to make it a lattice we add a block with ones on the diagonal, below the elements of small algebraic norm. And the target we choose, the point which we will target in this lattice, has zeros on the bottom part and the projection of the Log of the generator on the top part. The ones on the diagonal allow us to control the number of times we add each element of small algebraic norm, and we don't want to add them too many times, because each time we add such an element, we increase the component on the 1-axis; so if we add too many of them, we will end up far from the star.
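The block construction just described might look like the sketch below. All the numbers here are hypothetical placeholders: in the real algorithm the entries come from the log-unit lattice and from the chosen small-algebraic-norm elements.

```python
import numpy as np

# Toy sketch of the lattice L, in block form:
#     [ B_Lambda | P ]   B_Lambda: basis of the log-unit lattice (inside H)
# L = [----------+---]   P: projections on H of the Logs of the
#     [    0     | I ]      small-algebraic-norm elements
B_Lambda = np.array([[1.0, -1.0],
                     [0.0,  2.0]])     # hypothetical log-unit basis
P = np.array([[0.3, -0.7],
              [0.1,  0.4]])            # hypothetical projected Logs
k = P.shape[1]                         # number of small-norm elements
L = np.block([[B_Lambda, P],
              [np.zeros((k, B_Lambda.shape[1])), np.eye(k)]])

# Target: projection of Log g on the top part, zeros on the counting part.
pi_H_log_g = np.array([0.9, 0.2])      # hypothetical
t = np.concatenate([pi_H_log_g, np.zeros(k)])

# A lattice vector close to t decodes as: top block close to pi_H(Log g),
# bottom block = how many times each small-norm element was used.
v = L @ np.array([0, 0, 1, 1])         # use both small-norm elements once
print(v)                               # top: P @ [1, 1]; bottom: [1, 1]
```

Keeping the bottom coordinates of the CVP solution small is what bounds how many small-norm elements get multiplied in, which is the point made just above about not drifting along the 1-axis.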
So this solves the two problems: it is now a lattice, and we are able to control the algebraic norm of the solution. Here we need a heuristic assumption, which is that if we take enough elements of small algebraic norm, roughly n log n of them, so quasi-linearly many in n, then the covering radius of this lattice is constant, meaning that for any target point, I can find a point of this lattice at constant distance from my target. So this is essentially the algorithm I'm considering: I'm just solving CVP in this lattice for this target vector.

Now the main question: remember, in CDPR it was easy to solve the closest vector problem because they had an orthogonal basis, but for the new lattice L I've just shown you, we don't know of any good basis, so CVP is hard to solve. The important observation, though, is that the lattice only depends on the ring and not on the ideal, it's just units and elements of the ring of small algebraic norm, so we can do some pre-computation on it. So we're going to use an algorithm for the closest vector problem with preprocessing. In the article we are using the one from Laarhoven at SAC 2016; since then, new algorithms have been proposed, but asymptotically they are all the same. They enable us, modulo a 2^n pre-computation, to find an element at distance n^alpha from the target in time 2^(n^(1 - 2 alpha)). And so if I summarize, if I put everything together, I obtain in the log space an n^alpha approximation factor, so in the real space it's 2^(n^alpha), and the query time is 2^(n^(1 - 2 alpha)).
This is for solving the CVP problem; on top of it there is the time needed to compute a generator, so either polynomial time in the quantum setting or 2^(sqrt n) in the classical setting, and there is the preprocessing for the CVP algorithm. And so we obtain this figure, where the plateau in the classical setting is due to the computation of the generator, which takes 2^(sqrt n).

So let me just conclude with the extensions. What I've presented was the case of power-of-two cyclotomic fields and principal ideals, but we can generalize to any ideals, not only principal ones, and also to all number fields. The pictures then change slightly. On the left you have the case where the discriminant is quasi-linear in the degree; it's almost the same picture, except that in the classical setting the plateau is a bit higher, because the time required to compute a generator in an arbitrary number field is 2^(n^(2/3)). When the discriminant starts being more than quasi-linear in the degree of the field, the trade-off degrades, and at some point it's no better than the BKZ algorithm. And that's all for me, thank you.

Any questions?

Question: So you start from the original lattice and you build this new lattice where you will find the element, but in between you had this collection of lattices. Why did you need to construct the target lattice, instead of looking for the element in that collection of lattices?

Answer: Yes, so in fact the points I've drawn here do not form a lattice, because it's a union of lattices, and the way we make it a lattice is, yes...

Question: But even if it's not a lattice, the element that you're looking for is still there, it's in one of those lattices, no?

Answer: Okay, so I've been lying here: it's not exactly a union of lattices, because you can also add r_1 and r_2.
I mean, you can take linear combinations of all these points, just not too many of them, you have some limit. So it's not in this union that you will find your point; usually it will be among the linear combinations of the points that are drawn there, and if you wanted to consider everything as a union, it would become exponentially many lattices.

Question: And why... you said that for this algorithm to work you need the lattice to be an ideal lattice, so it doesn't work on generic lattices, that's what you said at the beginning?

Answer: Yes, so here we are explicitly using the algebraic structure; we are using the fact that it's an ideal, just to compute a generator of it, for instance.

Question: So to a normal lattice you cannot associate...?

Answer: No, I mean, an arbitrary lattice does not correspond to an ideal; that structure is necessary.

Question: Hello. I would like to know how sensitive your algorithm is to the heuristic. Maybe the property you are requiring is not true for all targets t, just for some t's; would the algorithm still work in this case?

Answer: So, what we expect is that the top part of t is somehow uniformly distributed, so it should work. I mean, if it worked for t's with zero coordinates on the bottom part, but uniformly for any top part, that would be sufficient. But this is a simplification for the case of principal ideals; in the case of any ideal, we also have coefficients on the bottom part. So we essentially need that any uniformly chosen target is close to a point of the lattice. And we can also rerandomize: if with probability one half we are at constant distance, that would be enough for us.

Any more questions? So, let's thank our speaker again.