Thank you for the introduction. I'm going to talk about short vectors in ideal lattices. Ideal lattices are a particular kind of Euclidean lattice. Computational problems in Euclidean lattices provide a strong foundation for post-quantum cryptography: essentially, you build cryptographic schemes whose security relies, more or less, on the problem of finding short vectors in such lattices. How short? Well, how short you want the vectors to be determines the difficulty of the problem. Any lattice has some shortest vectors, and these are the hardest to find. If you are interested in finding the shortest vectors of your lattice, you can do it thanks to the BKZ algorithm, but it will take you time exponential in the dimension of the lattice; that is the top point on this graph. But you can make the problem simpler by asking not for the shortest vectors of your lattice but for approximately shortest vectors, shortest up to some approximation factor, and when the approximation factor is large the problem becomes simpler. In particular, if you're interested in approximation factors that are exponential in the dimension, then you can solve the problem in time polynomial in the dimension thanks to the LLL algorithm; that is the point at the bottom of this graph. Now, I mentioned ideal lattices, which are a particular case of Euclidean lattices. The problem with generic Euclidean lattices is that when you try to design cryptographic schemes based on problems in them, you realize very quickly that you will end up with large memory and bandwidth requirements. So you can use a lighter alternative: lattices with more structure, which allow you to build faster protocols that are lighter in memory and bandwidth. The typical example of this is cyclotomic ideal lattices. So what are cyclotomic ideal lattices?
For the rest of the talk I'm going to fix an integer m together with a primitive m-th root of unity omega, so omega is a complex number which raised to the power m is 1. I define the field K to be Q adjoined omega, written Q(omega); it is the cyclotomic field of conductor m, a number field of degree phi(m), where phi is Euler's totient function. This field K contains a subring Z[omega], which I write O; it is the ring of integers of this field. So where are our lattices? Well, K is a number field of degree phi(m), so it embeds into the real vector space of dimension phi(m) through what we call Minkowski's embedding, and through this embedding of the field into a vector space the ring of integers becomes a lattice: O can be seen as a lattice in this real vector space of dimension phi(m). So here we have our lattice, but in fact we have many more lattices, because any ideal of this ring O is also a lattice in that vector space, and these are what we call ideal lattices. They come with all kinds of additional algebraic structure compared to generic lattices, so they allow us to build very interesting protocols, but of course you have to wonder: does the problem of finding short vectors become simpler? And yes, it does become simpler: it has been proven that you can find shorter vectors more efficiently, at least if you have a quantum computer. This is the result of a long series of works culminating in Cramer et al. 2017, where it was shown that in ideal lattices you can find, in quantum polynomial time, a sub-exponential approximation of the shortest vector. So you can cut the graph in the middle, and here it's quantum polynomial time. Okay, so the object of the paper I'm presenting today is understanding precisely when these quantum algorithms start outperforming LLL and BKZ, because these are all asymptotics; they don't tell much about what's happening with concrete parameters. So how do these algorithms work?
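To make the setup above concrete, here is a minimal sketch (the helper name is my own, not from the talk) of the embedding idea: an element of Z[omega], given by its coefficient vector, is mapped to one complex coordinate per field embedding, i.e. it is evaluated at each primitive m-th root of unity.

```python
import cmath
from math import gcd

def minkowski_embedding(coeffs, m):
    """coeffs[i] is the coefficient of omega^i; returns one complex
    coordinate per k coprime to m, i.e. the phi(m) field embeddings."""
    return [
        sum(c * cmath.exp(2j * cmath.pi * k * i / m) for i, c in enumerate(coeffs))
        for k in range(1, m)
        if gcd(k, m) == 1
    ]

# Example: the element 1 + omega for conductor m = 5.
emb = minkowski_embedding([1, 1], 5)
print(len(emb))  # phi(5) = 4 coordinates
```

Under this map the ring O (and any ideal of it) becomes a discrete subgroup of a phi(m)-dimensional space, which is exactly the lattice the talk works with.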
They come essentially in two parts. The first part deals with a particular family of ideals called principal ideals. A principal ideal is an ideal that has a generator, meaning that A is principal if there is an element g such that A is just g times the ring O; g is called a generator. Instead of looking at the problem of finding short elements of A, you can look at the problem of finding short generators of A. It sounds like you're making the problem more difficult, because now instead of finding a short arbitrary element you're asking for a short element that is also a generator, but it actually makes the problem somehow simpler by highlighting the relevant algebraic structure. This is the approach taken in this series of works, starting with Campbell et al. in 2014. So how do you do this? You look at the following problem: you're given a principal ideal A and you look for a short generator of A. How short? Well, you want it to have this length, where on the left here you have essentially the length of the shortest vector of the lattice, and what remains is of course the approximation factor, which in our case is a sub-exponential quantity in the dimension; you want to find something of that length. How are you going to do this? It's a two-step process. You're given a principal ideal A; you know it has a generator, but you're not given one, so the first step is to find one. Finding an arbitrary generator is already a non-trivial problem, but it can be solved in quantum polynomial time; this is a result of Biasse and Song in 2016. But the generator this algorithm finds is usually extremely large, so the second step consists in finding a shorter one. Why would you find an arbitrary one in the first place? Well, if you have a generator you're in a better position, because now you have a search space.
You're given an arbitrary generator g of A, so you know that the set of all generators is g times O*, where O* is the group of units of your ring. Okay, so you have a search space: you're looking for a short element in this set. The way you're going to try to find a short element in there is by transforming this problem into a lattice problem, through what we call the logarithmic embedding, and this is what was done in Cramer et al. 2016. Here is the definition of the logarithmic embedding; I'm not going to go through it, but what you have to understand is simply that it behaves as you would expect from a logarithmic mapping: it transforms multiplicative structure into additive structure. It takes the multiplicative structure of K* and transforms it into the additive structure of this vector space. In particular, it transforms the group of units O* into a lattice, which we call the logarithmic unit lattice, written Log(O*). So if we apply this logarithmic map to our problem, what do we get? Remember, we're looking for a short generator of gO, and we know that the set of all generators is g times O*. If we apply the logarithmic embedding, we are looking for a short element in a translated lattice, because the logarithm of g times O* is the logarithm of g plus the log-unit lattice. So now you're looking for a short element in a translated lattice, and that can be done by solving an instance of the closest vector problem with respect to the lattice Log(O*). So we reduced our problem to a closest vector problem. It's not clear how that helps, because the closest vector problem is also a difficult problem, unless you know a lot about your lattice. And here we do know a lot about this lattice.
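The key property of the logarithmic embedding, that multiplication in K* becomes addition in log space, can be checked numerically. A small sketch (helper names are mine): take the log of the absolute value of each field embedding and verify that squaring an element doubles its log vector.

```python
import cmath
import math
from math import gcd

def embeddings(coeffs, m):
    """Evaluate the coefficient vector at every primitive m-th root of unity."""
    return [sum(c * cmath.exp(2j * cmath.pi * k * i / m)
                for i, c in enumerate(coeffs))
            for k in range(1, m) if gcd(k, m) == 1]

def log_embedding(coeffs, m):
    """Log of the absolute value of each embedding: K* -> R^{phi(m)}."""
    return [math.log(abs(s)) for s in embeddings(coeffs, m)]

# Log(x^2) = 2 * Log(x): square 1 + omega for m = 5,
# i.e. (1 + omega)^2 = 1 + 2*omega + omega^2.
lx = log_embedding([1, 1], 5)
lsq = log_embedding([1, 2, 1], 5)
print(all(abs(2 * a - b) < 1e-9 for a, b in zip(lx, lsq)))  # True
```

Applied to a unit u in O*, the same map sends u to a point of the log-unit lattice, which is why the set Log(g O*) is a translate of that lattice.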
The logarithmic unit lattice has been studied very thoroughly, and we know in particular a full-rank set of short vectors in it, which can be used to solve the closest vector problem with a sufficient approximation factor. These vectors come from what we call cyclotomic units. So to understand how short the generator we find is, we need to understand how precisely we can actually solve this instance of the closest vector problem. Okay, so this is for the part dealing with ideals that are principal, that have a generator. What about the general case? Now suppose you're given an arbitrary ideal: it's not principal, it doesn't have a generator, so you cannot look for a short generator. We're going to find a short vector in this ideal. How short? Well, the same quantity, with here the length of the shortest vector and here the approximation factor we're trying to reach. To do this, the idea is to reduce again to the principal case, and so what is done in Cramer et al. 2017 is to find a small ideal B such that the product AB is principal. Since B is small, this product AB is in some sense close to A: it's A times something small, and it's principal, so we call AB a close principal multiple. Once you have that, you use the previous part of what I presented: you find a short generator of this principal ideal AB, and this short generator is also a short vector of A. Okay, so how do we find these close principal multiples? First you have to transform your problem again into a lattice problem. I'm not going to be very precise here, but just to give you a rough idea of the main steps: we first need to solve an instance of the discrete logarithm problem in the class group of our ring, in order to represent our ideal A as a point in some lattice L. The points of this lattice L represent ideals, and we want to find which point of the lattice A corresponds to. You do this with a discrete logarithm computation.
Once you've done that, you can look at the sublattice of L which corresponds to principal ideals. So P is a sublattice of L containing the principal ideals, and since we are looking for a principal ideal close to A, it's only natural to try to solve the closest vector problem with respect to this point A, seen as a vector of the lattice L, and the sublattice P of principal ideals. So you're again reduced to solving an instance of the closest vector problem: you're going to find a vector of P, so a principal ideal, that is close to A. And again you have to wonder: can we solve this closest vector problem? It's supposed to be difficult if you don't know enough about your lattice. Well, it's been shown that you can solve this CVP using Stickelberger's theorem, a pretty old theorem that allows you to build a good basis of the so-called Stickelberger lattice, which is a sublattice of this lattice P encoding principal ideals. This basis is good enough to solve the closest vector problem with a good enough approximation factor; this is the result of Cramer et al. in 2017. And again, to understand how short the vectors we find are, we need to understand how well we can solve this instance of the closest vector problem. So yeah, how short are the vectors that we find? I already told you: we find vectors of this Euclidean norm, where here is the length of the shortest vector and here is the approximation factor that we're trying to reach. But this is not extremely satisfying, and the reason is simply this big O. With this big O, we don't know much about what's actually happening with concrete parameters. Worse than that, the big O is in the exponent, so it has an enormous impact. So we're trying to understand the hidden constants in there, and to derive from that when these algorithms start outperforming the classic methods LLL and BKZ. We can try to do that by simulating the algorithm.
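The reason a good basis makes approximate CVP easy can be illustrated with Babai's round-off algorithm: write the target in the given basis, round the coefficients, and return the resulting lattice vector. The quality of the answer degrades with the quality of the basis. This is a toy two-dimensional sketch with made-up data, not the actual Stickelberger lattice.

```python
def babai_round_off(basis, target):
    """Babai round-off in dimension 2: express the target in the given
    basis, round the coordinates, return the nearby lattice vector.
    Uses an explicit 2x2 inverse to stay dependency-free."""
    (a, b), (c, d) = basis          # basis rows b1 = (a, b), b2 = (c, d)
    det = a * d - b * c
    x, y = target
    c1 = (x * d - y * c) / det      # real coefficients of target
    c2 = (y * a - x * b) / det      # in the basis (b1, b2)
    k1, k2 = round(c1), round(c2)   # round to integers
    return (k1 * a + k2 * c, k1 * b + k2 * d)

# A fairly orthogonal ("good") basis gives a close lattice point.
v = babai_round_off([(1.0, 0.0), (0.3, 1.0)], (2.4, 3.1))
print(v)
```

With a very skewed basis of the same lattice the rounding errors blow up, which is exactly why the short Stickelberger basis (and the cyclotomic-unit basis of the log-unit lattice) matters for the approximation factor.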
We run the algorithm, we see what the quality of the output is, and we compare that to what could have been done with LLL or BKZ. The problem is that these algorithms are quantum, so we cannot actually run them. We need to find a way to simulate them without having a quantum computer. So let's identify the quantum steps. Here's a summary of the algorithm. As I already told you, we start by solving an instance of the discrete logarithm problem in the class group. This should already tell you that you need a quantum computer to do that: the class group is a pretty big group, we don't know much about it, and even computing its structure requires a quantum computer if you want to work in polynomial time. Once you've done that, you need to solve an instance of the closest vector problem in the Stickelberger lattice to find a small ideal B such that AB is principal. This can be done classically; it's an instance of CVP, and this step is essentially the close principal multiple problem. The third step is to find an arbitrary generator of this principal ideal AB. Again, you need a quantum computer. It might not be as obvious as for the discrete logarithm, but trust me, to find an arbitrary generator in polynomial time you will need your quantum computer. Then finally, to find a short generator: g is an arbitrary generator, and to find a short one you solve a closest vector problem, this time with respect to the logarithmic unit lattice, and then h is your output. So we have these two annoying quantum steps, and then to understand how short the output is, you need to understand first how small the ideal B found in step two is, and how short the generator h found in step four is. So how do we get rid of these quantum steps? Well, we're going to do something very simple: we just assume that their output is uniformly distributed.
Now, that sounds like a really strong assumption, but it can actually be made rigorous by re-randomizing the outputs. Every time you have a quantum step, you take its output and re-randomize it to make sure it's uniform; you solve the classical step, and then you de-randomize to make sure you got the correct result. Then we can always assume that the classical steps have an input that is uniformly distributed. This can be done rigorously, more or less, by studying random walks in the class group, things like this. Okay, so we can remove these quantum steps, replace them by random oracles that just give you random outputs, and then see what happens with the closest vector steps. So we have two CVP instances: in step two, with respect to the Stickelberger lattice, and in step four, with respect to the logarithmic unit lattice. For these lattices we know explicit short bases, so we can actually experiment with them: we can run numerical simulations, and we can prove theoretical lower bounds. So here are our results. On the horizontal axis you have the dimension of the lattice you're working on, and on the vertical axis you have a measure of the quality of the output: the root Hermite factor. It's not exactly the approximation factor; it is essentially the approximation factor raised to the power one over the dimension of the lattice. Choosing the root Hermite factor allows us to draw horizontal lines for the classic algorithms. So this line labeled LLL, for instance, says that in whatever dimension you achieve a root Hermite factor of about 1.022. It doesn't mean that LLL will give you as good an approximation in any dimension, because remember that this quantity is raised to the power one over the dimension. The other horizontal lines are BKZ for different block sizes: the bigger the block size, the costlier the algorithm, but the better the quality of the output.
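The bookkeeping behind those horizontal lines is simple enough to sketch: a fixed root Hermite factor delta in dimension n corresponds to an approximation factor of roughly delta**n, so LLL's flat line at about 1.022 still means exponentially long vectors as the dimension grows.

```python
# Root Hermite factor delta vs. approximation factor ~ delta**n:
# a constant delta on the plot hides exponential growth in n.
def approx_factor(delta, n):
    """Approximation factor implied by root Hermite factor delta in dimension n."""
    return delta ** n

for n in (128, 512, 2048):
    print(n, approx_factor(1.022, n))
```

This is why the quantum algorithms, whose approximation factor is sub-exponential, must eventually dip below any such horizontal line; the whole question of the paper is at which dimension that actually happens.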
In blue here you have the results of simulations for a plain implementation of the quantum algorithms, a naive implementation, meaning implemented just as described in the original articles, without trying to do any kind of optimization. The red lines are the same algorithms but with a number of heuristic improvements: for instance, instead of just using a short basis of the lattices we are looking at, we exploit the fact that we don't only know a basis, we know a very large set of short vectors. The number of short vectors we know in these lattices is much larger than the dimension, and we can exploit that using appropriate CVP algorithms. This is a very worthwhile improvement, because as you can see, the red curve is way better than the blue curve. The brown curve is a theoretical limit, a theoretical lower bound on what could be achieved by this family of algorithms assuming that you have a perfect CVP solver. So the blue and red curves are actual numerical simulations using state-of-the-art algorithms for solving CVP, and the brown line assumes a perfect solver, so it shouldn't be possible to go below the brown line if we stay within the same family of algorithms. The interesting points in this graph are the crossover points with the classical algorithms. For instance, look at BKZ with block sizes 120 and 160, which are essentially the limit of what's achievable today: 120 is feasible in a few days, and 160 is probably feasible if you're extremely rich and have a little bit of time. What we can see is that our heuristically improved algorithm will not give better results than these until dimension about 6,000.
For context, I think the largest dimension of a cyclotomic ideal lattice that appears in the NIST competition is about 1,000, so this is an order of magnitude bigger. And if you're worried about what could be possible in the future, if we have very good CVP solvers, you can look at the brown line: you can see that it won't cross BKZ 300 until, again, dimension about 6,000, and block size 300 is essentially the first security level in the NIST competition. Okay, so these are the references I used. We have a few minutes for questions. Okay, I'll ask one. Can you comment on the difference between the actual instances of ideal SVP that you were studying here and the kinds of problems that appear in cryptographic systems? What are the differences? Yeah. In the numerical simulations that I showed here, one difference is that we are looking at cyclotomic fields of prime conductor, whereas usually you would use cyclotomic fields of power-of-two conductor. We do also study the power-of-two case, but it's not in this graph, and the reason is that powers of two are way too sparse to get good extrapolations of what you see here. What are the other differences? I don't know. The dimension is obviously way bigger than what you would actually find in practice, at least in this setting. Well, my question is: are ideal lattices in cyclotomic number fields actually appearing in proposed cryptosystems, and which ones? Yeah, breaking ideal SVP wouldn't actually break the cryptosystems, for many reasons. First, most cryptosystems actually rely on problems that are supposedly stronger, like Ring-LWE; even if you can solve this, it doesn't mean you can break the cryptosystem, because the reduction is only in one direction. Does that answer your question? Yeah. Any other questions? Okay, let's thank Benjamin again.