Hello everyone, I'm Weiqiang, and in this talk I'm going to present a faster enumeration-based lattice reduction, which can be used to reach a root Hermite factor k^(1/(2k)) in time k^(k/8 + o(k)). This is joint work with Martin Albrecht, Shi Bai, Pierre-Alain Fouque, Paul Kirchner, and Damien Stehlé.

In more detail, we give a new enumeration-based lattice reduction, and compared to the prior ones of the same type, we reach the same quality with a smaller time complexity than before. When the input lattice has a large enough dimension, we can prove this improvement under a heuristic assumption. When the dimension is small, we can still show in our simulations that the improvement holds for a practical variant of the algorithm. I will give more details about this later.

To continue, I first need to introduce some necessary background about lattices. A lattice is a set of regularly distributed points, like the blue points here, which can be generated by a set of linearly independent vectors, like b1 and b2 here. Such a set is known as a basis of the lattice, and the same lattice can have arbitrarily many different bases, like c1 and c2 here, which form another basis of the same lattice. There are also two important invariants of a lattice: the first minimum, which denotes the norm of a shortest nonzero vector in the lattice, and the volume of the lattice, which can be computed as the determinant of any basis of the lattice.

One of the most interesting and important problems defined on lattices is known as the shortest vector problem (SVP): given a basis of the lattice, you are asked to find a lattice vector with norm equal to the first minimum. In this work, we consider a variant of this problem called approximate Hermite-SVP: again you are given a basis, but now you are asked to find a short vector with norm upper-bounded by the normalized volume of the lattice, vol(L)^(1/n), up to some factor gamma. By Minkowski's upper bound, we know that the first minimum is upper-bounded by the normalized volume of the lattice up to a factor sqrt(n). So whenever you have an n-dimensional SVP solver, you correspondingly have an n-dimensional approximate Hermite-SVP solver with approximation factor sqrt(n).

To solve this problem, the best known approach is to reduce the given basis to obtain a good one. To quantify the quality of a basis, one can use the so-called Hermite factor, which is computed as the norm of the shortest basis vector, normally the first one, divided by the normalized volume of the lattice. Once you can reduce the basis to a smaller Hermite factor, you can use it to solve the approximate Hermite-SVP problem with a smaller approximation factor. In practice, the best known algorithm for reducing the basis is the BKZ lattice reduction. To quantify how good a lattice reduction algorithm is, one can use the so-called root Hermite factor; it is a version of the Hermite factor normalized by the dimension of the lattice, and this gives a quantity which is essentially independent of the dimension. If you are further interested in the concrete effect of a lattice reduction, you can look into the Gram-Schmidt orthogonalization of the basis.
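To make the quality measures just defined concrete, here is a minimal numpy sketch (my own, not from the talk) computing the Hermite factor and root Hermite factor of a basis; it assumes a full-rank square basis whose rows are the basis vectors.

```python
import numpy as np

def root_hermite_factor(B):
    """Root Hermite factor of a basis B (rows are the basis vectors).

    Hermite factor: ||b_1|| / vol(L)^(1/n), with vol(L) = |det(B)|;
    the root Hermite factor is its n-th root, which is roughly
    independent of the dimension for a fixed reduction strength.
    """
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    vol = abs(np.linalg.det(B))
    hermite = np.linalg.norm(B[0]) / vol ** (1.0 / n)
    return hermite ** (1.0 / n)

# Example: the identity basis of Z^4 has ||b_1|| = 1 = vol^(1/4),
# so both factors equal 1.
print(root_hermite_factor(np.eye(4)))  # 1.0
```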
So intuitively, lattice reduction improves the basis such that, for the resulting basis, the Gram-Schmidt vectors, like b1* and b2* here, have norms closer to each other than before, that is, than the norms of c1* and c2*. Next, I would like to first recall the BKZ lattice reduction algorithm, and then see how it differs from our new algorithm. To continue, I need one more notation, the orthogonal projection: here, for example, I will write pi_2(b3) to denote b3 after removing its projection onto the space spanned by the first two basis vectors.

So now here comes the BKZ algorithm. The inputs include a basis B, denoted b1 to bn, and also a block size k, denoting how strong the lattice reduction is going to be. The BKZ algorithm starts by running an SVP solver over the first block, from b1 to bk, then uses the short vector found in the first block to update the first basis vector, and moves to the second block. Again, it uses the short vector found in the second block to update the second basis vector, then moves to the next block, and so on and so forth. By the second block, I mean the second block after removing the projection onto the first basis vector. So the BKZ algorithm runs SVP from the first block to the last block, and repeats this process sufficiently many times, until for each block the first vector reaches the first minimum of the corresponding block.

To evaluate the cost of the BKZ algorithm, one can instead look at the cost of the underlying SVP solver, which is the most costly step inside BKZ. The two most practical SVP solver families are implemented by sieving and enumeration respectively. The ones implemented by sieving take exponentially large space, while for enumeration it is just small polynomial space. But for enumeration the running time is super-exponential, while for sieving it is just exponential; even though the dominating constant in the exponent for enumeration is smaller than the one for sieving, the extra log k factor will anyway eliminate this advantage once k is large enough. In this work we focus on enumeration-based SVP solvers, so we are in the regime of small polynomial space but super-exponential running time. In our simulations, we always refer to the extreme pruning of Gama, Nguyen and Regev (2010) for an efficient implementation of the enumeration.

Now for the comparison between our result and the prior ones, including BKZ and SDBKZ. SDBKZ is a variant of BKZ; very roughly, you can picture it as running BKZ not only over the given basis but also over the dual of the given basis. For BKZ or SDBKZ relying on an enumeration-based SVP solver, to reach a quality like root Hermite factor k^(1/(2k)), the time complexity is dominated by the underlying SVP solver, which is k^(k/(2e) + o(k)), as mentioned before. In this work, for reaching the same quality, i.e. the same root Hermite factor, we achieve a smaller time complexity, namely k^(k/8 + o(k)).

Before I go into more details of our new solution, I would like to first review both the quality and the time complexity of the prior approaches, and then see step by step how our new algorithm differs from them. First, the quality of the prior ones. Suppose this is the Gram-Schmidt log-norm profile of the reduced basis after running SDBKZ with a size-k SVP solver.
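Since everything from here on is phrased in terms of this profile, here is a small numpy sketch (mine, for illustration; real reduction libraries maintain these quantities incrementally) that computes the Gram-Schmidt log-norms and the successive ratios delta_i from a basis:

```python
import numpy as np

def gs_log_profile(B):
    """Gram-Schmidt log2-norms log2||b_i*|| of the rows of B."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for j in range(B.shape[0]):
        v = B[j].copy()
        for i in range(j):  # remove projections onto earlier b_i*
            v -= (B[j] @ Bstar[i]) / (Bstar[i] @ Bstar[i]) * Bstar[i]
        Bstar[j] = v
    return np.log2(np.linalg.norm(Bstar, axis=1))

def slopes(log_profile):
    """delta_i: ratios ||b_i*|| / ||b_{i+1}*||, via log-norm differences."""
    return [2.0 ** (log_profile[i] - log_profile[i + 1])
            for i in range(len(log_profile) - 1)]

# Example: for an orthogonal basis the profile is flat and all delta_i = 1.
print(slopes(gs_log_profile(np.diag([4.0, 4.0, 4.0]))))  # [1.0, 1.0]
```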
In this case, the quantity delta_i, denoting the ratio between two successive Gram-Schmidt norms, is well studied in the paper of Micciancio and Walter from 2016, which tells us that delta_i is fixed across the indices i outside the last block, so it gives a straight line here outside the last block. I will also call this delta_i the slope of this line. In this work, we consider a slightly different variant of this result: we replace the underlying SVP solver with an approximate Hermite-SVP solver of the same size. If you run SDBKZ with the approximate Hermite-SVP solver, again of size k, then you again obtain a straight line outside the last block, and we also know the slope of this line: it is equal to the square of the root Hermite factor achieved by the given approximate Hermite-SVP solver.

But this is not the case for BKZ. For BKZ, the quantity delta_i is not fixed across the indices i, so it does not give a line; we refer the audience to the appendix of our paper on ePrint for more details on the case of BKZ. Because of this good property of SDBKZ, we choose it as a subroutine in our new algorithm. This does not mean it is impossible to use BKZ to reach the same result as with SDBKZ here; we leave that as future work.

Next, let's move to the cost, and let's focus on the last block. To ensure the first vector of the last block reaches the first minimum of this block, one needs to run an enumeration over this full block. This is known to be realized by Kannan's algorithm: one first reduces this block so that, except for the first position, each position from the second to the last reaches the first minimum of the corresponding block from that position to the end, and the last step is a full-dimensional enumeration over the whole block. The time complexity was analyzed by Hanrot and Stehlé in 2007 and is given as k^(k/(2e) + o(k)); this is also known as the worst-case enumeration cost over a rank-k lattice. But this worst-case cost is not what you pay for every block. For example, for the first block, over a straight line, the enumeration cost is only k^(k/8 + o(k)). This gap sustained a long-standing conjecture that it should be possible to achieve the same quality as SDBKZ, but with a reduced overall time complexity, namely k^(k/8) instead of k^(k/(2e)). In this work, we give a positive answer to this conjecture.

Before I continue, I would like to make one more remark about this last block. In the following, I will always assume that for this last block, each position reaches the first minimum of the corresponding block from that position to the end; this corresponds to the so-called HKZ-reduced basis, and that is also why I will call the corresponding curve here the HKZ curve. Two remarks: first, this is not promised by the SDBKZ reduction; second, this additional assumption does not introduce a larger overall time complexity.

So now let's move to our solution. How can we do better? We know that the main obstacle is the enumeration cost over this last block of size k, which takes k^(k/(2e)). So instead of running SDBKZ with a size-k SVP solver, we now try to run SDBKZ with an SVP solver of reduced size.
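To see how much is at stake between these two exponents, here is a quick numeric check (my own) of the dominating terms for k = 1000, ignoring the o(k) parts:

```python
import math

k = 1000
log2_worst = (k / (2 * math.e)) * math.log2(k)  # exponent bits of k^(k/(2e))
log2_line = (k / 8) * math.log2(k)              # exponent bits of k^(k/8)
print(f"k^(k/(2e)) ~ 2^{log2_worst:.0f}, k^(k/8) ~ 2^{log2_line:.0f}")
# -> k^(k/(2e)) ~ 2^1833, k^(k/8) ~ 2^1246  (lower-order terms ignored)
```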
We choose the reduced size so that the enumeration cost over it is well controlled within our target cost. But this also introduces a straight line outside the last block with a larger slope, such that the first basis vector now has a larger norm than before; it achieves a worse Hermite factor and root Hermite factor. So we have to continue to reduce this basis such that, for the resulting basis, the first basis vector achieves the same norm as before, i.e. the same Hermite factor and root Hermite factor. This is our target.

In more detail, as promised, we now run SDBKZ with an SVP solver of reduced size, and I will denote the reduced size by k0. It equals k times 2e/8, approximately 0.68 k, such that the worst-case enumeration cost over this reduced size is well controlled within our target cost. As you can see here, and as you will also see in the following, different block sizes like k0 will get involved in our new algorithm; here k only serves as a cost parameter instead of a block size. Again by Minkowski's upper bound, a k0-dimensional SVP solver implies a k0-dimensional approximate Hermite-SVP solver with approximation factor sqrt(k0). This means that for a k0-dimensional lattice, we can already reach a Hermite factor sqrt(k0) and the corresponding root Hermite factor. Together with the relation between k0 and k, we can derive our starting root Hermite factor, roughly k^(1/(1.36 k)). This is, of course, much larger than our target root Hermite factor k^(1/(2k)).

So we already have a starting approximate Hermite-SVP solver reaching this starting root Hermite factor in time k^(k/8). Next, from this starting solver, we aim to construct a new approximate Hermite-SVP solver, with a new approximation factor over a new dimension, still running in time k^(k/8), while reaching a smaller root Hermite factor. If we can achieve this, then we can repeat the process sufficiently many times until we reach the target root Hermite factor. This is the general idea of our new algorithm.

In the following slide, I detail the first step. Here, for example, I take k = 1000: we start with SVP over a reduced size around 680, check that the worst-case enumeration cost over this reduced size stays within the expected cost, and make a note of the starting root Hermite factor.
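For concreteness, here is a small sanity computation (the rounding and the exact constants are my own reading of the construction) of this starting point for k = 1000:

```python
import math

k = 1000
k0 = round(2 * math.e / 8 * k)        # reduced block size, ~0.68 * k
gamma0 = math.sqrt(k0)                # Minkowski: sqrt(k0)-approx Hermite-SVP
rhf_start = gamma0 ** (1.0 / k0)      # = k0^(1/(2*k0)), roughly k^(1/(1.36k))
rhf_target = k ** (1.0 / (2 * k))     # target quality: k^(1/(2k))
print(k0, rhf_start, rhf_target)
# -> 680 1.0048... 1.0034... : the starting factor is indeed larger
```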
Now here comes the first step. We feed the SDBKZ oracle with our starting approximate Hermite-SVP solver. This lets us reduce a basis of larger dimension, and the corresponding Gram-Schmidt log-norm profile has one straight line outside the last block, with slope equal to the square of the root Hermite factor achieved by the given solver. Knowing the slope, and the restriction that the enumeration over this region stays within budget, we can determine the largest size over which we can enumerate along this straight line, over the green region. Together with the starting dimension k0, this gives the next dimension k1. Again by Minkowski's upper bound, we know an upper bound on the norm of the solution returned by the enumeration over this straight line, and together with the volume of the whole lattice, we can compute the new Hermite factor and the new root Hermite factor. As you can see here, it is getting smaller. With this new approximate Hermite-SVP solver, reaching a smaller root Hermite factor, we feed the SDBKZ oracle again, which lets us reduce a basis of even larger dimension. In the corresponding Gram-Schmidt log-norm profile, you can see a new segment of line with an even smaller slope, and this even smaller slope yields an even smaller root Hermite factor. Repeating this process sufficiently many times, we eventually approach our target root Hermite factor. So this is our general idea in more detail.

Next, I would like to give an intuition about the overall cost of our new algorithm. First, we know that the enumeration cost at each iteration is bounded by k^(k/8), by design. If we only need a logarithmic number of iterations to approach the target root Hermite factor, then the overall time complexity is k^(k/8 + o(k)), and we can actually prove that this is indeed the case. Next, if we look closer at the Gram-Schmidt log-norm profile of the basis reduced by our new algorithm, we see different segments of line with different slopes. This is different from the profile generated by SDBKZ, which has only one straight line with one global slope. Together with the picture below: as the algorithm proceeds, the slope gets smaller and smaller. This matches our intuition about lattice reduction, which improves the basis so that the Gram-Schmidt norms get closer and closer to each other; that is why the ratio between two successive Gram-Schmidt norms keeps shrinking. As the algorithm proceeds, we eventually reach a segment of line with slope approaching the one generated by SDBKZ, and this slope yields the expected root Hermite factor. One more remark, about the root Hermite factor after ten iterations, where ten is the logarithm of the cost parameter k for k = 1000: after this logarithmic number of iterations, the root Hermite factor has already converged very well. This matches our intuition that a logarithmic number of iterations suffices to approach the target root Hermite factor.

Next, I would like to give the full description of our new algorithm, which we call the fastenum algorithm. It aims to construct an approximate Hermite-SVP solver with approximation factor gamma_i, where i is the iteration level; the other inputs include the cost parameter k and a basis B. In the first iteration, you just run the worst-case enumeration over the reduced size k0, whose cost stays within the expected cost. In any later iteration, you preprocess with SDBKZ using the approximate Hermite-SVP solver from the previous iteration, and then run an enumeration over the region corresponding to the first segment of line.
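Putting the recursion together, here is a structural sketch in Python (my own, heavily simplified); hkz_enum, sdbkz, and line_enum are stand-ins for the enumeration and SDBKZ routines, not the paper's actual interfaces:

```python
def fast_enum(i, k, B, hkz_enum, sdbkz, line_enum):
    """Level-i approximate Hermite-SVP solver (structural sketch only).

    Level 0 runs the worst-case (HKZ-shaped) enumeration in the reduced
    dimension k0 ~ (2e/8)k; level i > 0 preprocesses with SDBKZ driven
    by the level-(i-1) solver, then enumerates over the first straight-
    line segment of the Gram-Schmidt profile.
    """
    if i == 0:
        # cost k0^(k0/(2e)) = k^(k/8 + o(k)) by the choice of k0
        return hkz_enum(B)
    solver = lambda C: fast_enum(i - 1, k, C, hkz_enum, sdbkz, line_enum)
    B = sdbkz(B, solver)   # profile: straight line outside the last block
    return line_enum(B)    # enumerate over the first line segment, in budget
```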
For the analysis, we need a heuristic assumption: we assume that each gamma-approximate Hermite-SVP solver returns an answer whose norm is exactly gamma times the normalized volume of the corresponding block, and not smaller. Formally, we can then prove that omega(1) iterations suffice to approach the expected root Hermite factor up to a factor 1 + o(1), in time k^(k/8 + o(k)) times some polynomial in the size of the input basis B. If you still remember, for each iteration we need to work over a lattice whose dimension is roughly k larger than before, so with omega(1) iterations we need the overall dimension to be roughly an omega(1) factor larger than k. But this is not satisfiable for NIST cryptosystems like the NIST candidates, where the dimension n is required to be relatively close to k, i.e. n/k is a constant.

So instead of running the enumeration over this 100% straight line, we propose a practical variant where the enumeration region also covers the HKZ curve: we now enumerate over a combination of the straight line and part of the HKZ curve. The question becomes how to distribute the proportion of the straight line and the proportion of the HKZ curve. To answer this, we estimated the cost for different distributions. For example, c = 0 means enumerating over the HKZ curve only, which is the worst-case cost, while c = 1 corresponds to enumerating over the 100% straight line. As you can see here, c = 0.25 gives the lowest cost, which is why we choose this value for our practical variant.

The second issue is the tail block. For each middle block, you can always preprocess it into this straight line plus HKZ curve shape and then enumerate over the prescribed shape, 25% straight line and the rest HKZ curve, to reach the lowest cost. But for the tail block, you can no longer enumerate over this prescribed shape, so the enumeration cost could be much larger than the expected cost. We therefore have to reduce the size of this block. This certainly gives worse quality for this local block, but as we will see in the simulation, the modification does not affect the global quality a lot. These two modifications together give our practical variant.

In the simulation, the simulated cost of the practical variant fits the k^(k/8) curve very well. You can also see a dashed line even below the k^(k/8) curve: it corresponds to the last-step enumeration cost without accounting for any preprocessing cost, and I will say more about this later. In the picture below, you can see the quality: the root Hermite factor achieved by the practical variant is at least as good as the one achieved by standard BKZ.
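For readers who want to reproduce this kind of cost curve, the standard way to estimate enumeration cost from a Gram-Schmidt profile is the Gaussian heuristic of Gama-Nguyen-Regev; here is a rough estimator (my own simplified code, not the paper's simulator) that can be used to compare the c = 0, 0.25, and 1 shapes:

```python
import math

def log2_enum_nodes(log2_gs, log2_radius):
    """log2 of the widest level of the enumeration tree (Gaussian heuristic).

    log2_gs: Gram-Schmidt log2-norms log2||b_1*||, ..., log2||b_n*||;
    log2_radius: log2 of the enumeration radius R. Level d holds about
    H_d = V_d(R) / prod_{i > n-d} ||b_i*|| nodes, and the total cost is
    dominated by the largest level.
    """
    n = len(log2_gs)
    best = 0.0
    for d in range(1, n + 1):
        # log2 of the volume of the d-dimensional ball of radius R
        log2_ball = (d * log2_radius + (d / 2) * math.log2(math.pi)
                     - math.lgamma(d / 2 + 1) / math.log(2))
        log2_h = log2_ball - sum(log2_gs[n - d:])
        best = max(best, log2_h)
    return best
```

Feeding in a straight-line profile versus an HKZ-shaped tail reproduces the qualitative gap between the c = 1 and c = 0 costs discussed above.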
To conclude: for reaching the same root Hermite factor, we achieve a smaller time complexity, k^(k/8 + o(k)) instead of k^(k/(2e) + o(k)). Considering quantum acceleration of the enumeration, we can get a further factor-of-two improvement in the exponent. This improvement is supported by different evidence in different parameter regimes: when the dimension is large enough, i.e. n/k in omega(1), we can prove the improvement under a heuristic assumption; when the dimension is small, i.e. n/k a constant such as 2, the improvement is supported by our simulation analysis of the practical variant.

For future work: first, it would be interesting to remove the heuristic assumption used in our analysis. It may be possible to follow the work of Hanrot and Stehlé 2007 together with either Hanrot, Pujol and Stehlé 2011 or Neumaier 2017; in those works they managed to remove heuristic assumptions from the time-complexity analysis of BKZ. Second, it would be interesting to extend our work to other lattice reduction algorithms, like BKZ or slide reduction. For BKZ, as already mentioned, the profile does not give a straight line outside the last block, which introduces additional complications for the analysis. Slide reduction would be even more interesting: the expectation is to reach the same improvement in the time complexity, from k^(k/(2e)) to k^(k/8), while maintaining the quality achieved by slide reduction. Further, without the preprocessing cost, the last-step enumeration cost can be even below k^(k/8), so it seems possible to trade off the preprocessing cost against the last-step enumeration cost so that the overall cost drops below k^(k/8) while maintaining the same quality as we achieve now, or, in other words, to spend the same cost while reaching even better quality.

Last is the cryptographic relevance of this work. In this work, we already made some first attempts at the small n-over-k regime, which corresponds to current NIST cryptosystems, i.e. the NIST candidates. But there we only have a simulation result without a formal analysis, so it would be very interesting to see a formal analysis confirming that we indeed have this improvement from k^(k/(2e)) to k^(k/8) in the small n-over-k regime. After that, it would be interesting to determine the new concrete crossover point between our new algorithm and the state-of-the-art sieving-based lattice reduction, for concrete cryptographic parameters in both the classical and the quantum setting.

This completes my talk, and thank you for your attention.