My talk is about the latter one: lower bounds on the cost of lattice enumeration.

The background motivation is long-term security for lattice-based cryptosystems. As you may know, the NIST post-quantum standardization project is going on, and they want schemes with long-term security for the next 20 or 30 years. For this purpose we need to understand the performance of the attacking algorithms against these cryptosystems. The majority of the candidates are lattice-based, so this is my motivation.

The cost for the attacker is roughly the algorithmic cost divided by the computing power. The current values of these two quantities give an upper bound on this ratio, and the cost is lower bounded by their limits. A lot of effort has been made on the upper bound side, for example new algorithms and increases in computing power, but few efforts have been made to establish lower bounds. So we consider the lower bound: roughly speaking, the limitation of this ratio, and in particular the limit of algorithm efficiency.

Now, it is a hard problem to prove a limit on the efficiency of every possible attacking algorithm for a given problem. That would be very useful, but it is very, very hard, probably similar to the P versus NP problem. So for now, I think we have to understand the limits of the major algorithms. For example, for the sieve algorithm there are heuristic lower bounds in both the classical and the quantum settings, but nothing was known for the enumeration algorithm. We have solved this problem, and that is the topic of this talk.

Here is the outline of the theoretical result. We have proved a lower bound on the cost of pruned lattice enumeration, the enumeration used to solve SVP or BDD. The advantages of our result are that it is easy to compute and that the lower bound is reasonably close to the upper bound: experimentally, the gap is roughly less than 20% in the exponent. Our result is for classical enumeration and can be adapted to quantum enumeration. On the other hand, we do not know how to adapt our technique to other algorithms such as discrete pruning or sieving; this is the major open problem right now.

Next is the outline of the application result. These graphs show lower bounds on the hardness of SVP-β, both classical hardness and quantum hardness. For pruned enumeration we consider two scenarios for the progress of lattice reduction algorithms: the green line is the state of the art, and the red line is a conservative setting, where we assume that someone will find a very, very strong lattice reduction algorithm. As you can see, in the classical setting the curve of the sieve algorithm is lower, so the sieve algorithm is better; but in the quantum setting the lower bound for enumeration is below that of the sieve algorithm. This means that conservative designers may need to update some parameters. That is the outline of the results.

Now we go to the technical part. Here is the outline of the enumeration algorithm, which is the core part of lattice reduction algorithms. Given a basis and a radius, it enumerates the short lattice vectors within that radius; concretely, it is a depth-first search of the enumeration tree determined by the input basis and the given radius, as in the sketch below.
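Since the slide only gestures at the tree search, here is a minimal sketch of enumeration as a depth-first search, in C++. Everything here is illustrative: the GSO struct and enum_dfs are my own naming, the code assumes the Gram-Schmidt data of the basis is precomputed, and real implementations use the optimized Schnorr-Euchner ordering rather than this plain recursion.

```cpp
// Illustrative sketch of lattice enumeration as a depth-first search.
// Assumes precomputed Gram-Schmidt data; not an optimized implementation.
#include <cmath>
#include <cstdio>
#include <vector>

struct GSO {
    std::vector<std::vector<double>> mu;  // mu[i][j] for j < i
    std::vector<double> bstar_sq;         // squared Gram-Schmidt norms
};

// Search levels k = n-1, ..., 0; x holds the coefficients chosen so far,
// and partial is the squared length contributed by the levels above k.
void enum_dfs(const GSO& g, double radius_sq, int k, std::vector<double>& x,
              double partial, std::vector<std::vector<double>>& out) {
    const int n = static_cast<int>(g.bstar_sq.size());
    if (k < 0) { out.push_back(x); return; }  // leaf: full coefficient vector
    double center = 0.0;                      // interval center for x[k]
    for (int j = k + 1; j < n; ++j) center -= g.mu[j][k] * x[j];
    const double width = std::sqrt((radius_sq - partial) / g.bstar_sq[k]);
    for (long c = std::lround(std::ceil(center - width));
         c <= std::lround(std::floor(center + width)); ++c) {
        const double d = c - center;
        x[k] = c;
        enum_dfs(g, radius_sq, k - 1, x, partial + d * d * g.bstar_sq[k], out);
    }
}

int main() {
    GSO g{{{}, {0.0}}, {1.0, 1.0}};  // toy basis: Z^2 (identity)
    std::vector<double> x(2, 0.0);
    std::vector<std::vector<double>> out;
    enum_dfs(g, 1.0, 1, x, 0.0, out);
    std::printf("%zu coefficient vectors found\n", out.size());  // 5, incl. 0
    return 0;
}
```

Note that the zero vector is included among the leaves here; a real enumerator skips it and keeps only the shortest nonzero vector found.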
One drawback of this algorithm is that it is too slow if we want the exact version. But if we accept a probabilistic algorithm, we know it can be made much faster than the exact one.

Next is the Gaussian heuristic. Consider a lattice L in n-dimensional space and a nicely shaped object S in the same space. Then the number of lattice points inside S is approximated by the ratio vol(S)/vol(L). This approximation can be used to estimate the number of nodes in the enumeration tree, and hence the cost of enumeration. Under the Gaussian heuristic, the total cost of tree enumeration is approximated by the sum over the levels k = 1, ..., n of the fraction vol(C_k)/(||b*_{n-k+1}|| · ... · ||b*_n||), where the denominator is the covolume of the projected lattice at level k. Here C_k is an object defined in the GNR paper and named the cylinder intersection; formally, C_k(R_1, ..., R_k) = { (x_1, ..., x_k) : x_1^2 + ... + x_j^2 <= R_j^2 for all j <= k }, so roughly speaking it is an intersection of cylinders in k-dimensional space. You can see an example of a three-dimensional cylinder intersection on the slide: the formula is here and the shape is around here.

With this preparation, the cost of pruned enumeration is given by the minimum of this optimization problem, subject to the constraint that the success probability, which is also given by a volume ratio, reaches the target. In other words, to find the cost of pruned enumeration we have to optimize over the n radii R_1, ..., R_n, and this is not an easy problem.

It is not easy, but the effect of the strategy is very nice: for example, even at 50% success probability, the speed-up is about 10^10. This is much faster than the exact algorithm, and that is the advantage of the GNR strategy. On the other hand, there are drawbacks. First, there is no efficient method to find the optimal radii R_1, ..., R_n; to tackle this we proposed a variant of the cross-entropy method, but it is not enough: the shape of the resulting curve looks good, but we do not know whether it is optimal. Second, no non-trivial lower bound on the cost was known. Of course a naive lower bound is possible, but it is not useful. For this problem we give the first lower bound result; this is our contribution.

Next is isoperimetry. What is isoperimetry? It is the geometric tool we use to show our lower bound. Formally, it is given by the following statement: take any object C inside the n-dimensional ball, consider its orthogonal projection onto R^k, and suppose the volume of this projection is bounded by M; then the volume of C is bounded by the volume of a body C', where C' is the intersection of the ball and a k-cylinder. In other words, if the volume of C is big, then the projection must be big. An illustrative example is shown here for k = 2: a body in three-dimensional space and its projection onto two dimensions. In the picture the two projections have the same area, but the volume of C is always at most the volume of C'. This is the intuition; the orthogonal projection controls the relation between the volumes. I state it below.
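Here is the statement as I would transcribe it from the slide, as a hedged LaTeX sketch; the names Ball_n, Cyl_k and pi_k and the exact normalization are mine:

```latex
% Sketch of the isoperimetric inequality behind the lower bound.
% Notation is mine; treat the normalization as illustrative.
\[
  \text{Let } C \subseteq \mathrm{Ball}_n(R), \qquad
  \pi_k(x_1,\dots,x_n) = (x_1,\dots,x_k) \ \text{(orthogonal projection)}.
\]
\[
  \mathrm{vol}_k\bigl(\pi_k(C)\bigr) \le M
  \;\Longrightarrow\;
  \mathrm{vol}_n(C) \le \mathrm{vol}_n(C'),
  \qquad
  C' = \mathrm{Ball}_n(R) \cap \mathrm{Cyl}_k(r),
\]
\[
  \mathrm{Cyl}_k(r) = \{x \in \mathbb{R}^n : x_1^2 + \cdots + x_k^2 \le r^2\},
  \qquad
  r \ \text{chosen so that} \ \mathrm{vol}_k\bigl(\pi_k(C')\bigr) = M.
\]
% In words: among bodies in the ball with a given projection volume,
% the ball-cylinder intersection maximizes the n-dimensional volume.
```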
So how do we use isoperimetry to bound the enumeration cost? The first point is a reminder: each C_k is the orthogonal projection of C_n onto the first k coordinates. The isoperimetric inequality then implies that the volume of C_n is bounded by the volume of C'_n, where C'_n is the intersection of the ball and a k-cylinder. So we get an upper bound on vol(C_n), and after some calculation we get an analytic formula for it. In this upper bound we have R'_k, the radius of the k-dimensional ball whose volume equals vol(C_k), and the relation is in fact a lower bound on R'_k. From it we obtain a lower bound on R_k by inverting this function, using the inverse of the incomplete beta function. This is a special function, but it is easily computed with a standard library: we can compute our lower bound with about 10 lines of C++ code using the Boost library, and it is very fast, about 10 milliseconds if I am not mistaken. In contrast, to find the upper bound we need about 900 lines of C++ code, and it takes about 10 seconds to 1 minute. So the computation of the lower bound is much simpler than the computation of the upper bound.

This is our first result, and I want to show some experimental results. The tightness of the radii is shown here: for this setting, the red curve is the upper bound computed by the cross-entropy method, and the blue points are the lower bound computed by our formula; you can see they are close to each other. Besides the tightness of the radii, the numbers of nodes are also tight: the red points are the upper bound computed from the GNR formula, and the blue points are the computed lower bound. The gap between the upper and lower bounds is usually less than 20% in log scale, though this is an experimental observation.

Secondly, we have shown a lower bound for randomized enumeration. The GNR extreme pruning strategy observes that if we have many random bases, then running the enumeration on each of them with a tiny success probability is much faster than a single enumeration: the expected total cost over many bases is much smaller than the cost of one full enumeration. But it was unclear whether this has a lower bound. In this paper we show that these costs are bounded from below by a constant, and this constant is independent of the number of bases.

Here is the outline of our lower bound. We prove that for a given basis B and radius R there is a constant C_{B,R} such that the cost of pruned enumeration with success probability p is at least p times this constant. We also show that this cost divided by p converges to the constant as p goes to 0, so our estimate is very tight. This inequality implies a limitation of randomization: even if we could use infinitely many bases, the total cost of the extreme pruning strategy is bounded from below by a constant determined by the best basis B_min and the radius R. The next question is which basis B_min achieves the smallest constant, but that is still unclear, so to estimate the constant we consider two scenarios, described after the two sketches below.
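To make the "about 10 lines of C++ with Boost" concrete, here is a hedged sketch of that kind of computation. The function name radius_lower_bound and the Beta(k/2, (n-k)/2 + 1) normalization (the standard distribution of the squared norm of the first k coordinates of a uniform point in the unit n-ball) are my reconstruction, not the paper's code:

```cpp
// Hedged sketch: lower-bounding a pruning radius from a volume ratio,
// via Boost's inverse regularized incomplete beta function.
#include <boost/math/special_functions/beta.hpp>
#include <cmath>
#include <cstdio>

// p_k: target volume ratio relative to the unit n-ball.
double radius_lower_bound(int n, int k, double p_k) {
    // Invert I_x(k/2, (n-k)/2 + 1) = p_k for the squared radius x.
    double x = boost::math::ibeta_inv(0.5 * k, 0.5 * (n - k) + 1.0, p_k);
    return std::sqrt(x);  // radius relative to enumeration radius R = 1
}

int main() {
    // Illustrative call: n = 100, k = 50, volume ratio 2^-10.
    std::printf("R'_50 >= %f\n",
                radius_lower_bound(100, 50, std::pow(2.0, -10)));
    return 0;
}
```

A single call is microseconds, which is consistent with the quoted total of about 10 milliseconds for a whole bound.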
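And here is the shape of the randomization-limit argument in my own notation, as a LaTeX sketch; Cost(B, R, p) is my name for the expected cost of pruned enumeration on basis B with radius R and success probability p:

```latex
% Hedged sketch of why randomization over m bases cannot beat C_{B,R}.
\[
  \mathrm{Cost}(B,R,p) \;\ge\; p \cdot C_{B,R},
  \qquad
  \lim_{p \to 0} \frac{\mathrm{Cost}(B,R,p)}{p} \;=\; C_{B,R}.
\]
% With m independent randomized bases, each run at probability p, the
% overall success probability is roughly 1 - (1-p)^m, so a constant
% overall success rate needs m on the order of 1/p, and the total cost obeys
\[
  m \cdot \mathrm{Cost}(B,R,p) \;\ge\; m\,p\,C_{B,R} \;=\; \Omega(C_{B,R}),
\]
% independently of m, where C_{B,R} is taken at the best basis B_min.
```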
The first scenario, the state-of-the-art one, is that B_min is the HKZ basis: in practice the HKZ basis is the best we know, and this scenario assumes it remains the practical best basis in the future. The second, conservative scenario comes from the theory of Rankin's constant. In this scenario the constant C_{B,R} is much smaller than that of the HKZ basis, but today no one knows how to construct such a basis.

Let me show the graphs again. Here are the lower bound costs, that is, the bounds on C_{B,R}, in the two scenarios, state-of-the-art and conservative. And I say it again: in the quantum setting, conservative designers may need to update some parameters following this graph.

Now to the conclusion. We have proved a lower bound on the cost of GNR pruned enumeration with any success probability. This is perhaps the first use of isoperimetry in cryptography, or at least in cryptanalysis. And we updated the lower bound costs for SVP-β using classical and quantum enumeration.

The final slide: we think we have a lot of things to do. The essential one is to close the gap, that is, to get tight upper and lower bounds; the next is how to adapt our theory to other algorithms such as discrete pruning. Okay, thank you for your attention.