Hello, and welcome to my talk. I will present a paper entitled "Faster Dual Lattice Attacks for Solving LWE with Applications to CRYSTALS". This is joint work with Thomas Johansson; we are both from Lund University.

Here is the outline of the talk. I will start with an introduction, then present the new algorithm and its applications. After that I will present the experimental verification of the algorithm, and finally give the conclusions of the paper.

Let's start with the introduction. We are facing threats from quantum computers: the currently used public-key cryptosystems, based on factoring and discrete logarithms, will be broken by Shor's algorithm if a sufficiently large quantum computer is built, and we have recently seen rapid advances in building quantum computers. For instance, Google has claimed quantum supremacy. It is therefore important to study post-quantum cryptography and to find new solutions that can resist attacks from large quantum computers. The core effort is the NIST post-quantum cryptography project, which aims to find replacements for the public-key encryption and signature standards. This process is now in its third round, and seven finalists have been selected. Among them, the majority are lattice-based: three out of the four KEM/PKE finalists are lattice-based, namely Kyber, NTRU, and Saber, and two out of the three signature finalists are lattice-based, namely Falcon and Dilithium. Here Kyber and Dilithium come from a package called CRYSTALS.

So why are lattice-based cryptosystems so attractive? One main reason is that we can have a security reduction to hard lattice problems, which means we can achieve provable security. One main branch of lattice-based cryptography consists of cryptosystems based on Learning with Errors (LWE) and its variants, for instance LWR and the ring and module versions. For crypto based on LWE we have an average-case to worst-case reduction, we can build very efficient cryptographic primitives, and the applications are versatile: for instance, advanced constructions such as fully homomorphic encryption can be based on LWE. The concrete security of these cryptosystems can then be related to solving an LWE instance, so it is important to study the concrete complexity of solving LWE.

So what is the Learning with Errors problem? The problem is usually defined via an LWE oracle with parameters n, q and a noise distribution χ. We fix a secret vector s; the oracle uniformly picks a vector a, picks a noise value e from the noise distribution, and outputs the pair (a, b), where b is the inner product of a and s plus the noise e, reduced modulo q. For the PQC candidates, the noise distribution χ is usually modeled or approximated as a discrete Gaussian with mean zero and standard deviation σ_e. The secret s is sampled from the same distribution, or from another distribution that can be even smaller or sparser, so we can formulate the secret distribution as a discrete Gaussian with mean 0 and standard deviation σ_s, where σ_e = c · σ_s. For such candidates the number of samples is usually limited; it can be only slightly larger than n.
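Before turning to the solving approaches, here is a minimal Python sketch of the LWE oracle just described. All parameters are toy values chosen only for illustration, and a rounded continuous Gaussian stands in for a proper discrete Gaussian sampler.

```python
import numpy as np

# Toy parameters, chosen only for illustration (not taken from any real scheme).
n, q = 16, 3329
sigma_e = 3.0        # noise standard deviation
sigma_s = sigma_e    # here the secret uses the same width; schemes may use a narrower one

rng = np.random.default_rng(0)

def rounded_gaussian(std, size):
    # Rounded continuous Gaussian, a simple stand-in for a discrete Gaussian sampler.
    return np.rint(rng.normal(0.0, std, size)).astype(int)

s = rounded_gaussian(sigma_s, n) % q   # the fixed secret vector

def lwe_oracle():
    # One oracle call: a is uniform in Z_q^n, e is small noise, b = <a, s> + e mod q.
    a = rng.integers(0, q, n)
    e = int(rounded_gaussian(sigma_e, 1)[0])
    b = (int(a @ s) + e) % q
    return a, b

samples = [lwe_oracle() for _ in range(8)]
```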
The known approaches for solving LWE can be divided into three main categories. The first is the algebraic approaches, including Arora-Ge and its extension using Gröbner basis algorithms; this type of algorithm is mainly of asymptotic interest. The second branch is the combinatorial approaches, like BKW-style algorithms, which generally require a large number of samples. The third, and most relevant here, are the lattice-based attacks, including primal and dual attacks. Since dual attacks are the main target of this paper, let me introduce them further. The aim of a dual attack is to find a short vector (w, v) in this dual lattice. Then, given a sequence of LWE instances, we compute the inner product of w and b and obtain this term. We see that it is small; here "small" means that its standard deviation is small, because s and e are sampled from narrow distributions and (w, v) is short. This distribution can therefore be distinguished from the uniform distribution.

When setting security parameters for lattice-based primitives, we usually study the BKZ reduction algorithm, which blockwise calls oracles solving the Shortest Vector Problem (SVP). Asymptotically, the best SVP oracles are implemented by sieving, and we use this notation, where β is the SVP dimension and d is the lattice dimension. An important model for estimating the cost of the SVP oracles is core-SVP, proposed in the famous NewHope paper. The main idea is to keep the main exponential term and discard the sub-exponential factors. In this model, sieving can produce this many short vectors, the classical sieving complexity is 2^(0.292β), and the quantum sieving complexity is 2^(0.265β). The core-SVP model is very useful because it lets us compare the security strengths of different lattice-based proposals, but it is still only an approximation, because the discarded sub-exponential factors can be significant. This raises a new research problem: given a cost number expressed in the core-SVP model, how do we determine whether it meets the security requirements from NIST, which are expressed in the gate-count metric?

At Eurocrypt 2018, Ducas showed a significant gain called "dimensions for free" (D4F), meaning that an SVP instance in dimension β can be solved by sieving in a smaller dimension. Later, at Asiacrypt 2020, Albrecht et al. first studied the classical and quantum complexity of sieving in the RAM model, where RAM means random access machine. This research allows us to study the concrete complexity of lattice reduction algorithms without dropping the sub-exponential terms. Based on it, the designers of round-3 Kyber and Dilithium studied the beyond-core-SVP hardness in their official documents, that is, the classical gate-count metric in the RAM model. They also take progressive sieving into consideration, but they only consider primal lattice attacks. They dismissed dual lattice attacks because, first, most of the sieved vectors are longer by a factor of sqrt(4/3), and second, the trick of exploiting all these vectors is not compatible with the dimensions-for-free trick; this sentence is cited from the round-3 official document of Dilithium. Here we call the trick of exploiting all the sieved vectors the MSV (many short vectors) gain.

Our main research question is: should we dismiss dual lattice attacks when selecting parameters in lattice-based cryptography? Here we focus on the RAM model, and our answer is no. We can actually exploit both gains, the D4F gain and the MSV gain, and still outperform primal attacks, even though the short vectors are longer by this factor compared with the shortest vector. We also show better classical and quantum attack results in the core-SVP model; please see the paper for details. Other memory models are beyond the scope of the paper.
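To make the core-SVP convention concrete, here is a minimal sketch. The exponents 0.292 and 0.265 are the standard classical and quantum sieving constants of the core-SVP model, and (4/3)^(β/2) is the usual heuristic for the number of short vectors one sieve call outputs; the function names and the value of β are mine, not the paper's.

```python
import math

def core_svp_log2_cost(beta, model="classical"):
    # Core-SVP convention: keep only the dominant exponential term of sieving.
    exponent = {"classical": 0.292, "quantum": 0.265}[model]
    return exponent * beta          # log2 of the estimated sieve cost

def sieve_log2_output_size(beta):
    # Usual heuristic: one sieve call in dimension beta yields about (4/3)^(beta/2) vectors.
    return (beta / 2) * math.log2(4.0 / 3.0)

beta = 400
print(core_svp_log2_cost(beta), core_svp_log2_cost(beta, "quantum"), sieve_log2_output_size(beta))
# -> 116.8, 106.0, 83.0 (all in log2)
```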
Now I will present the new algorithm, starting with the new FFT distinguisher. This distinguisher is similar in spirit to the famous Bleichenbacher attack on (EC)DSA: when the alphabet size is too large, one works with a reduced alphabet. Similarly, we reduce the alphabet size for the FFT from q to γ, where γ is an invertible element in Z_q, so the FFT dimension can be larger. Also, the standard deviation of the remaining noise from the FFT can be reduced by a factor of γ.

Now I will give a more mathematical description. We rewrite the LWE sample as (â_j, b_j), where â_j = γ · a_j mod q; this is equivalent to writing ŝ = γ^(-1) · s. For example, if we take γ = 2, then γ^(-1) = (q+1)/2. We then write this equation in integer form and compute this function f for all possible values of the secret modulo 2. We operate on t positions, so there are 2^t possibilities. For the right guess, the computed value has this form, so if we can use reduction algorithms to make the coefficients â_ij small, then this variable is small, because the standard deviations of the random variables s_i and e_j are small. Otherwise, since q is very large, the distribution is close to uniform for a wrong guess. The computation over all guesses can be accelerated by an FFT.

Now I will introduce the framework of the new dual lattice attack. In the first step, we map the entries of the matrix A: we split A into three submatrices A_0, Â_1 and A_2, where A_2 corresponds to the last t_1 columns and Â_1 corresponds to the next t columns, with Â_1 = γ · A_1 mod q. In the second step, we find sufficiently many short vectors, by lattice reduction, in the dual lattice defined here. The lattice has dimension d = m + n − t_1, and its volume is this value with high probability. In the next step, we exhaustively guess the last t_1 positions, and we use the new FFT procedure to guess the next t unknown positions. For the exhaustive guess we reduce the volume by q^(t_1), and for the FFT guess we reduce the volume by γ^t.
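To illustrate the FFT-accelerated guessing step in isolation, here is a toy Python sketch. It is not the paper's exact distinguisher: it works entirely modulo a small alphabet p rather than with the γ-scaled coefficients of the actual attack, and all parameters are illustrative. What it shows is the general pattern: bucket the phases exp(2πi·b_j/p) by the coefficient vector, and one multi-dimensional FFT then evaluates the score of every possible guess at once.

```python
import numpy as np

# Toy demonstration of FFT-accelerated guessing over a small alphabet.
# All parameters are illustrative; the real attack keeps modulus q and only
# guesses the secret modulo gamma, which this toy does not model.
p, t, m = 4, 6, 5000            # alphabet size, guessed positions, samples
rng = np.random.default_rng(1)

s = rng.integers(0, p, t)                         # secret restricted to Z_p^t
A = rng.integers(0, p, (m, t))                    # coefficient vectors
e = np.rint(rng.normal(0.0, 0.3, m)).astype(int)  # small noise
b = (A @ s + e) % p

# Bucket the phases exp(2*pi*i*b_j/p) according to the coefficient vector a_j.
table = np.zeros((p,) * t, dtype=complex)
for a_row, phase in zip(A, np.exp(2j * np.pi * b / p)):
    table[tuple(a_row)] += phase

# One t-dimensional FFT evaluates, for every guess x in Z_p^t at once, the score
# sum_j exp(2*pi*i*(b_j - <a_j, x>)/p); the right guess gives the largest real part.
scores = np.fft.fftn(table)
guess = np.unravel_index(np.argmax(scores.real), scores.shape)
print("secret   :", list(map(int, s)))
print("recovered:", list(map(int, guess)))
```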
One main reason the designers dismissed dual attacks is that they consider it hard to exploit both the D4F gain and the MSV gain. Here we present a new two-step lattice reduction framework that achieves both gains. In the first step, we do a BKZ reduction with block size β and obtain a reduced basis whose first vector is a short vector b_0. Then we look at the sublattice L' generated by the first β_0 vectors of the reduced basis, perform a sieving step in this lattice, and get a list of short vectors of this size. Here λ_1(L') is the length of the shortest vector in L', but since L' already contains the short vector b_0, this shortest vector is no longer than b_0. This value can be estimated concretely via this cited work, and the time complexity of one reduction can be estimated similarly to the methods in the official CRYSTALS documents; in this estimation, the sieving costs are estimated concretely using the Asiacrypt 2020 work mentioned earlier. We know the value can be bounded by the norm of b_0; on the other hand, we can also use the Gaussian heuristic to compute it, and the two estimations lead to very similar complexity numbers. Usually BKZ calls an SVP oracle many times, so we can sieve in the second step in a larger dimension to balance the cost.

Since β_0 is larger than β', we also have a few dimensions for free in the second step, so intuitively we have free dimensions in both steps. The D4F gain can be estimated by two models. The first model was proposed by Ducas using asymptotics, so we call it the asymptotic model. Later, Albrecht et al. showed that the G6K framework can achieve more dimensions for free via a technique called on-the-fly lifting, which gives an extrapolation model fitted to experimental data. We will use both models to study the concrete complexity.

Now we can present the main complexity theorem. The time complexity of the new algorithm can be estimated as C / P_0, where P_0 is the probability that the partial secret is one of the guessed vectors. C consists of two parts: one part from lattice reduction and one part from guessing and the FFT. These two parts are additive, and the guessing and FFT steps reduce the volume by a factor of q^(t_1) · γ^t; this is why the algorithm can outperform the previous dual attacks. The sample complexity is estimated by this formula, where γ^t · N_guess is the number of hypotheses; the formula comes from information theory for hypothesis testing. We set C_0 to 4, which will be experimentally verified later.

Next, I will introduce applications of the new dual attack, starting with applications to CRYSTALS-Kyber. This table shows the gate-complexity comparison; the cost is given in log_2 of the number of operations, and here γ = 2. We show the claimed security levels together with the complexity numbers for the new dual lattice attack in the asymptotic D4F model and in the G6K D4F model. The gains are generally large: for instance, for Kyber1024 in the G6K model we achieve a gain of almost 15 bits. According to this analysis, some schemes are really on the edge and some offer a rather limited security margin; for Kyber768 in the G6K model, the scheme has two bits of security loss. Similarly, we see significant improvements when applying the new algorithm to the security parameters of CRYSTALS-Dilithium: Dilithium3 and Dilithium5 offer limited security margins.

We also apply the new algorithm to some FHE parameters. This table shows a complexity comparison for the security parameters in the homomorphic encryption standardization draft aiming at classical security. Here n is 1024 and we choose γ = 3; the secret distribution is uniform over the set {-1, 0, 1}, and the standard deviation of the noise variable is 3.2. We see that we could solve some parameter sets faster than their claimed security level; here the G6K D4F model is assumed.

We next present the experimental verification. The D4F gain and the MSV gain have already been extensively verified, so we mainly verify the data complexity of the new FFT distinguisher. We first generate samples in Z_q of this form: each â_ij is generated from a discrete Gaussian χ_σ1, e_j is generated from another discrete Gaussian χ_σ2, q is set to 3329, and s_i is generated uniformly in Z_2. We then run the new distinguisher to recover s with dimension t. This table shows our experimental data: we have two sets of experiments with different σ_1 and σ_2, we pick t to be 8, 12 and 16, and we let C_0 range from 4 down to 1.
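The theoretical sample estimate being verified has, roughly, the following hypothesis-testing shape. This is a hedged sketch rather than the paper's exact formula: it assumes the standard exp(-2π²σ²/q²) per-sample advantage of the dual distinguisher together with the constant C_0 from the talk, and every concrete number in the example is illustrative only.

```python
import math

def required_samples(sigma_noise, q, n_hypotheses, c0=4.0):
    # Hedged rule of thumb: eps is the per-sample distinguishing advantage
    # (standard exp(-2*pi^2*sigma^2/q^2) heuristic for noise of width sigma mod q),
    # and roughly c0 * ln(N) / eps^2 samples separate N hypotheses.
    eps = math.exp(-2 * math.pi**2 * (sigma_noise / q) ** 2)
    return c0 * math.log(n_hypotheses) / eps**2

# Illustrative only: gamma = 2, t = 16 FFT positions, no extra exhaustive guessing,
# and an assumed residual-noise standard deviation of q/8 after lattice reduction.
q, gamma, t, n_guess = 3329, 2, 16, 1
print(required_samples(sigma_noise=q / 8, q=q, n_hypotheses=gamma**t * n_guess))
```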
We compute the sample complexity via the theoretical estimation, run 1000 trials for each parameter set, and then compute the success rate. We see that the theoretical estimation is accurate and that setting C_0 = 4 ensures a high success probability: in our experiments the success rate is always 100% when C_0 is 4, meaning that we succeeded in 1000 trials out of 1000. For a fixed C_0, the success probability generally increases as t becomes larger.

We now conclude the work. We have proposed a faster dual lattice attack with two main novel contributions. First, we proposed a new Bleichenbacher-style FFT distinguisher that reduces the volume of the dual lattice used. Second, we proposed a new two-step lattice reduction strategy that allows us to exploit both the D4F gain and the gain of one sieve call producing many short vectors (MSV). We applied the new attack to CRYSTALS and obtained significant gains in the RAM model: these parameter sets either offer a very low security margin or are really on the edge, and assuming the G6K D4F model, Kyber768 has two bits of security loss. With the new attack we can also solve certain FHE parameter sets faster than their claimed security levels in the RAM model.

This new attack actually has very wide applications in lattice-based crypto. In the extended version of the paper, we applied the new attack to ANCHO and obtained sharper results for those parameter sets. This parameter set claims 209 bits of security in the RAM model, which means it only offers two bits of security margin with respect to primal attacks. Our new dual attack reduces this by a further 8 to 10 bits, depending on the selected D4F model, so the complexity is estimated to be about 200 bits, meaning that this parameter set falls below the NIST level 3 security requirement in both D4F models.

Thank you for your attention.