Hello, everyone. I'm going to talk about our work, a non-heuristic approach to time-space trade-offs and optimizations for BKW.

First, let us recall the LPN problem, which is about solving linear congruences in the presence of noise. In the search version of the problem, the challenge is to find the secret x given the noisy codeword, where x and A are uniformly random, A is public, and the noise follows a coordinate-wise independent Bernoulli distribution with noise rate mu. In cryptography, it is more convenient to use the so-called decisional LPN to facilitate security proofs, and the two variants are polynomially equivalent, so for security analysis we need only consider the search version.

The LPN problem can be categorized by its noise. When the noise rate mu is a constant, the BKW algorithm by Blum, Kalai and Wasserman solves it with exponential time and sample complexities. Based on BKW, Lyubashevsky gave a meaningful trade-off, which compresses the sample complexity to polynomial at the cost of a slightly higher time complexity. There are other variants of LPN, where the noise rate decreases with respect to the dimension, but they are not the focus of our paper.

Let's recall the original BKW algorithm. It works in iterations. In each iteration, it classifies the samples by the values of the first b bits of the coefficient vector, and within each group it cancels those first b bits by subtracting the first vector of the group. Each iteration therefore reduces the dimension by b bits, reduces the number of samples by 2^b, and roughly doubles the noise rate.

We summarize the complexities of the BKW algorithm. Minimizing time and sample complexity by choosing the block size b properly, we end up with exponential complexities, roughly 2^(n / log n). It remained open whether we can trade space off against time. This is especially meaningful when doing, say, security evaluations of LPN-based cryptosystems.
If the time complexity and the space consumption are both 2^80, it is not a practical estimate at all, because in practice we do not have a memory of size 2^80. In addition, we did not know whether we can improve the factor in the exponent, highlighted here, which corresponds to the number of iterations. Further, we would like a more efficient approach to reducing the sample complexity.

In this work, we give a tree-based structure for the BKW algorithm. The original LPN samples are divided into several subsets. These subsets are mutually independent, while within each subset the samples are only required to be pairwise independent. This enables sample optimization, because we can use a small number of independent samples to generate a much larger number of pairwise independent samples, and in the meantime pairwise independence suffices to give a rigorous estimate of the complexities. During each iteration, we pick, say, c samples, one from each child node, to cancel out the first b coordinates, and we do this all the way down to the root node, where we get many candidates for a single coordinate of the secret; therefore, no repetition is needed.

Based on our tree-based algorithm, we get our first result, which is a trade-off between time and space. Here c is a tunable parameter; by choosing different values for c, we get different trade-offs between time and space. Compared with previous work, we optimize the time and sample complexities by a sub-exponential factor, which accounts for the number of iterations in the original BKW algorithm. Further, compared with Lyubashevsky's algorithm, the time complexity is also optimized by a sub-exponential factor.

That is a very brief introduction of our work. Thank you for your time.
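As an illustration of the sample-amplification idea mentioned above, here is a hedged Python sketch: XOR-ing every distinct pair of m mutually independent LPN samples yields about m^2/2 samples that are uniform and pairwise independent (though not mutually independent). The helper name amplify is hypothetical, and the exact construction in the paper may differ.

```python
from itertools import combinations

def amplify(samples):
    """Pairwise-independent sample amplification (sketch).

    Given m independent samples (a_i, c_i), the m*(m-1)/2 pairwise XORs
    (a_i ^ a_j, c_i ^ c_j) are each uniform and pairwise independent,
    which suffices for the rigorous complexity estimates; by the
    piling-up lemma the noise rate grows from mu to 2*mu*(1 - mu).
    """
    return [([x ^ y for x, y in zip(a1, a2)], c1 ^ c2)
            for (a1, c1), (a2, c2) in combinations(samples, 2)]
```

Since c_i ^ c_j = <a_i ^ a_j, x> mod 2 for noise-free samples, the amplified samples remain valid LPN samples for the same secret, only with larger noise.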