Hello, everyone. My name is Jiang Zhang. The title of this talk is "Tweaking the Asymmetry of Lattice-Based Public Key Encryption and Signatures." This is joint work with Yu Yu, Shuqin Fan, Zhenfeng Zhang, and Kang Yang. Okay. This talk has four parts. In the first part, we will give a short introduction. We will show how to tweak the asymmetry in the design of lattice-based public key encryption and signatures in the second and third parts. Finally, we will end this talk with a short conclusion. Let's move to the first part. Since the introduction of public key cryptography in 1976, it has been widely deployed in real applications. But most deployed schemes are based on the hardness of solving the integer factorization and discrete logarithm problems. So far, this is fine against algorithms on classical computers. However, because of Shor's quantum algorithm, the basis of those classical schemes is shaken. In fact, if a large quantum computer becomes available, schemes based on traditional number-theoretic problems will become insecure. Actually, as commented in a report on the quantum threat timeline, the quantum threat could very well become concrete in the next 10 or more years. Facing this threat, the world has started many projects on constructing post-quantum cryptosystems that can resist quantum attacks. One of the main research directions in this area is lattice-based cryptography. For example, lattice-based schemes account for about 41 percent and 46 percent of all submissions in the first and second rounds of the NIST PQC competition, respectively. The last decade has witnessed great development of lattice-based schemes, but they are still less efficient than their number-theoretic counterparts in terms of public key, ciphertext, or signature size. In particular, for adequate security, lattice-based schemes typically need thousands of bytes, while the latter only require at most a few hundred bytes.
In fact, reducing these sizes has become one of the main problems in constructing practical lattice-based schemes. After observing the asymmetry in the design of lattice-based schemes, we propose the asymmetric MLWE (module LWE) problem and exploit its asymmetry to obtain public key encryption with shorter public keys and ciphertexts. Compared to the NIST Round 2 submissions at category 3 security, we achieve the smallest public key and ciphertext sizes; both are less than 1,000 bytes. We also propose the asymmetric MSIS (module SIS) problem for obtaining signature schemes with shorter public keys and signatures. Compared to the NIST Round 2 submissions, our scheme has smaller public key and signature sizes than Dilithium. Third, we adapt the best-known attacks and their variants to the asymmetric problems and conduct a thorough analysis for choosing concrete parameters. For example, we consider two variants of the primal attack and three variants of the dual attack that make use of the asymmetry of the underlying problems to reduce the complexity. Let's move to the second part. The learning with errors (LWE) problem was first introduced by Regev in 2005; it basically asks an algorithm to solve a noisy modular equation, that is, b = A·s + e, where A is a uniform matrix, s is a uniform vector, and e is chosen from a noise distribution χ with parameter η. The decisional variant asks an algorithm to distinguish the LWE tuple from uniform, and it is as hard as the computational LWE problem. A major variant of the LWE problem is the Hermite normal form LWE, where the secret s is also chosen from the noise distribution. The LWE problem is proven to be as hard as some lattice problems, such as the SIVP and GapSVP problems, in the worst case. Many lattice-based public key encryptions are based on the Lindner-Peikert scheme from CT-RSA 2011, whose public key and ciphertext basically consist of several LWE tuples.
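As a concrete illustration of the noisy modular equation b = A·s + e just described, here is a minimal Python sketch. The parameter sizes are toy values chosen only for illustration, not the scheme's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m, eta = 3329, 8, 12, 2   # toy sizes; real schemes use n in the hundreds

def small(shape):
    # Noise distribution chi with parameter eta: entries in [-eta, eta]
    return rng.integers(-eta, eta + 1, size=shape)

A = rng.integers(0, q, size=(m, n))  # uniform matrix
s = small(n)                         # normal-form LWE: secret is also small
e = small(m)                         # noise vector
b = (A @ s + e) % q                  # the noisy modular equation
```

The decisional variant then asks to tell the pair (A, b) apart from a uniformly random pair.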
Concretely, the public key is generated by randomly choosing a matrix A and vectors s0, e0 from the noise distribution, setting (A, b = A·s0 + e0) as the public key and s0 as the secret key. Given a one-bit message μ, the sender randomly chooses s1, e1, and e2 from the noise distribution, sets c0 as the transpose of A times s1 plus e1, and c1 as the inner product of b and s1, plus e2, plus an encoding of μ, that is, μ times q over 2. The receiver computes μ′ = c1 minus the inner product of s0 and c0. One can easily prove the security of this scheme by applying the LWE assumption, while the correctness of this scheme requires that the decryption noise e′, which is determined by s0, s1, e0, e1, and e2, is smaller than q over 4. Clearly, the decryption noise is mainly dominated by the inner products of e0 with s1 and of s0 with e1. When it comes to choosing concrete parameters, we have two conflicting constraints on the parameter η. On the one hand, the correctness is determined by the function g(η, n), which measures the size of the decryption noise and is expected to be as small as possible; this needs a smaller η. On the other hand, the security is related to the function f(η, n), which measures the size of the secret vectors s and e and is expected to be as large as possible; this needs a larger η. Ideally, for a targeted security level κ, we hope to pick an η simultaneously achieving 2^(−κ) decryption error and 2^κ security. But typically, such an optimal η does not exist. If we set the parameters for achieving a given security, there usually exists much room for the decryption noise. In other words, the size of the decryption noise might be far smaller than q over 4. Then we can use a compression function, which basically cuts off the lower bits of the public key and ciphertext, to obtain shorter public keys and ciphertexts.
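The one-bit Lindner-Peikert style scheme described above can be sketched in Python as follows. This is a toy instantiation (parameters and helper names are mine, and there is no compression yet); the encoding μ·(q/2) and the decoding threshold q/4 are as in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, eta = 3329, 256, 2

def noise(shape):
    return rng.integers(-eta, eta + 1, size=shape)

# Key generation: public key (A, b = A s0 + e0), secret key s0
A = rng.integers(0, q, size=(n, n))
s0, e0 = noise(n), noise(n)
b = (A @ s0 + e0) % q

def encrypt(mu):                       # mu in {0, 1}
    s1, e1, e2 = noise(n), noise(n), int(noise(1)[0])
    c0 = (A.T @ s1 + e1) % q
    c1 = (b @ s1 + e2 + mu * (q // 2)) % q
    return c0, c1

def decrypt(c0, c1):
    u = (c1 - s0 @ c0) % q             # = mu*(q//2) + decryption noise e'
    # Decode: closer to q/2 means 1, closer to 0 means 0
    return 1 if q // 4 <= u < 3 * q // 4 else 0

c0, c1 = encrypt(1)
mu_dec = decrypt(c0, c1)
```

Decryption succeeds because the noise e′ = ⟨e0, s1⟩ − ⟨s0, e1⟩ + e2 stays far below q/4 for these parameters.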
This operation can reduce the sizes of the public key and ciphertext, but it will result in a much larger decryption noise. This is okay as long as the final decryption noise is still less than q over 4. Unlike the case without compression, after compressing the public key and ciphertext, we find that the roles of s and e in the decryption noise are asymmetric. In particular, if the extra noise introduced by compressing the public key and ciphertext is much larger than the e part, then the size of e is less significant than the size of s in the decryption noise e′. In other words, reducing the size of s will significantly reduce the decryption noise, but reducing or increasing the size of e will not change the decryption noise too much. This feature inspires us to reduce the size of s as much as possible for a smaller decryption noise and to increase the size of e for maintaining the security. Then we can further make use of this asymmetry to obtain new schemes using different parameters η1 and η2 for the secrets s and e, as well as different moduli p0, p1, and p2 for the compression of the public key and ciphertext. By doing this, we can flexibly choose the parameters to achieve shorter public keys and ciphertexts while at the same time not significantly affecting the security. Specifically, we can set smaller η1, p0, p1, and p2 for shorter public keys and ciphertexts, but a larger η2 for maintaining the security. Now the concrete security is based on an asymmetric variant of the standard LWE problem, where the secret and the noise are chosen from distributions with different parameters η1 and η2. Obviously, when η1 is equal to η2, this is exactly the standard LWE problem. We conducted a thorough analysis using the best-known attacks, and the experiments show that the security remains essentially unchanged if we keep the product of η1 and η2 unchanged. For example, for the standard LWE with η1 = η2 = 2, the product of η1 and η2 is 4.
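To make the compression step concrete, here is a sketch of a Kyber-style rounding pair that keeps only the top d bits of each coefficient; the helper names and the choice d = 10 are illustrative assumptions of mine:

```python
import numpy as np

q = 3329

def compress(x, d):
    # Keep roughly the top d bits: round(x * 2^d / q) mod 2^d
    return np.rint(np.asarray(x) * (1 << d) / q).astype(np.int64) % (1 << d)

def decompress(y, d):
    # Map back to [0, q); the difference from the original x is the extra
    # "compression noise" that gets added into the decryption noise e'
    return np.rint(np.asarray(y) * q / (1 << d)).astype(np.int64) % q

x = np.arange(q)
x2 = decompress(compress(x, 10), 10)
err = np.minimum((x2 - x) % q, (x - x2) % q)  # distance mod q
```

Cutting more bits (smaller d) shortens the public key and ciphertext but enlarges this error, which is exactly the trade-off the asymmetric parameters exploit.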
For almost the same security, we can also set η1 to be 1 and η2 to be 4 such that the product of η1 and η2 is still 4. This basically gives us many choices of η1 and η2 for making better trade-offs between efficiency and security. By instantiating the above scheme with an asymmetric variant of module LWE, we obtain shorter public keys and ciphertexts. For example, for the targeted security level, both the public key and ciphertext are less than 1,000 bytes and are smaller than those of the NIST Round 2 submission Kyber. Now let's move to the third part. The small integer solution (SIS) problem was first considered by Ajtai in 1996. This problem asks an algorithm, given a matrix A, a vector t, and a real γ, to output a vector x such that A·x = t and the norm of x is at most γ. When t = 0, this is the standard SIS problem; otherwise, it is the inhomogeneous SIS (ISIS) problem, with t chosen uniformly at random. Both the SIS and ISIS problems are provably as hard as some lattice problems, such as the SIVP problem, in the worst case. Now we consider the identification scheme from SIS by Lyubashevsky at Asiacrypt 2009. Let S_η be the set of vectors with entries of absolute value at most η. The secret key x is chosen from S_η, and the public key is t = A·x. To convince the verifier that it has the secret key x, the prover first randomly chooses a vector y from S_γ and sends w = A·y to the verifier. The verifier randomly chooses a challenge c; for simplicity, here we only consider c to be a bit. One can easily extend the protocol to bit strings of any fixed length. Finally, the prover returns z = y + c·x. To protect the secret key x, we cannot directly output z; instead, we only output z if the infinity norm of z is not bigger than γ − β, where β is the maximum size of c·x. In this case, for any choice of x and c, there is a vector y such that z = y + c·x.
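The identification protocol with rejection just described can be sketched as follows, with a one-bit challenge as in the talk. The parameters are toy values of my own choosing, far too small for security:

```python
import numpy as np

rng = np.random.default_rng(2)
q, n, m, eta, gamma = 3329, 16, 32, 1, 128
beta = eta  # with a one-bit challenge c, each entry of c*x is at most eta

A = rng.integers(0, q, size=(n, m))
x = rng.integers(-eta, eta + 1, size=m)   # secret key from S_eta
t = (A @ x) % q                           # public key

def prove_round():
    y = rng.integers(-gamma, gamma + 1, size=m)  # masking vector from S_gamma
    w = (A @ y) % q                              # commitment
    c = int(rng.integers(0, 2))                  # verifier's one-bit challenge
    z = y + c * x
    if np.abs(z).max() > gamma - beta:           # rejection: z must not leak x
        return None                              # abort and rerun the protocol
    return w, c, z

resp = None
while resp is None:
    resp = prove_round()
w, c, z = resp
ok = ((A @ z) % q == (w + c * t) % q).all()      # verifier's check: A z = w + c t
```

The rejection bound γ − β is what makes the distribution of z independent of x, since any accepted z could have come from a valid y for every possible secret.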
This means that x is perfectly hidden in z. By applying the Fiat-Shamir transform, we obtain a signature from SIS by hashing the vector w to obtain the challenge c. In order to choose concrete parameters, we have conflicting constraints on the parameters η, β, and γ. For example, for security we expect γ to be small, but for computational efficiency we need γ to be big such that the expected number of repetitions is small. In practice, we cannot obtain optimal choices of parameters for all goals. In particular, for the security of the SIS problem, we need a very large dimension m, which gives a very large signature size. To reduce the signature size, we can switch the underlying hard problem from SIS to LWE by setting the public key as A·s + e, where s and e are chosen from S_η. In fact, the LWE problem can be seen as a special SIS problem where the matrix is a uniform matrix concatenated with an identity matrix. In this view, this change is intuitive and direct. After changing to the LWE problem, a useful feature is that the vector w = A·y1 + y2 is mainly dominated by A·y1, as y2 is smaller. In particular, the higher bits of w are mostly equal to the higher bits of A·y1. This also holds for the vector recomputed in verification. At CT-RSA 2014, Bai and Galbraith found that one can utilize this feature to further reduce the size of signatures by removing the part related to y2. For this goal, we need to add one more rejection on the lower bits of w − c·e, such that it is not too big and will not affect the higher bits of w. Now the expected number of repetitions is dominated by two terms. Specifically, we can reduce it by either increasing γ1 or γ2. Notice that the signature size is also dominated by γ1, but it is irrelevant to γ2.
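The "higher bits are unaffected" idea behind the Bai-Galbraith compression can be illustrated with a small sketch. The split of each coefficient into high and low bits and the bound 8 on the perturbation are illustrative choices of mine, not the scheme's actual decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)
q, alpha = 3329, 256   # alpha: size of the low-bit range

def high_bits(v):
    return (np.asarray(v) % q) // alpha

def low_bits(v):
    return (np.asarray(v) % q) % alpha

# If the low bits of v are not too close to a boundary, adding a small
# perturbation delta does not change the high bits -- the extra rejection
# step in the Bai-Galbraith scheme enforces exactly such a condition.
v = rng.integers(0, q, size=1000)
delta = rng.integers(-8, 9, size=1000)
safe = (low_bits(v) >= 8) & (low_bits(v) < alpha - 8)
```

On every "safe" coefficient the high bits of v and of v + delta agree, so the verifier can recompute the challenge from high bits alone and the y2 part can be dropped from the signature.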
This asymmetric feature allows us to use a smaller γ1 for a smaller signature size, but a larger γ2 for maintaining the expected number of repetitions. To fully utilize this feature, we also switch the standard LWE problem to its asymmetric version, such that the secrets s and e are chosen from distributions with different parameters η1 and η2. This allows us to use different β1 and β2 for rejection, where β1 is the maximum size of c·s and β2 is the maximum size of c·e. For security, in addition to the asymmetric LWE, we also need the asymmetric variant of SIS, which basically splits the solution into two parts and puts different constraints γ1 and γ2 on the two parts. By doing this, the expected number of repetitions basically has two independent parts. This finally allows us to choose γ1 for a short signature and appropriately choose the other parameters to maintain the other features. By instantiating the above scheme with the asymmetric variants of module LWE and module SIS, we obtain a scheme with shorter public keys and signatures. In particular, for almost the same computational efficiency, our scheme has a shorter public key and signature than the NIST Round 2 submission Dilithium. This finishes our third part. Here is a short conclusion. We obtain public key encryption with shorter public keys and ciphertexts by tweaking the asymmetry in the design of public key encryption using asymmetric MLWE. We also obtain signature schemes with shorter public keys and signatures by tweaking the asymmetry in the design of signatures using asymmetric MLWE and MSIS. We conduct a thorough analysis for choosing concrete parameters by adapting the best-known attacks to both problems. This is the end of this talk. Thanks for your attention.