Hello everyone. Welcome to my talk. Today I'm going to talk about our new framework and instantiations for password-based key exchange. We'll go through the introduction, followed by the framework, and finally the instantiations, where we show the efficiency gains.

Let's start with the introduction. Key exchange is a very well-known topic: it studies how two or more parties can establish a common secret. We study password-based key exchange, where the two parties share only a short password. The challenge for this problem is that an attacker could obtain a short, fixed function value of the password pi and then exhaust the password space to identify the correct pi that is consistent with that function value. This is called an offline dictionary attack. Password-based key exchange was first studied by Bellovin and Merritt in 1992. The first provably secure construction with a rigorous model is due to Bellare, Pointcheval and Rogaway in 2000, but their solution was in the random oracle model. After that, many authors studied this problem. We study the same problem, but in the lattice setting. In the literature, several frameworks have been proposed that are realizable in the lattice setting. There is also a ring-LWE-based PAKE proposed by Ding et al. in 2017, but again their solution was in the random oracle model. So we study PAKE in the standard model only. We note that the previous solutions seem to rely heavily on CCA-secure encryption, so we propose a framework based on primitives without CCA-secure encryption or similar variants, and we prove security in the standard model. We also instantiate our framework from LWE and ring LWE respectively, and we show that the instantiations are much more efficient than previous lattice-based PAKEs.

Now let's talk about our framework. The framework has three ingredients. The first one is one-message key reconciliation.
In this setting, the two parties Alice and Bob are initialized with similar secrets, Alice with d and Bob with d', and the two are within a short distance of each other. They want to derive a common secret. To do this, Alice uses a function to compute a message sigma and a secret; she outputs the secret as the common key and sends the message sigma to Bob. When Bob receives sigma, he uses d' and sigma to recover the common secret. This is a one-message key reconciliation scheme.

We will construct a very efficient scheme, but before that, let me give one observation. Suppose d and d' are numbers modulo 401, and suppose d and d' are so close that their difference lies in the interval (-8, 8). Consider an 8-bit integer f = a7 a6 a5 a4 a3 a2 a1 a0, where the bits a4 a3 are fixed to the constant 01. The observation is that f + d' - d mod 401 equals the same expression without the modular reduction, and the crucial part is that the highest three bits a7 a6 a5 of this expression remain the same as those of f. Why is this? The reason is that the low five bits 0 1 a2 a1 a0 form a number in the interval [8, 16), and d' - d lies in (-8, 8), so their sum lies in [0, 24). This fits in five bits, so the addition cannot produce a carry into bit a5 or change anything above it.

This motivates our reconciliation scheme. Alice has a secret d. She samples the number f randomly, except that the bits a4 a3 are fixed to the constant 01, and sends sigma = f + d mod 401 to Bob. Bob has a secret d' that is close to d. When he receives sigma, he computes sigma - d' mod 401, which, as we observed on the previous slide, equals f + d - d' over the integers, so its highest three bits a7 a6 a5 are unchanged. Thus Alice and Bob share the three bits a7 a6 a5.
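The scheme just described can be sketched in a few lines of Python. This is my own toy illustration: it keeps the talk's modulus 401, the fixed bits a4 a3 = 01, and the three shared high bits a7 a6 a5, while the function names are hypothetical.

```python
import random

Q = 401  # the talk's toy modulus

def alice_reconcile(d):
    # sample an 8-bit f = a7 a6 a5 0 1 a2 a1 a0, i.e. bits a4 a3 fixed to 01
    high = random.randrange(8)           # a7 a6 a5
    low = random.randrange(8)            # a2 a1 a0
    f = (high << 5) | (0b01 << 3) | low  # low five bits of f lie in [8, 16)
    sigma = (f + d) % Q                  # the one message sent to Bob
    return sigma, high                   # shared secret = bits a7 a6 a5

def bob_reconcile(sigma, d_prime):
    # with d and d_prime at distance at most 8, (sigma - d_prime) mod Q
    # equals f + (d - d_prime) over the integers; adding a value in
    # [-8, 8] to the low five bits (which lie in [8, 16)) cannot carry
    # into bit a5, so the top three bits still equal Alice's
    v = (sigma - d_prime) % Q
    return v >> 5
```

Correctness holds for any pair at distance at most 8, including across the mod-401 wraparound, since f + (d - d_prime) always stays inside [0, 401).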
Why is it secure? Because the only message is sigma, from Alice to Bob, and the secret f inside sigma is masked by the one-time pad d, where d is uniform over Z_401.

Now let's talk about the next primitive: the key-fuzzy message authentication code. This is a MAC with the normal MAC property: Alice and Bob share a secret key k; Alice computes the authentication tag simply by applying a function f_k to the message m, obtaining eta, and sends it to Bob; Bob, holding the secret key k, verifies eta by recomputing the tag. The key fuzziness is that we additionally require the MAC to have an alternative verification function, a fuzzy verification function phi. In this case Bob holds a secret k' that is not exactly the key, but close to k. When Bob receives the tag eta and the message m, he uses phi_{k'} to verify the tag on the message, and this should accept the tag whenever k' is close to k. Of course, for this fuzzy verification to be meaningful, it has to reject the attacker's forgeries. We consider one-time security: an attacker who sees one message and its tag cannot forge a valid tag for some other message m'. We require that Bob always rejects a forged tag, as long as k' is close to k.

Now we give a construction of this MAC. We use an error-correcting code C with large minimum Hamming distance. Suppose the secret key is a vector d over Z_q. For a message m, we consider the codeword C(m), which selects a subset of the index set; the tag is simply the subvector of d at the indices selected by the codeword. In other words, the message chooses, through the code, which coordinates of the secret key d go into the tag.
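Here is a toy Python sketch of this key-fuzzy MAC, under assumptions of mine: a message-seeded index selection stands in for the error-correcting code, and the modulus Q, key length N, tag length, and closeness threshold are illustrative parameters, not the construction's.

```python
import random

Q = 401        # toy modulus (illustrative)
N = 64         # length of the secret key vector d over Z_Q
TAG_LEN = 16   # number of coordinates each codeword selects
THRESH = 4     # per-coordinate closeness bound for fuzzy verification

def toy_codeword(m):
    # stand-in for the codeword C(m) of an error-correcting code with
    # large minimum distance: derive TAG_LEN distinct indices from m.
    # random.Random(m) is NOT cryptographic; it only fixes the indices.
    return random.Random(m).sample(range(N), TAG_LEN)

def mac(d, m):
    # the tag is the subvector of the secret key d at the indices C(m) selects
    return [d[i] for i in toy_codeword(m)]

def fuzzy_verify(d_prime, m, tag):
    # recompute the tag under the close key d_prime and accept iff every
    # coordinate is close in the centered distance mod Q
    ref = [d_prime[i] for i in toy_codeword(m)]
    def close(a, b):
        diff = (a - b) % Q
        return min(diff, Q - diff) < THRESH
    return all(close(a, b) for a, b in zip(tag, ref))
```

With d_prime obtained from d by per-coordinate noise below the threshold, a genuine tag verifies, while a tag replayed on a different message selects (mostly) different coordinates of d and is rejected with overwhelming probability; a real code with large minimum distance is what makes that guarantee rigorous.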
The verification, with another secret vector d' that is close to d, simply recomputes the tag under d' and compares it with the input tag u to see whether they are close or not.

Next we introduce the approximate smooth projective hash, the third primitive our framework requires. We build this primitive over a commitment scheme. Given an input pi, the commitment outputs a commitment value y and a witness tau, so y is the commitment to pi with witness tau, and the decommitment is (tau, pi). The commitment scheme should have the hiding property, which requires that y reveal nothing about the input pi. It should also be binding, meaning no one can decommit a string y to two different inputs pi' and pi.

Now we introduce the approximate smooth projective hash itself. For this we need two functions: the projective hash H and the approximate hash H-hat. Given the hash key k, the input pi, and a value y from the commitment space, the projective hash outputs H(k, pi, y). As for the approximate projective hash H-hat: if y is the commitment to pi with witness tau, then the projective hash can be approximated as H-hat of tau and a function value alpha(k), where alpha(k) is a function of the hash key k called the projection key.

Let's see an example of this ASPH. The commitment uses a regular LWE sample As + x together with a random vector h; we set A and h as the commitment public key. The commitment to an input pi is y = As + x + h*pi, where h*pi is just a transformation of pi, and the witness is (s, x). This commitment is hiding because of the LWE assumption. It is also binding: we show that if A is random, then y can be written in the commitment format, with a short x for some s, for at most one pi.
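To see the algebra concretely, here is a toy numeric check in Python of this LWE commitment, together with the projective hash and approximate hash that are described next. The dimensions, modulus, and noise bounds are my own illustrative choices, not the scheme's parameters, and the password pi is treated as a scalar multiplying h for simplicity.

```python
import random

Q = 12289            # toy modulus (illustrative)
N, M, L = 8, 16, 4   # A is M x N, hash key O is M x L
B = 3                # bound on "short" / Gaussian-like entries

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) % Q for row in A]

def small():
    return random.randrange(-B, B + 1)

random.seed(3)
# public key: random matrix A and random vector h
A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
h = [random.randrange(Q) for _ in range(M)]
pi = 7  # the password, as a scalar for simplicity

# commitment y = A s + x + h * pi with witness tau = (s, x)
s = [random.randrange(Q) for _ in range(N)]
x = [small() for _ in range(M)]
y = [(ai + xi + hi * pi) % Q for ai, xi, hi in zip(matvec(A, s), x, h)]

# hash key: a short ("Gaussian") M x L matrix O
O = [[small() for _ in range(L)] for _ in range(M)]

# projective hash H = O^T (y - h * pi) = O^T A s + O^T x
t = [(yi - hi * pi) % Q for yi, hi in zip(y, h)]
H = [sum(O[i][j] * t[i] for i in range(M)) % Q for j in range(L)]

# projection key alpha(O) = O^T A, approximate hash H_hat = (O^T A) s
OtA = [[sum(O[i][j] * A[i][n] for i in range(M)) % Q for n in range(N)]
       for j in range(L)]
H_hat = matvec(OtA, s)

# the two hashes differ only by O^T x, which is short since O and x are
diff = [min((a - b) % Q, (b - a) % Q) for a, b in zip(H, H_hat)]
```

Each coordinate of the gap O^T x is at most M * B * B = 144 here, far below Q, so the two sides agree up to a short offset, which is exactly the kind of approximate equality the reconciliation step absorbs.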
The projective hash, with hash key O, a Gaussian matrix, is O-transpose times (y - h*pi), that is, y minus the password part. With the above commitment format for y, this equals O-transpose * A * s + O-transpose * x. We then want to use O-transpose * A * s as the approximate projective hash. To do this, we define the projection key as alpha(O) = O-transpose * A. Then for the witness tau = (s, x), the approximate hash on input tau and alpha(O) is O-transpose * A * s. This is approximately equal to the projective hash, because O is Gaussian and x is short, so O-transpose * x is short. There is a similar version for the ring-LWE case; even though it looks similar, it is quite different in the technical argument.

Now we introduce our PAKE framework, which we present through three basic protocols. The first one is approximate key establishment. Here Alice and Bob try, based on a shared password, to establish common, approximately equal secrets. Bob starts first: he commits to the password, producing the commitment y with witness tau_1, and sends y to Alice. Alice then samples a hash key k, computes the projective hash H_1(k, pi, y), and sends the projection key alpha_1(k) to Bob. Bob then uses tau_1 and the projection key to compute the approximate hash. These two values are approximately equal, so from these approximately equal secrets, Alice and Bob can use one-message key reconciliation to agree on a common secret.

So far we still have not talked about identity verification. To be secure, the parties have to authenticate each other. To do this, Alice and Bob start from the common secret they just established, and they use another smooth projective hash, with alpha_2, H-hat_2 and H_2. Alice goes first: she commits using the common secret as the randomness.
She commits to pi, generating a commitment w with witness tau_2, and sends w to Bob. Bob holds the same common secret, so he can verify this w by recomputing tau_2 and w himself. Then the two parties derive the key K as the approximate hash, and based on K they can authenticate each other's traffic with the key-fuzzy MAC. The crucial point, for this to be secure, is the security requirement of the key-fuzzy MAC: the secret key K has to be independent of w and the projection key, because both are known to the attacker. This is called the strong smoothness property of the ASPH. It is the crucial point that allows us to use a regular commitment scheme for this second commitment, which is also much more efficient than previous authentication methods based on CCA-secure lattice encryption. This slide shows the complete protocol that pieces the three basic protocols together into one.

Then we consider the instantiations. The LWE-based protocol basically plugs all the ingredients together using the ASPH from LWE, and this one is the protocol built from the ring-LWE-based ASPH. Here is the efficiency of our protocols: we can see that they are more efficient than the previous protocols. Finally, we also conducted an experiment with our ring-LWE-based protocol, implemented on the Ubuntu operating system in C++ with the NTL package. We can see that, under reasonable parameters, it has acceptable timing efficiency and communication efficiency. Thank you.