Thank you. Thank you for the introduction. Good afternoon. My talk is about programmable hash functions from lattices and their applications to short signatures and IBE with small key sizes. This is joint work with Jiang Zhang from the State Key Laboratory of Cryptology and Yu Chen from the State Key Laboratory of Information Security.

Here is the outline of my talk. First I will give an introduction to programmable hash functions; then I will define their lattice-based analogue and show particular applications to signatures and IBE.

First, a programmable hash function (PHF) is a keyed group hash function which maps a set X to a group G. This function has two modes. In the normal mode, there is an H.Gen algorithm which generates a key K, and then with this key an H.Eval algorithm maps an element of the set X to an element of the group G. That is the normal mode. There is also a trapdoor mode, in which the H.TrapGen algorithm takes as input g and h, which are generators of the group G, and outputs a key K' together with a trapdoor td. Then the H.TrapEval algorithm takes as input this trapdoor td and maps each X to a pair (a_X, b_X) such that H.Eval(K', X) = g^{a_X} · h^{b_X}. Of course, it is required that the keys generated in the two modes be statistically close.

We say a PHF is a (u, v)-PHF if for any X_1, ..., X_u and Y_1, ..., Y_v satisfying X_i ≠ Y_j, the probability that every a_{X_i} = 0 while every a_{Y_j} ≠ 0 is at least 1/poly for some polynomial.

A PHF is useful when the discrete logarithm problem is hard over the group G. For example, assume the discrete log x of h = g^x is unknown. Then the problem we consider is to compute f_X such that H.Eval(K, X) = h^{f_X}. This problem is computationally infeasible in the normal mode, since it amounts to computing a discrete logarithm. But in the trapdoor mode, we have H.Eval(K', X) = g^{a_X} · h^{b_X} for known a_X and b_X. So if a_X = 0, the problem is easy: f_X is just b_X.
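To make the two modes concrete, here is a minimal Python sketch of a Waters-style PHF over a toy DL group (the group Z_23^*, all parameters, and all names are my own illustrative choices, not from the talk), checking the trapdoor-mode identity H.Eval(K', X) = g^{a_X} · h^{b_X}:

```python
# Toy sketch (illustrative parameters mine, not from the talk): a Waters-style
# hash over the order-11 subgroup of Z_23^*, showing the trapdoor-mode identity
# H.Eval(K', X) = g^{a_X} * h^{b_X}.
import random

p, q = 23, 11          # group Z_p^*, subgroup of prime order q
g = 2                  # generator of the order-11 subgroup
x_secret = 7           # the "unknown" discrete log in the real game
h = pow(g, x_secret, p)

random.seed(1)
L = 4                  # bit-length of hash inputs

# Trapdoor mode: each key element k_i = g^{a_i} * h^{b_i} for known (a_i, b_i).
a = [random.randrange(q) for _ in range(L + 1)]
b = [random.randrange(q) for _ in range(L + 1)]
K = [pow(g, a[i], p) * pow(h, b[i], p) % p for i in range(L + 1)]

def h_eval(K, X):
    """Normal-mode evaluation: k_0 * prod_{i : X_i = 1} k_{i+1} (mod p)."""
    z = K[0]
    for i, bit in enumerate(X):
        if bit:
            z = z * K[i + 1] % p
    return z

def trap_eval(X):
    """Trapdoor-mode evaluation: the exponent pair (a_X, b_X) mod q."""
    aX = (a[0] + sum(a[i + 1] for i, bit in enumerate(X) if bit)) % q
    bX = (b[0] + sum(b[i + 1] for i, bit in enumerate(X) if bit)) % q
    return aX, bX

X = [1, 0, 1, 1]
aX, bX = trap_eval(X)
consistent = h_eval(K, X) == pow(g, aX, p) * pow(h, bX, p) % p
```

Whether a_X happens to be 0 for a given input is exactly the event the partitioning proofs play with.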
If a_X ≠ 0, the problem is hard: f_X = a_X · x^{-1} + b_X involves the unknown discrete log x. So the magic property behind security proofs with PHFs is that both the probability that a_X = 0 and the probability that a_X ≠ 0 are noticeable. The first case is useful for simulating the security game, and the second case is useful for embedding hard problems.

Now let me give a brief state of the art. PHFs were first introduced by Hofheinz and Kiltz at CRYPTO 2008. They also constructed a (2, 1)-PHF and left the open problem of finding such PHFs in the lattice-based setting. Then in 2011, Hofheinz and Kiltz used cover-free sets to construct a (u, 1)-PHF. At CT-RSA, Yamada, Hanaoka, and Kunihiro combined two-dimensional cover-free sets with bilinear groups to reduce the number of group elements. At CRYPTO 2012, Hanaoka et al. showed that it is impossible to construct an algebraic (u, 1)-PHF over prime-order groups in a black-box way such that its key has fewer than u group elements. At CRYPTO 2013, Freire et al. constructed a (poly, 1)-PHF from multilinear groups. And at CRYPTO last year, Catalano et al. introduced asymmetric PHFs over bilinear groups.

In summary, existing PHF constructions seem specific to DL groups, and it was still unclear how to instantiate PHFs from lattices — for example, over which groups, and working with which hard problems.

In this work, we give a formalization of lattice-based PHFs by considering the algebraic properties of lattices, aimed at capturing the partitioning proof technique of PHFs. We also give two concrete constructions of lattice-based PHFs: one is an abstraction of existing constructions, and the other is a novel combination of lattices and cover-free sets. We also give generic constructions of signatures and IBEs (identity-based encryption schemes), which provide a way to unify existing schemes and also give new and efficient constructions.

Now we move to the next part. As is standard, for a matrix A and a vector u, we can define the two q-ary lattices Λ_q^⊥(A) = {e : A·e = 0 mod q} and Λ_q^u(A) = {e : A·e = u mod q}.
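The two q-ary lattices can be illustrated with a small numpy sketch (toy parameters and names are mine):

```python
# Hedged toy example (parameters mine): membership in the two q-ary lattices
# defined by a matrix A and a syndrome vector u.
import numpy as np

q, n, m = 17, 2, 5
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))

def in_perp_lattice(A, e, q):
    """e is in Lambda_q^perp(A)  iff  A e = 0 (mod q)."""
    return bool(np.all(A @ e % q == 0))

def in_coset_lattice(A, e, u, q):
    """e is in Lambda_q^u(A)  iff  A e = u (mod q)."""
    return bool(np.all(A @ e % q == u % q))

# q times a unit vector always lies in Lambda_q^perp(A).
e0 = np.array([q, 0, 0, 0, 0])
# For any integer e, setting u := A e mod q puts e in Lambda_q^u(A).
e1 = np.array([1, -2, 0, 1, 1])
u = A @ e1 % q
```

The hard problems below ask for vectors in these lattices that are additionally *short*.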
There is a classic problem called the SIS (small integer solution) problem. It says: given a matrix A, find a short nonzero vector s such that A·s = 0 mod q. The inhomogeneous SIS problem says: given A and u, find a short s such that A·s = u mod q. Both problems are as hard as the SIVP problem in the worst case, as shown by Ajtai, GPV, et cetera.

We say a matrix A is a trapdoor matrix if there exists a known trapdoor that allows an efficient algorithm, SampleD, to sample short vectors from these lattices. In the paper we use the Micciancio-Peikert (MP12) trapdoor: for k = log q, there is a publicly known trapdoor matrix G, together with a function G^{-1}, satisfying the following two conditions.

We also use some other useful facts from the literature. The first is that there is an algorithm TrapGen that outputs an almost uniform matrix A together with a trapdoor of A. The second fact is that for any A and B, if A is a trapdoor matrix, then so is the concatenation [A | B]. The third is that for any A, any small R, and any invertible S, the matrix [A | A·R + S·G] is a trapdoor matrix.

Now, the definition of PHFs from lattices. A lattice-based PHF is a keyed hash function which maps a set X to Z_q^{n×m}. The normal mode is as before: first, H.Gen generates a key K, and then H.Eval takes as input K and X and outputs Z, which is a matrix in Z_q^{n×m}. In the trapdoor mode, we need a uniformly random matrix A and a trapdoor matrix B. The H.TrapGen algorithm takes as input A and B and outputs a key K' and a trapdoor td. Then, with this td, H.TrapEval maps any X to a pair of matrices (R_X, S_X) such that H.Eval(K', X) = A·R_X + S_X·B. Also, we need the keys in the two modes to be statistically close.

And we define a (u, v)-PHF: for any X_1, ..., X_u and Y_1, ..., Y_v satisfying X_i ≠ Y_j, the probability that every S_{X_i} is zero
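Since the gadget matrix G and its inversion G^{-1} recur throughout the talk, a toy numpy sketch may help (parameters and function names are mine; this is the standard power-of-two gadget, not code from the paper): G = I_n ⊗ (1, 2, ..., 2^{k-1}), and G^{-1} is bit decomposition, so that G · G^{-1}(V) = V mod q while G^{-1}(V) has only 0/1 (i.e. short) entries.

```python
# Sketch of the MP12-style gadget with toy parameters (mine): the matrix G and
# the bit-decomposition function G^{-1}.
import numpy as np

n, q = 3, 16
k = 4                                          # k = log2(q)
gadget = np.array([1 << i for i in range(k)])  # (1, 2, 4, 8)
G = np.kron(np.eye(n, dtype=int), gadget.reshape(1, k))  # n x nk

def G_inv(V):
    """Column-wise binary decomposition: returns an nk x m matrix of bits."""
    V = np.asarray(V) % q
    out = np.zeros((n * k, V.shape[1]), dtype=int)
    for col in range(V.shape[1]):
        for row in range(n):
            for i in range(k):
                out[row * k + i, col] = (V[row, col] >> i) & 1
    return out

rng = np.random.default_rng(42)
V = rng.integers(0, q, size=(n, 2))
W = G_inv(V)
```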
while every S_{Y_j} is invertible is no less than 1/poly for some polynomial. This is the definition of a lattice-based PHF.

Then we give two constructions of PHFs from lattices. For the first one, let E be a deterministic encoding from the set X to vectors of scalars. The H.Gen algorithm first randomly chooses L + 1 matrices A_0, A_1, ..., A_L and outputs them as the key. The H.Eval algorithm simply first maps X with the encoding into (c_1, ..., c_L) and then returns Z = A_0 + Σ_{i=1}^{L} c_i·A_i. Actually, this function has been implicitly used to construct both signatures and encryption schemes.

Our second construction uses a v-cover-free family over the domain [N], the set of integers from 0 to N − 1. A family CF consists of sets CF_X, each a subset of [N] indexed by an X taken from the set X. The family is v-cover-free if for any subset S of X whose size is no more than v, and for every Y not in S, CF_Y is not covered by the union of the CF_X for X ∈ S. And CF is η-uniform if each element has size η. For example, for N = 5, the family on the slide is a 3-uniform 2-cover-free set. It has been shown that there is a deterministic polynomial-time algorithm that, on input integers L and v, returns an η-uniform v-cover-free family for the corresponding N and η.

The key idea of the type-2 construction is to notice that a v-cover-free family actually gives a partition of the domain, in the sense that for any subset S of X whose size is no more than v, and any Y not in S, there exists an element z* ∈ CF_Y that does not belong to the union of the CF_X. Since CF is defined over the domain [N], if we assign a unique piece of key material to each element of [N], it seems safe to embed a trapdoor by guessing the target element z*. However, such a direct use will result in a key size proportional to N, which is even worse than the type-1 construction.
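A short Python sketch of the standard polynomial-based cover-free family may make the definitions concrete (the construction is classical; the parameters and names here are my own, not the ones used in the paper):

```python
# Hedged sketch (parameters mine): the polynomial cover-free family. For
# degree-<=d polynomials f over F_p, let CF_f = {(x, f(x)) : x in F_p}, a
# subset of a domain of size N = p^2. Two distinct such polynomials agree on
# at most d points, so the family is v-cover-free whenever v*d < p, and each
# CF_f has size eta = p (eta-uniform).
from itertools import product

p, d = 5, 1          # v*d < p  =>  v-cover-free for v up to 4
v = (p - 1) // d

def cf_set(coeffs):
    """CF_f encoded as the set of pairs (x, f(x) mod p)."""
    return {(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(p)}

family = [cf_set(c) for c in product(range(p), repeat=d + 1)]  # all p^2 polys

def covered(target, others):
    """Is the set 'target' covered by the union of the sets in 'others'?"""
    union = set().union(*others) if others else set()
    return target <= union
```

Each of the 4 "other" lines can cover at most 1 of the 5 points of the target line, which is exactly the cover-free property.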
Our solution is to use the binary representation of z. Let μ = log N, and for every i from 0 to μ − 1 we assign a matrix A_i for the i-th bit position. To do so, we define a function K_z^{(i)} recursively: if i = μ − 1, it equals A_{μ−1} − b_{μ−1}·G; otherwise it equals (A_i − b_i·G) · G^{-1}(K_z^{(i+1)}), using the gadget inversion. Then, for the binary representation (b_0, ..., b_{μ−1}) of z, this recursion assigns to z the unique matrix K_z^{(0)}.

Let z* be some target element and let (b_0*, ..., b_{μ−1}*) be the binary representation of z*. If we set the A_i in the trapdoor form on the slide, then we get that K_z^{(0)} is actually A·R_z + S_z·G for some small R_z, and S_z has the form of a product: S_z = Π_{i=0}^{μ−1} (1 − b_i* − b_i) · I_n. From this, if z is different from z*, then there is always one factor equal to zero, so S_z = 0. And if z = z*, then we can deduce that S_z = (−1)^t · I_n, where t is the number of ones in the binary representation of z*. By this binary representation we reduce the key size to log N, and by this factorization of S_z we keep the partition property needed for a PHF.

So here is the type-2 construction of PHFs from lattices. Here μ = log N and k = log q. The H.Gen algorithm first chooses Â and the A_i from the appropriate set and returns the key K = (Â, A_0, ..., A_{μ−1}). The H.Eval algorithm takes as input K and X and returns Â + Σ_{z ∈ CF_X} K_z^{(0)}.
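The partition behavior of S_z can be checked at the scalar level with a few lines of Python (a sketch of my own, dropping the matrices and keeping only the product of bit terms):

```python
# Scalar-level check (mine) of the partition property from the talk: with
# S_z = prod_i (1 - b_i* - b_i), the product vanishes exactly when some bit
# of z differs from z*, and equals (-1)^t (t = #ones of z*) when z = z*.
mu = 4                      # mu = log N, so z ranges over [0, 16)

def bits(z):
    return [(z >> i) & 1 for i in range(mu)]

def S(z, z_star):
    s = 1
    for b, b_star in zip(bits(z), bits(z_star)):
        s *= (1 - b_star - b)   # 1 if both bits 0, -1 if both 1, 0 if they differ
    return s

z_star = 0b1011
t = bin(z_star).count("1")      # t = 3 here
```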
We can prove that this is actually a (1, v)-PHF: for a random z* from [N], if the A_i are set as before and Â is set accordingly, then S_X = 0 and every S_{Y_j} is invertible with probability no less than 1/N, for any Y_1, ..., Y_v and any X different from every Y_j. This follows simply from the cover-free property we mentioned before.

So let me show how to apply PHFs from lattices to construct signature and identity-based encryption schemes. A signature scheme consists of three parts. The first is the KeyGen algorithm. This algorithm first runs the TrapGen algorithm to output a matrix A together with a trapdoor R. Then we run the H.Gen algorithm of the PHF to get a key K, and we also need a uniform vector u. We return (A, K, u) as the verification key of the signature scheme, and R is the signing key. From this construction we see that the key size is essentially determined by the size of the PHF key.

To sign a message M, we first run the H.Eval algorithm on the message M to get H_K(M), and then we form A_M = [A | H_K(M)]. Note that R is a trapdoor of A, so we also have a trapdoor of A_M. Then we can run the SampleD algorithm on A_M and u to sample a short vector e, and we return this vector as the signature. To verify a signature, we also run the H.Eval algorithm to get H_K(M), form A_M, and then check whether the norm of e satisfies the required bound and A_M·e = u. This is the verification algorithm.

We see that the signature is a single short vector, and if we apply the second of our previous constructions, the verification key contains only about log n matrices. Correctness follows from the fact that if A is a trapdoor matrix, so is A_M. And for the security proof, recall that in the trapdoor mode of our construction we have H_K(M) = A·R_M + S_M·B for some small R_M.
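Here is a structural sketch of the verification equation in numpy (toy parameters mine; the signer is faked by fixing the short vector e first and deriving u from it, since real preimage sampling would need a trapdoor):

```python
# Structural sketch (parameters mine; no real trapdoor sampling): a signature
# is a short e with [A | H.Eval(K, M)] . e = u (mod q).
import numpy as np

q, n, m, L = 257, 4, 8, 3
rng = np.random.default_rng(7)
A = rng.integers(0, q, size=(n, m))
K = rng.integers(0, q, size=(L + 1, n, m))   # key of the type-1 PHF

def h_eval(K, msg_bits):
    """Type-1 evaluation: Z = A_0 + sum_i c_i A_i (mod q)."""
    Z = K[0].copy()
    for i, c in enumerate(msg_bits):
        Z = (Z + c * K[i + 1]) % q
    return Z

def verify(A, K, msg_bits, e, u, bound):
    A_M = np.concatenate([A, h_eval(K, msg_bits)], axis=1)   # n x 2m
    return bool(np.max(np.abs(e)) <= bound and np.all(A_M @ e % q == u))

msg = [1, 0, 1]
# Stand-in "short" signature vector, fixed by hand instead of sampled.
e = np.array([1, -1, 2, 0, 1, 1, -2, 0, 1, 0, -1, 2, 0, 1, -1, 1])
A_M = np.concatenate([A, h_eval(K, msg)], axis=1)
u = A_M @ e % q
```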
For the signing queries, the corresponding S_M is invertible, so we can generate signatures. And for the forged message, S_{M*} is actually zero. Therefore we can solve the inhomogeneous SIS problem from the forgery.

Next is the application to identity-based encryption. An IBE scheme consists of four algorithms. First, Setup: this algorithm also runs TrapGen to generate a matrix A together with a trapdoor R. Then we run the H.Gen algorithm of the PHF to generate a key K, and we also generate a matrix U. We return mpk = (A, K, U) as the master public key, and R is the master secret key. Again, the master public key size is determined by the size of the PHF key.

To extract a secret key for an identity ID, we first run the H.Eval algorithm to get H_K(ID), and then we form A_ID = [A | H_K(ID)], as in the signature scheme. We can sample a short E_ID using R such that A_ID·E_ID = U, and return E_ID.

To encrypt a message M, we first choose s and sample the noise x_0 and x_1 from the appropriate distributions. We run the H.TrapGen algorithm, where A here is a uniform matrix and B is a trapdoor matrix, to get a key K' and a trapdoor td; then by H.TrapEval on this ID we get R_ID and S_ID. The ciphertext (C_0, C_1) is formed as C_0 = U^T·s + x_0 + ⌈q/2⌉·M and C_1 = A_ID^T·s + (x_1, R_ID^T·x_1), and we output C = (C_0, C_1).

To decrypt a ciphertext, we simply compute b = C_0 − E_ID^T·C_1, which is a vector, and in each coordinate we compare b_i with q/2 to determine whether the corresponding message bit M_i is zero or one. Correctness is as before. But the security needs a high-min-entropy property: we require that the keys K and K' are statistically close even conditioned on this knowledge, and we also require that R_X^T·v is close to uniform when v is selected uniformly.
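The decryption correctness C_0 − E_ID^T·C_1 ≈ ⌈q/2⌉·M can be simulated for a single bit in numpy (toy parameters mine; the identity key e is again fixed in advance rather than sampled with a trapdoor, and the R_ID noise term is omitted for simplicity):

```python
# Correctness sketch (parameters mine) of the dual-Regev-style step of the
# IBE: decryption computes C0 - e^T C1 = (q//2)*M + small noise (mod q).
import numpy as np

q, n, m = 4093, 8, 16
rng = np.random.default_rng(3)
A_ID = rng.integers(0, q, size=(n, m))         # stands for [A | H.Eval(K, ID)]
e = rng.integers(-3, 4, size=m)                # "short" identity secret key
u = A_ID @ e % q                               # so that A_ID . e = u (mod q)

def encrypt(M):
    s = rng.integers(0, q, size=n)
    x0 = rng.integers(-4, 5)                   # small noise
    x1 = rng.integers(-4, 5, size=m)
    C0 = (u @ s + x0 + (q // 2) * M) % q
    C1 = (A_ID.T @ s + x1) % q
    return C0, C1

def decrypt(C0, C1):
    b = (C0 - e @ C1) % q
    # Closer to 0 => bit 0; closer to q/2 => bit 1.
    return 1 if min(b, q - b) > q // 4 else 0
```

Here the noise |x0 − e^T·x1| is at most 4 + 16·3·4 = 196, well below q/4, so decryption always recovers the bit.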
This guarantees that the programmability property still holds even though this term might leak some information about the trapdoor td.

Okay, conclusion. We presented a formalization of lattice-based PHFs and gave generic constructions of signatures and IBE from lattice-based PHFs. We also presented a new signature scheme and a fully secure IBE with small key sizes using our type-2 PHF. The paper also gives an improved signature scheme by combining different PHFs. Okay, that's all. Thank you for your attention.