Hi, everybody. This talk is about ring-based identity-based encryption with an asymptotically shorter MPK and tighter security. This is joint work with Feng-Hao Liu, Han Wang, and Zhedong Wang; I'm Parhat Abla.

Identity-based encryption (IBE) is a generalization of public-key encryption in which a message is encrypted under the receiver's name, or identity. The concept was first introduced by Shamir, and the first construction was given by Boneh and Franklin. The security notion is similar to that of public-key encryption, except that in IBE the adversary is allowed to query the secret keys of polynomially many identities. The adversary then submits two messages together with a challenge identity that was not queried before, receives the challenge ciphertext, and guesses which message was encrypted under the challenge identity. This is the adaptive security notion, in the standard model.

Under classic assumptions there are many constructions; among them, Waters' construction, which uses dual system encryption, is very simple and efficient. But we do not know of an adaptively secure IBE from LWE, or any post-quantum IBE, that is as compact as the selectively secure ones, although there have been many attempts to construct lattice-based adaptive IBE schemes as compact as the selectively secure ones. The first adaptively secure lattice construction was given by [ABB10], and its master public key needs λ matrices. The current best construction is given by Yamada, and it needs ω(log λ) matrices in the master public key. However, Yamada's construction relies on a probabilistic argument, and its design is implicit. So a natural question is: can we construct a better lattice-based IBE that is more compact, and that has an explicit design rather than a probabilistic one?

As for the security of adaptive IBE schemes, there are two techniques to prove adaptive security.
The first is the artificial abort technique. With this technique, the reduction's running time blows up by a factor of 1/ε². Without the artificial abort technique, the running-time blowup can be removed, but the advantage loses another factor of ε. So another natural question is: can we do better?

For these two questions, in this paper we give a better ring-based IBE: it is more compact, and it comes with a better security analysis both with and without artificial abort. Let's first recap Yamada's IBE design. In that construction, the master public key consists of ℓ + 1 matrices, A and B₁, …, B_ℓ. The key generation algorithm, on input an identity ID and the master secret key, first computes F_ID, and then uses SampleLeft with T_A, the trapdoor of A, to sample a Gaussian-distributed vector, which becomes SK_ID. To simulate the KeyGen algorithm without the master secret key, the simulator first replaces each B_i with A·R_i + h_i·G, a GSW-type encoding, and computes F_ID accordingly. From this we can see that if we view the second term as a function of ID indexed by the ℓ values h_i, then the computation of F_ID is exactly a homomorphic evaluation of this function: given the GSW encodings of the index of the hash function, we homomorphically compute the hash function.

The other requirement on this hash function is that it partitions the identity space: for the challenge identity ID*, the hash value equals 0, while for all other queried identities the hash value is nonzero. To satisfy these two requirements, Yamada designed a hash function of the form a·ID + b mod ρ, where a, b, ρ have ω(log λ) bits, and showed that it is a partitioning function. But the homomorphic evaluation of this function is still expensive. So now we show how to improve this hash function: we design a new hash function.
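To make the partitioning requirement concrete, here is a toy sketch (all numbers are hypothetical, far smaller than the ω(log λ)-bit parameters in the actual scheme) that enumerates every key of a Yamada-style hash a·ID + b mod ρ and computes, exactly, the probability that a random key maps the challenge identity to 0 while keeping all queried identities nonzero:

```python
rho = 11                           # toy prime modulus (hypothetical)
id_star, queried = 3, [1, 4, 7]    # hypothetical challenge / queried identities

def h(a, b, z):
    # Yamada-style partitioning hash: a*ID + b (mod rho)
    return (a * z + b) % rho

# exact partitioning probability over the random key (a, b)
good = sum(
    h(a, b, id_star) == 0 and all(h(a, b, z) != 0 for z in queried)
    for a in range(rho) for b in range(rho)
)
gamma = good / rho**2
# b is forced to -a*id_star; any a != 0 then keeps the queried IDs nonzero
assert good == rho - 1
print(gamma)                       # noticeable: roughly 1/rho
```

The point of the toy run is that the partitioning probability γ is about 1/ρ, i.e., noticeable, which is what the security reduction needs.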
Our new hash function needs only ω(1) ring vectors, whereas Yamada's scheme needs ω(log λ) bits to encode its index, so the public key in Yamada's scheme needs ω(log λ) ring vectors (or matrices). We improve on that.

Let's see our design. First of all, let's see a result from [AFL]. It says that if a hash function satisfies two probability properties — first, Pr[H(x) = 0] = 1/A; second, for any two distinct inputs x₁, x₂, the conditional probability Pr[H(x₁) = 0 | H(x₂) = 0] is at most 1/B — where A is larger than B and B is larger than the number of queries Q, then the hash function partitions the identity space with noticeable probability. This is an alternative definition of partitioning. So our goal is to construct a hash function with the above properties, which means A and B must be larger than any polynomial Q.

Let's see the basic design. The basic hash of an input z (which might be an identity) is H(z) = ECC(z)[α] − β: we encode z with an error-correcting code, which gives a vector, randomly choose a position α, and subtract β, a random element of Z_p. It is easy to see that for any z, Pr[H(z) = 0] = 1/p, and the conditional probability for any distinct z₁, z₂ is at most 1 − ε, where ε is the relative distance of the error-correcting code. Obviously this hash function does not satisfy the requirements of Lemma 1, because p is very small: it is not larger than every polynomial. So the second step of the construction is to repeat the basic hash t times in parallel. With the t-fold repetition, we get a repeated hash function H_rep with Pr[H_rep(z) = 0] = 1/pᵗ, and the conditional probability is at most (1 − ε)ᵗ. If we set t large enough, these two probabilities satisfy Lemma 1.
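The two probability properties of the basic hash, and how repetition amplifies them, can be verified exactly by enumeration. A toy sketch follows; the codewords below are hypothetical stand-ins for a real error-correcting code:

```python
from fractions import Fraction
from itertools import product

p = 5                      # alphabet Z_p (toy; the paper takes p close to n^2)
c1 = [0, 1, 2, 3, 4, 0]    # ECC(z1): hypothetical codeword
c2 = [0, 2, 4, 1, 3, 0]    # ECC(z2): agrees with c1 in 2 of 6 positions
L = len(c1)

def h(c, alpha, beta):
    # basic hash: H(z) = ECC(z)[alpha] - beta  (mod p)
    return (c[alpha] - beta) % p

# exact probabilities over the random key (alpha, beta)
keys = list(product(range(L), range(p)))
pr_z1 = Fraction(sum(h(c1, a, b) == 0 for a, b in keys), len(keys))
assert pr_z1 == Fraction(1, p)          # Pr[H(z1) = 0] = 1/p

cond_keys = [(a, b) for a, b in keys if h(c2, a, b) == 0]
pr_cond = Fraction(sum(h(c1, a, b) == 0 for a, b in cond_keys), len(cond_keys))
agreements = sum(x == y for x, y in zip(c1, c2))
assert pr_cond == Fraction(agreements, L)   # = 1 - (relative distance)

# t-fold repetition with independent keys: the probabilities multiply
t = 3
assert pr_z1 ** t == Fraction(1, p ** t)
print(pr_z1, pr_cond)
```

With independent keys per repetition, the two probabilities become 1/pᵗ and at most (1 − ε)ᵗ, which is exactly the amplification the design relies on.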
But the vector output of this repeated hash function is not compatible with rings, that is, with the design of a ring-based IBE. So we embed the repetition hash into a ring vector: we embed the vector into the exponents of x. The embedding is one-to-one, and the probabilities above are preserved. So the embedding maps a vector to a single ring element with small coefficients, which is invertible by design. This design satisfies the requirements of Lemma 1, because pᵗ is larger than any polynomial Q if we set p ≈ n² and t = ω(1).

Our second goal is to homomorphically evaluate this hash function. To do so, we rewrite the final hash function: the second term stays as it is, and we rewrite the first term as a sum over all positions j. If j = α_i, the summand equals the original α_i term; if j ≠ α_i, the summand equals 0; so the sum equals the original expression.

Now we want to homomorphically evaluate this hash function: we are given encodings of the index of the hash function, the α_i's and β_i's, while z is in the clear. First of all, given encodings of the β_i's, we can homomorphically evaluate the second term by the additive homomorphism of GSW-type encodings. Then we show that if we are given an encoding of the equality test [j = α_i] — where j is in the clear — we can homomorphically evaluate the first term, and hence the whole hash function. So our goal reduces to this: given an encoding of α_i, which is part of the index of the hash function, and a clear value j, homomorphically evaluate whether j = α_i.
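The rewriting step above — replacing the selection of position α_i by a sum over all clear positions j, weighted by the equality indicator [j = α_i] — is easy to sanity-check at the plaintext level. A toy sketch with hypothetical values:

```python
# selecting c[alpha_i] for each secret position alpha_i ...
L = 8
c = [3, 1, 4, 1, 5, 9, 2, 6]   # clear values indexed by j (hypothetical)
alphas = [2, 5, 5]             # secret positions alpha_i (hypothetical)

direct = sum(c[a] for a in alphas)

# ... equals a sum over ALL clear positions j, weighted by the equality
# indicator [j == alpha_i]; only the indicator depends on the secret,
# and that indicator is what gets evaluated homomorphically.
rewritten = sum(int(j == a) * c[j] for a in alphas for j in range(L))

assert direct == rewritten
print(direct)
```

The design point is that after the rewrite, the only secret-dependent quantity is the 0/1 indicator, so a homomorphic equality test on encodings of the α_i's suffices to evaluate the whole hash.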
We call this homomorphic equality testing. For this goal, the previous approach used bitwise comparison: decompose α_i into bits α_{i,1}, …, α_{i,log n}, and compare them bitwise with the bits of j, homomorphically multiplying the GSW-type encodings. So that approach needs an encoding of each bit of α_i — that is, log n ring vectors to encode α_i. So we asked: can we do better?

We have an observation about the ring R = Z_q[x]/(xⁿ + 1). Let f(v) = 1 + v + v² + ⋯ + v^{m−1}, where m = 2n is the multiplicative order of x in R. If v = 1, there are m terms, so f(v) = m. If v is a power of x other than 1, then the numerator of (v^m − 1)/(v − 1) equals 0 while the denominator does not, so f(v) = 0. Hence, evaluating f at x^{α−j}: if α = j, then x^{α−j} = 1 and f evaluates to m; if α ≠ j, then v ≠ 1 and f evaluates to 0.

Based on this observation, given a GSW encoding of x^{α−j}, we can homomorphically evaluate this polynomial and obtain an encoding of f(x^{α−j}); homomorphically multiplying by m⁻¹, we get the homomorphic equality test of j and α_i. This test needs only one encoding — an encoding of x^{α_i} — and no bitwise encoding of α_i.

So our test works like this. We are given an encoding of x^α, where α ∈ Z_n, and j is in the clear. We first compute an encoding of x^{α−j}, namely the encoding of x^α times x^{−j}, which we can do because j is in the clear. We then homomorphically evaluate the function f, obtaining an encoding of f(x^{α−j}), and finally multiply by m⁻¹. If j equals α, the exponent α − j is 0.
Then x⁰ = 1, so f evaluates to m, and multiplying by m⁻¹ gives an encoding of 1; otherwise we get an encoding of 0. So we have exactly the homomorphic equality test we wanted, which completes the homomorphic evaluation of our hash function, and with it the design of the hash function itself.

By the way, let's see another application of this equality testing: unpacking. Given an encoding of x^α, we can unpack α and obtain an encoding of each bit of α, by homomorphically summing the equality tests [j = α] over all j whose i-th bit is 1. If the i-th bit of α is 1, then exactly one such j equals α, that test gives an encoding of 1, and the whole sum is an encoding of 1. If the bit is 0, every term in the sum is an encoding of 0, and so is the sum. We also get packing: given encodings of the bits of α, we can pack them to obtain an encoding of x^α. A direct application is that this unpacking improves the MPK size of the IBE in Yamada's scheme by a factor of log λ.

So let's see our construction. It is very similar to previous constructions. Setup runs TrapGen to generate A together with T_A, the trapdoor of A, and then generates the B_i's and the remaining public vectors; the MSK is T_A. Key generation homomorphically evaluates the hash function, as a circuit, and then uses SampleLeft with the trapdoor T_A to sample a short Gaussian vector on the corresponding lattice coset; that vector is the secret key. Encryption is dual-Regev encryption: it computes F_ID using the public evaluation of the hash function.
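Here is a plaintext-level sketch of the equality-test gadget and the bit-unpacking — no GSW encodings, it only checks the ring identity m⁻¹·f(x^{α−j}) ∈ {0, 1} in R = Z_q[x]/(xⁿ + 1); n and q are toy values, not the paper's parameters:

```python
# R = Z_q[x]/(x^n + 1): f(v) = 1 + v + ... + v^(m-1), with m = 2n the order
# of x, equals m at v = 1 and 0 at v = x^k for k != 0 (mod 2n).
n, q = 8, 97
m = 2 * n

def mul(a, b):
    # multiply coefficient lists of length n modulo x^n + 1 and modulo q
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:                          # x^n = -1
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

def monomial(e):
    # x^e in R, for any integer e (x has multiplicative order 2n)
    e %= m
    c = [0] * n
    if e < n:
        c[e] = 1
    else:
        c[e - n] = q - 1                   # x^(n+i) = -x^i
    return c

def eq_test(alpha, j):
    # evaluate m^(-1) * f(x^(alpha - j)); returns 1 iff alpha == j
    v = monomial(alpha - j)
    acc, power = [0] * n, monomial(0)
    for _ in range(m):                     # f(v) = sum_{i < m} v^i
        acc = [(a + p) % q for a, p in zip(acc, power)]
        power = mul(power, v)
    m_inv = pow(m, -1, q)
    out = [(m_inv * a) % q for a in acc]
    assert out[1:] == [0] * (n - 1)        # the result is a constant in R
    return out[0]

assert eq_test(5, 5) == 1
assert all(eq_test(5, j) == 0 for j in range(n) if j != 5)

# unpacking: the i-th bit of alpha is the sum of equality tests over
# all clear j whose i-th bit is 1
alpha = 5                                  # binary 101
bits = [sum(eq_test(alpha, j) for j in range(n) if (j >> i) & 1) % q
        for i in range(3)]
assert bits == [1, 0, 1]
```

In the scheme itself, x^α is given as a GSW-type encoding and f is evaluated homomorphically; the sketch only demonstrates why a single ring element x^α suffices where the bitwise approach needed log n encodings.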
This public evaluation mirrors the one in the key generation algorithm. The message μ is then encrypted with dual-Regev encryption, and decryption is as in dual-Regev as well. So that is our design; since t = ω(1), it needs only ω(1) ring vectors.

Next, let's see our better security analysis. Here we recall an abort-based security framework in the style of Waters, in which the challenger is allowed to abort the whole game; this differs from the usual security analysis. If α is the non-abort probability and β is the conditional success probability, then the advantage of the adversary is ε = α(2β − 1). There is also a lemma, which is intuitively correct in terms of statistical distance: if we replace a distribution P with a statistically close Q, then the non-abort probability is essentially unaffected, and the conditional success probability also stays very close to the previous β under the distribution P. This is very important in our security analysis.

Now the analysis itself. Game 0 is the original game, with probabilities α₀ and β₀. In Game 1, the challenger chooses a random partition function H by sampling a random index; the challenger may or may not use the artificial abort technique, and with artificial abort it computes an estimate of γ, the partitioning probability. Let's set this part of the analysis aside for a moment. From Game 1 to Game 2, we use the leftover hash lemma, because we replace the B_i's with GSW-type encodings, which is statistically close. From Game 2 to Game 3, we switch from SampleLeft to SampleRight, because we can compute R_ID by TrapEval: we have the R_i's and the hash function.
The challenger knows all of these, so it can run TrapEval to obtain R_ID and compute SK_ID using SampleRight with R_ID, which serves as a trapdoor for the matrix. So the simulator can answer key-generation queries without the MSK. In Game 4, the challenger replaces the challenge ciphertext component c₁ with a random element: since the hash value H(ID*) equals zero, the re-randomization properties apply, and by the pseudorandomness of ring-LWE this game is statistically close to Game 3. So Game 1 through Game 4 are all close.

Let's now come back to Game 0 versus Game 1, where γ is the partitioning probability. We have a lemma. It says that α₁, the non-abort probability in Game 1, is at least γ_min · α₀, where γ_min is a lower bound on the estimate of γ; and that β₁ is at least (γ_min / γ_max) · β₀, where γ_max is an upper bound on the estimate of γ. So if γ_min / γ_max is at least 1 − δ₀/4, then ε₁ is at least ε₀ · γ_min / 4, where δ₀ = 2β₀ − 1.

If the challenger uses the artificial abort technique and sets γ_min, the lower bound on the estimate of γ, to 1/(2Q) and γ_max to 1/Q, then the ratio is 1/2. To increase this ratio to the required 1 − δ₀/4, we need O(1/δ₀²) samples by a Chernoff bound; since δ₀ ≥ ε₀, this is at most the O(1/ε₀²) samples that previous reductions need. So using the artificial abort technique, our reduction is better than previous reductions.
If the challenger does not use the artificial abort technique, and instead sets γ_min and γ_max so that γ_min / γ_max already meets the required bound, with γ_min on the order of δ₀/(4Q), then the ratio is exactly what we wanted, but the reduction loses a factor of δ₀, because ε₁ is at least γ_min times ε₀. Still, comparing with previous reductions: their loss factor is ε₀², while ours is δ₀, which is at least ε₀. So our reduction is better.

So in this paper, we gave a better IBE scheme, with a shorter master public key, an explicit design, and a tighter reduction. Thank you for your time.