So the last talk of this session is on partitioning via nonlinear polynomial functions: more compact IBEs from ideal lattices and bilinear maps. The speaker is Shuichi Katsumata.

Thank you for the introduction. My name is Shuichi Katsumata, from the University of Tokyo, and this is joint work with Shota Yamada from AIST. Wait, okay, this is not working... okay. So, apparently I discovered this two days ago: I'm actually as old as Asiacrypt. We were both born in 1991, in Japan, in Tokyo. Okay, it's not that important right now, but I guess that's why I'm feeling comfortable doing this presentation.

Okay, so for the background: what we did is construct adaptively secure IBE schemes, in both the lattice and the bilinear map settings. The current situation for lattices is that we do have adaptively secure lattice-based IBEs, but they all require longer public parameters compared with the selectively secure ones, so that's not so good. For the bilinear map setting, we have the great works on dual-system-encryption type methodologies, but the thing is, if we want to base our security on search problems, then we need long public parameters. So the topic of this talk is: can we achieve more compact IBEs?

Our results are mainly two, but at a high level they use the same technique: both base their security on the partitioning technique, where we use a nonlinear function to partition the identity space. The first IBE scheme uses ideal lattices and improves upon the currently best known scheme, Yamada '16 from Eurocrypt, which needed a super-polynomial modulus ring LWE assumption; ours only requires a polynomial modulus ring LWE assumption. During the construction we use the commutativity of the ring in a very essential way, so we actually don't know how to instantiate our scheme over standard lattices right now. For the IBE scheme from bilinear maps, we construct the first scheme with a sublinear-size master public key from search problems rather than decisional problems, and the interesting technique is that we use the Boneh-Boyen technique during the construction rather than in the security proof, where it is usually used.

Okay, so the agenda is like this: we have the preliminaries, and for those who are not that familiar with lattices, it's okay, because we also have the bilinear map setting, and vice versa.

Before anything, I want to explain what adaptive security for IBE is. I guess everybody has been talking about this in this session, so I'll go over it really fast. The adversary can query secret keys for any identities he wants, and at some point he adaptively chooses some ID*, which will be the challenge identity, and receives the challenge ciphertext, under the condition that ID* does not equal any of the IDs he has queried. He can then keep making secret key queries, and this is the adaptive security game.
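As an aside, here is a minimal sketch, in Python, of the adaptive security game just described. Everything here is a placeholder rather than part of any actual scheme: the `ibe` object is assumed to expose setup/keygen/encrypt, and `adversary` is an arbitrary two-phase strategy.

```python
import secrets

def adaptive_ibe_game(ibe, adversary):
    """One run of the adaptive IBE security game; returns True if the adversary wins."""
    mpk, msk = ibe.setup()
    queried = set()

    def keygen_oracle(identity):
        # The adversary may ask for the secret key of any identity, adaptively.
        queried.add(identity)
        return ibe.keygen(msk, identity)

    # Phase 1: adaptive key queries, then the adversary names its challenge.
    id_star, m0, m1, state = adversary.choose_challenge(mpk, keygen_oracle)
    assert id_star not in queried, "the challenge identity must be fresh"

    b = secrets.randbits(1)
    ct_star = ibe.encrypt(mpk, id_star, [m0, m1][b])

    # Phase 2: more adaptive key queries, for anything except id_star itself.
    def keygen_oracle_phase2(identity):
        assert identity != id_star
        return keygen_oracle(identity)

    b_guess = adversary.guess(ct_star, state, keygen_oracle_phase2)
    return b_guess == b
```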
Okay, let's go to the lattice section. Before anything, I want to explain the template construction for lattice IBEs; there is basically only one line of thought when creating lattice-based IBEs. For the master public key, we have a matrix A and a vector u, plus some auxiliary information, which I will define later.

For key generation, what we do is hash the ID to a lattice unique to that ID: we form the matrix [A | H(ID)], and we sample a short vector e such that [A | H(ID)] · e = u holds. We can do this using the master secret key, which is a trapdoor for the matrix A, and without that trapdoor we cannot do the sampling. This short vector e is the secret key.

For encryption, we create two LWE instances. We have this u term, so one part is a public LWE instance, u^T·s plus some noise, plus the message; and for the other part, where we bind in the information of the ID, we use the lattice unique to this ID and create an LWE instance with respect to [A | H(ID)]. I don't have a slide for decryption, but it's very simple: if you have the vector e, you take the inner product of e with the second part, and since the equation above holds, this recreates the u^T·s term. Subtracting the two, we are left with the message times q/2 plus some small noise, so if the whole thing is small, then M is zero, and otherwise it is one. That's how decryption works.

The template for the security proof is that we basically only use the partitioning technique for lattices; it is the only technique we have so far. We embed the problem instance into the public parameters so that the hash of the ID has the form H(ID) = A·R_ID + F(ID)·G. This H(ID) is publicly computable by everybody, so during encryption everybody can compute it, but the decomposition is only implicitly set during the simulation, and only the simulator knows that it has this particular form. The matrix R_ID is called the simulator's trapdoor, and we need it to be small for the simulation to go through. The black matrix G is the gadget matrix; I'm not going to define it, but it's just a special type of matrix that allows you to do special operations. And F(ID) is the implicit partitioning function.

During the simulation, what we hope for is that for the secret key queries, F(ID) is not zero. When that happens, the gadget matrix can operate, and using the simulator's trapdoor and the gadget matrix, we can sample a short vector just as in the real world. However, for the challenge ciphertext, we want F(ID*) to be zero, because then we lose the gadget matrix, and without the power of the gadget matrix the simulator's trapdoor is kind of useless; but on the other hand, we can now embed the LWE instance into the challenge ciphertext. So we want exactly this kind of partitioning to happen during the simulation.
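For reference, here is the template just described, collected in one place. This is only an informal summary of the equations from the talk, with dimensions and noise bounds left implicit.

```latex
% Template lattice IBE and its partitioning-style proof, as described above.
\[
\mathsf{mpk} = (A,\, u,\, \text{aux}), \qquad
\mathsf{sk}_{\mathsf{ID}} = e \ \text{short, s.t. } [\,A \mid H(\mathsf{ID})\,]\cdot e = u \pmod q .
\]
\[
\mathsf{ct} = (c_0,\, c_1), \qquad
c_0 = u^{\top}s + \mathrm{noise} + \lceil q/2 \rceil\, M, \qquad
c_1 = [\,A \mid H(\mathsf{ID})\,]^{\top}s + \mathrm{noise},
\]
\[
\text{Decryption: } c_0 - e^{\top}c_1 \approx \lceil q/2 \rceil\, M .
\]
\[
\text{In the proof: } H(\mathsf{ID}) = A\,R_{\mathsf{ID}} + F(\mathsf{ID})\,G, \qquad
\begin{cases}
F(\mathsf{ID}) \neq 0 & \text{for all key queries (answer using } R_{\mathsf{ID}} \text{ and } G\text{)},\\
F(\mathsf{ID}^{*}) = 0 & \text{for the challenge (embed the LWE instance)}.
\end{cases}
\]
```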
So the important part is: how are we going to define this hash H(ID)? In the seminal works of ABB10 and Boyen '10, the master public key looks like this, where κ is the ID length, so there are κ + 1 matrices, and H(ID) is defined as a linear function of these matrices.

For example, if the ID is 010011, where the ID length κ is 6, then H(ID) is computed by adding up B_2, B_5, and B_6; this set S(ID) is basically an indicator set for the positions where a 1 stands. This works out perfectly. During the simulation, we set B_i as A·R_i + y_i·G, where y_i is some random value we will define later, and by a leftover-hash-lemma type of argument this is distributed like a uniform matrix. As for H(ID), everybody can compute it, but only the simulator, who knows the R's and y's, knows that it has this form, so the simulation works. The problem is that since this is a linear function, the master public key has to be long, because the number of matrices is linear in the ID length.

What Yamada '16 did at Eurocrypt, which is currently the asymptotically most compact lattice-based IBE, was to redefine the way these matrices are combined. They only put about 2√κ matrices in the master public key, and from them they artificially create κ matrices. Since we now only need about 2√κ matrices in the master public key, it is shorter. For the hash of the ID, we compute a function like this involving G^{-1}; it's a little complicated, but just think of G^{-1} as a special matrix operation that lets you multiply matrices that would otherwise not be well-defined dimension-wise. This is great, because we can set the matrices as before, and then H(ID) takes this value, where the partitioning function F(ID) is nonlinear; it is actually a degree-two polynomial now. And since it is a degree-two polynomial, the number of matrices in the master public key is sublinear. However, the downside was that for the security proof to work, the modulus q had to be super-polynomial. Our work improves upon this and makes the modulus polynomial. Let me walk you through what we did.
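As a quick aside, here is a toy illustration, in Python, of the bookkeeping behind the two hashes above. This is not the actual schemes, only the counting: the indicator set of the linear hash, and how roughly 2√κ stored matrices can address κ positions through index pairs, which is where the degree-two partitioning function comes from.

```python
# Toy illustration only (not the actual schemes).
import math

def indicator_set(id_bits):
    """S(ID): 1-indexed positions where the identity bit is 1,
    i.e. H(ID) = B_0 + sum of B_i for i in S(ID) in the linear hash."""
    return [i + 1 for i, b in enumerate(id_bits) if b == 1]

id_bits = [0, 1, 0, 0, 1, 1]             # the example ID 010011 from the talk
kappa = len(id_bits)
print(indicator_set(id_bits))             # -> [2, 5, 6], so B_2, B_5, B_6 are used

# Linear hash (ABB10 / Boyen'10): one matrix per ID bit, plus B_0.
linear_mpk_matrices = kappa + 1

# Nonlinear hash (Yamada'16, roughly): store two groups of about sqrt(kappa)
# matrices and address a position k by a pair (i, j), i.e. a degree-2 product.
s = math.ceil(math.sqrt(kappa))
nonlinear_mpk_matrices = 2 * s

def position_to_pair(k):
    """Map a position k in [0, kappa) to a pair (i, j) in [0, s) x [0, s)."""
    return divmod(k, s)

print(linear_mpk_matrices, nonlinear_mpk_matrices)  # 7 vs 6 here; the gap grows with kappa
print([position_to_pair(k) for k in range(kappa)])
```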
Taking a closer look at Yamada '16, what happens is that during the simulation they set B_0 and the B_i's, these public matrices, as A·R + y·G, where y and the y_i's are randomly chosen elements. The hash of the ID is then this value, where R_ID has this somewhat messy form, and several conditions on the R's and y's need to hold for the security proof to go through. Looking at this a little more carefully, the main obstacle is that the partitioning function F(ID) has this form, and for F(ID) to make a nice partition of the ID space, we need the y's to grow proportionally with the number of secret key queries, which I denote by Q here. However, at the same time, R_ID is the simulator's small trapdoor (and by small I mean, roughly, that every entry of the matrix is small), and looking at its form, for each entry to be small, this y has to be small too.

So y has to be small, and small compared to the modulus q. Putting everything together, y has to grow proportionally with Q, the number of secret key queries, which can be an arbitrary polynomial, so y has to be at least that big; but at the same time, y has to be small compared with the modulus q. So it seems that q has to be super-polynomial for the whole simulation to work.

But taking a closer look, we don't actually need a scalar there. Let me explain an initial idea that does not quite work, but which contains the main idea we use. I said that y has to be bigger than Q, but to be a bit more precise, what I actually mean is that y has to be chosen from a distribution whose entropy grows proportionally with the number of secret key queries. When y was a scalar, we had to pack all of this into one element, so y had to be big, which meant a big modulus q. Now consider a setting where y is not a scalar but a matrix Y. Then this matrix has n² entries, so even if each entry is small, say 0 or 1, we have 2^{n²} possibilities for embedding information in it. So we can pack any polynomial number of secret key queries into these entries while keeping each entry of Y small, and that leads to a small modulus q.

This seems to work perfectly, but the reason the matrix Y does not work is that we lose the commutativity needed for the homomorphic hash computation. Let's look at this: we have two matrices B and B', and while computing H(ID) we implicitly use the homomorphic operation B · G^{-1}(B'), where G^{-1} is that special matrix operation. Expanding, we get A·R·G^{-1}(B') plus Y·B', because the G and G^{-1} cancel out, and Y·B' further expands into Y·A·R' plus Y·Y'·G. This is not good, because for the simulation to work, H(ID) has to be of the form A times something small plus F(ID) times G, the green or blue form on the slide. The A·R·G^{-1}(B') part is fine, since it is A times something; we want that. The Y·Y'·G part is also fine. However, Y·A does not in general equal A·Y, because we don't have commutativity of these elements, so the Y·A·R' term is not of the green form, and the whole thing does not quite work.
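For reference, the failed homomorphic computation described above can be written out as follows, with B = AR + YG and B' = AR' + Y'G as on the slide.

```latex
% Homomorphic product of two "matrix-Y" branches, as in the discussion above:
%   B = A R + Y G,   B' = A R' + Y' G.
\[
B \cdot G^{-1}(B')
 = (A R + Y G)\, G^{-1}(B')
 = A\, R\, G^{-1}(B') + Y B'
 = A\, R\, G^{-1}(B') + Y A R' + Y Y' G .
\]
% The first term has the good form A(\cdot), and Y Y' G has the good form
% (\cdot) G, but Y A R' is neither, since in general Y A \neq A Y.
% With a scalar (or ring element) y instead, Y A R' becomes y A R' = A (y R'),
% and the whole expression collapses to A(\cdot) + y y' G, as required.
```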
The idea that turns out to work is to use a polynomial ring. For those who don't know what a polynomial ring is, it's very simple: this ring R_q is isomorphic to the vector space Z_q^n, so a vector can be represented as a single element of R_q. What we can do now is take the equation from the previous slide, B = A·R + y·G, where y was a scalar, and convert it into ring form: a vector becomes one ring element, so the matrices become vectors over R_q, and y no longer has to be a scalar over Z_q; it can actually be an element of the ring R_q.

The reason this works well is that now we get the commutativity of a and y for free, because we are working in a ring; in R_q, y is just a scalar when viewed as a ring element. And since y can also be viewed as a vector in Z_q^n, we have n entries in which we can pack Q. This lets us make the same argument as when y was a matrix: even though y is a single ring element, it has n coefficients, so there are 2^n possibilities for embedding Q. For any polynomial number of secret key queries, we can embed enough information in this one element y while keeping each coefficient small, and that allows us to use a small, polynomial-size modulus.

This is basically our work, but there are some problems I have glossed over. First, R_q is no longer a field. For those who know the proof, in the case F(ID) ≠ 0, for the SampleRight algorithm to work during the simulation we actually need F(ID) to be an invertible element of R_q, which we no longer get automatically; to overcome this we use a special ring R_q. The other crucial problem is that Yamada '16 used the so-called smudging technique to create the challenge ciphertext and make the simulated world indistinguishable from the real world. The smudging technique necessarily leads to a super-polynomial modulus q, because it essentially means adding a really big Gaussian error to hide a really small Gaussian error, and if we do that naively, we have to use a super-polynomial modulus q. We devise a technique to overcome this problem as well.
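Before moving on to bilinear maps, here is a small toy check, in Python, of the ring idea above. The parameters are purely illustrative and far too small to mean anything cryptographically; the point is only that multiplication in R_q = Z_q[X]/(X^n + 1) commutes and that a single element with 0/1 coefficients already ranges over 2^n values.

```python
import secrets

n, q = 8, 97   # toy parameters; a real scheme would use much larger n and q

def ring_mul(a, b):
    """Multiply two elements of Z_q[X]/(X^n + 1), given as coefficient lists."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:                # X^n = -1, so wrap around with a sign flip
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

a = [secrets.randbelow(q) for _ in range(n)]     # an arbitrary ring element
y = [secrets.randbelow(2) for _ in range(n)]     # a "small" element: 0/1 coefficients
assert ring_mul(a, y) == ring_mul(y, a)          # commutativity comes for free in a ring

# y ranges over 2**n possibilities even though every coefficient is tiny,
# which is the room the simulator uses to handle any polynomial number of
# key queries while keeping the modulus q polynomial.
print(f"{2**n} possible small y's with coefficients in {{0,1}}")
```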
Okay, so finally, I'm going to explain what the bilinear map section is all about. What we wanted is IBEs from search problems on bilinear maps. Dual system encryption is a fascinating technique, but sadly it inherently requires decisional problems, like the SXDH or the DLIN problem. The only known solutions that are secure under the hardness of search problems are based on the Waters IBE or the Boneh-Boyen IBE, combined with a hardcore function. This is good, because these are secure under the computational BDH assumption, so we actually have schemes that can be proven secure under a search problem, and the Waters IBE furthermore has short ciphertexts, which is nice. The thing is, though, we have to have long public parameters to instantiate these IBE schemes.

So let's review what this is. For the Waters IBE plus a hardcore bit function: I guess most of you know the Waters IBE, but let me just walk you through what's happening. In the master public key, GL is the Goldreich-Levin hardcore bit function, and these blue group elements will be used to define the hash of the ID, which I will define later. In the Waters IBE, the secret key is a term like this, where r is some randomness and g^α is the master secret key. For the ciphertext, they bind the ID into the ciphertext via g raised to H(ID), and they have this pairing term times the message. But now we want this to be hard under a search problem, so we use the Goldreich-Levin hardcore bit function and XOR its output with the message bit. Decryption is very simple: as usual, we compute this pairing term from the secret key and the ciphertext, feed it into the GL function, XOR, and recover the message bit.

And how did they define the hash of the ID? It's similar to the first lattice case I explained: H(ID) is defined by a linear function, and that's why it leads to a long public key, because the number of group elements is linear in the ID length. A simple idea is to make this H(ID) nonlinear, and this is the initial step we took, but again it does not quite work; let's see why. With a nonlinear H(ID), we only need about 2√κ elements in the master public key, so we reduce the master public key size, and it seems we are going in the right direction. But in the Waters IBE, everybody has to be able to compute g^{H(ID)}, and we can no longer compute this part efficiently, because it is nonlinear; if we could, we would basically be solving the computational Diffie-Hellman problem, which we cannot do.

So the question is: how should we compute this publicly? What we did was to use the Boneh-Boyen technique during the construction rather than during the security proof. The Boneh-Boyen technique says that instead of computing this very difficult term, we inject some randomness and compute something almost as good as the g^{w_1·w_2} term. At first glance this still seems hard, because we still have that term and it seems we cannot compute it. But let's do a mental experiment and change variables. I said this t is some random element, but we can view it as t̃ minus w_1, where t̃ is a random element. If we view t this way, then the exponent, which is w_1·w_2 + w_2·t, becomes w_2·t̃ once we substitute t = t̃ minus w_1: the degree-two term cancels out. And is this linear in w_1 and w_2? It is easy to see that it is, because the t part is linear in w_1, and the w_2·t̃ term is linear in w_2. So instead of computing the difficult term directly, we compute something like this, where t̃ is a random element chosen by the encryptor. If we choose t̃ at random and compute this, then it is implicitly a value of the desired form.
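To make the change of variables concrete, here is the exponent computation from the mental experiment above written out, where w_1 and w_2 are the two exponents we cannot pair up directly and t̃ is the fresh randomness chosen by the encryptor.

```latex
% Boneh-Boyen style randomization, as described above: instead of g^{w_1 w_2},
% publish g^{w_1 w_2 + w_2 t} for a random t.
\[
\text{Set } t = \tilde{t} - w_1 \ (\tilde{t}\ \text{random}). \qquad
w_1 w_2 + w_2\, t \;=\; w_1 w_2 + w_2(\tilde{t} - w_1) \;=\; w_2\,\tilde{t}.
\]
% The degree-2 term cancels, and g^{w_2 \tilde t} = (g^{w_2})^{\tilde t} is
% publicly computable from g^{w_2} and the encryptor's own \tilde t, while
% implicitly fixing the randomness t = \tilde t - w_1.
```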
Combining everything together, this is our resulting scheme for the bilinear-map-based IBE. The master public key and the hash of the ID are the same as I explained on the previous slides. For the ciphertext, since we cannot compute g^{H(ID)} directly, we inject some randomness using the Boneh-Boyen technique, and because of this extra randomness we get an additional g^t term. And since we changed the ciphertext, we have to change the secret key too: we now have to handle this t·w_2 term, so we add a w_2 element to the secret key so that taking the pairing cancels that part out.

The sad part is that the secret key and the challenge ciphertext are longer now, by a factor on the order of √κ. However, our main motivation, making the master public key short, is achieved, because it now consists of about 2√κ group elements. A simple comparison: compared with Waters '05 plus the hardcore bit function, their master public key was linear in the ID length, while ours is sublinear; however, our ciphertext and secret key are now also of sublinear size, rather than constant. Another small observation is that the assumption we use is a little different: they use the CBDH assumption, while we use a CBDH-exponent type of assumption.

So in summary, what we did was construct two new adaptively secure IBEs. At a very high level, both are based on the partitioning technique, where we divide the ID space with a nonlinear function. The first IBE scheme is built over ideal lattices, and we prove security under a polynomial modulus ring LWE assumption. The IBE from bilinear maps is the first scheme with a sublinear-size master public key from search problems. Thank you for listening.

Any questions or comments?

It's a very simple question, just to be sure: you are using some special types of rings, so does this create any problem regarding secure parameter generation or things like that?

Oh, okay, thank you for the question. Yes, we use special rings, but they do have an average-case to worst-case reduction, and we have that in the appendix, which I guess is the longest part of the whole paper.

In terms of tightness, does this partitioning technique buy you something with respect to the linear one?

Okay, that is a very good question. When we think about the reduction loss, it is quite bad, but it's hard to say whether we gain or lose something, because for the LWE part we don't know the concrete parameters we should use. So I guess that's a really important question that we should think about; it's an open problem, I guess.

Any other questions or comments? Now, let's thank our young, 25-year-old Shuichi. Thank you. So this is also the end of this session. Thank you.