OK, so thank you. So my talk is about constructing lattice-based signatures without trapdoors. I mean, you can construct trapdoors, but you can do things without them as well. So signature schemes are probably one of the most important public key primitives. If we want lattice-based crypto to actually get used at some point, we should really get these right. Encryption schemes, which are, I guess, the other really important public key primitive, I think we already have from lattices. NTRU, from 15 years ago, maybe even more, is really quite good, and now we have some foundations for its hardness. But for signatures I still think there's a lot of work to be done, because we can't quite get them to be short enough to be competitive. This work, I think, gets us quite close to what we need. We're not quite there yet, but we're getting there. So there are two ways to construct signature schemes. One is the hash-and-sign way; that's what you saw using trapdoors when Daniela gave her talk. The other one is the Fiat-Shamir transformation, where you convert an identification scheme into a signature. The nice thing about the conversion from an identification scheme is that no trapdoor is needed, and perhaps not having a trapdoor could save us something, maybe not asymptotically, but in practice. On the hash-and-sign side there's been a fair amount of work. The first one was by Gentry, Peikert, and Vaikuntanathan, really a breakthrough work in 2008, using the trapdoor of Ajtai. Then there were these other works, with the most efficient being the recent one by Micciancio and Peikert, just two talks ago. On the Fiat-Shamir side there's been much less work; I guess maybe it's been seen as less interesting. So I did two things here. I'm not going to put up the full comparisons, but as of now the Fiat-Shamir approach is not as efficient as hash-and-sign theoretically, but in practice I think it's more efficient. And the point of this work is to get it to be even more efficient, and almost implementable. Actually, implementable. All right, so all of lattice-based crypto is really based on the knapsack problem. I think the reason we call it lattice-based crypto is because knapsack sounds like it's from the 80s, and lattice is the more modern term that we use. So the knapsack problem is the following. You're given a matrix A that's uniformly random over Z_q, so all the entries are random. Then you pick some vector s with short coordinates from some distribution — which distribution doesn't really matter for this talk — and you multiply A by s to get t, so t = A·s mod q. In this talk I'm going to use orange for elements with small coefficients and black for everything uniformly random, or big, in Z_q. OK, so the problem is: given A and t, find a small s′, maybe not the same s, because there could be multiple, so that A·s′ = t mod q. This is the knapsack problem. And the hardness of the knapsack problem, a very basic thing, depends on the size of the s you're asked to find. The hardest regime is the size of s where the solution is basically unique. As you allow the coefficients of s to be bigger and bigger, the problem becomes easier; and on the other hand, if the coefficients of s are too small to start with, the problem again becomes easier and easier.
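For concreteness, here is a minimal Python/NumPy sketch of the knapsack instance just described. The dimensions, the modulus, and the coefficient bound are illustrative toy values, not parameters from the talk.

```python
import numpy as np

n, m, q, bound = 8, 16, 257, 1                 # toy parameters (assumptions)
rng = np.random.default_rng()

A = rng.integers(0, q, size=(n, m))            # uniformly random matrix over Z_q ("black")
s = rng.integers(-bound, bound + 1, size=m)    # short secret vector ("orange")
t = A @ s % q                                  # t = A*s mod q

# Knapsack problem: given (A, t), find some short s' with A*s' = t (mod q).
```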
So what you want to do in designing lattice-based constructions is to get the knapsack problem to be as close to this hard regime as possible. The connection to lattice problems is really the connection from the knapsack to average-case problems, which then have connections to worst-case problems, but this is sort of not important anymore for this construction. When the coefficients of s are quite big, meaning there are lots of collisions, so there isn't a unique s with A·s = t, it's called the Short Integer Solution (SIS) problem, and it has connections from worst-case lattice problems. When s is small, on the other hand, this is "LWE" — I put it in quotes because it's not exactly LWE. LWE is stronger than what we need here, and most constructions do not really need the full power of LWE. I think the one exception is the talk you saw previously, and the only other one I can think of is signatures without random oracles. Everything else that's based on LWE, like encryption schemes, doesn't really need the full power of LWE, because you don't need a lot of samples. This is why I think the knapsack problem is the more appropriate thing to say we're basing our hardness on at this point. OK, so the results of this talk are the following. I'll give a construction based on SIS that's over here, and then a construction based on LWE that's over here, based on how we know how to solve knapsack problems. So if you think that LWE is not as hard as SIS, then you would say this one is more interesting. If you really believe in this curve — this is the best cryptanalytic work that's known — then you would say, OK, I don't care what the problems are called, but this one is higher up. So these are the two results of this work, and they obviously extend to rings, Ring-SIS and Ring-LWE. So here is the signature based on SIS; a slight modification of it will be based on LWE. The secret key is going to be a matrix S, and the public key is going to be a random matrix A together with the matrix T = A·S mod q. That's it, quite simple. The signature is going to follow the Fiat-Shamir framework, so if you're familiar with Schnorr signatures, just think of them here and try to map the letters to what you know. What you first do is pick a random y; this is your masking parameter. Then you compute your challenge c, which is the hash of A·y mod q and your message. And then you compute z = S·c + y. The hope here is that y covers up S·c enough that it doesn't reveal anything. The output is (z, c); that's the signature. The verification uses the fact that this matrix-vector multiplication has some homomorphic properties. Let's look at this condition first: it checks that H(A·z − T·c mod q, message) equals c. And A·z − T·c is really A·y, look at that. And you have to check that z is small. This doesn't happen in number-theoretic constructions — you don't care about the size of the discrete log that you find — but here you really do care, so z has to be small.
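To make the mapping to Schnorr concrete, here is a minimal toy sketch of the key generation, signing, and verification just described. Everything in it is an illustrative assumption — the dimensions, the {-1, 0, 1} challenge space, SHA-256 as the hash — and the rejection-sampling step discussed next is still missing, so this is not the actual scheme from the paper.

```python
import hashlib
import numpy as np

n, m, k, q = 8, 16, 4, 257              # toy parameters (assumptions)
Y_BOUND, Z_BOUND = 50, 60               # toy ranges for the mask y and for accepting z
rng = np.random.default_rng()

def hash_to_challenge(u, msg):
    """Hash (A*y mod q, message) to a small challenge vector c in {-1,0,1}^k."""
    h = hashlib.sha256(u.tobytes() + msg).digest()
    return np.array([h[i] % 3 - 1 for i in range(k)])

def keygen():
    A = rng.integers(0, q, size=(n, m))             # random public matrix
    S = rng.integers(-1, 2, size=(m, k))            # small secret matrix
    T = A @ S % q
    return (A, T), S

def sign(A, T, S, msg):
    y = rng.integers(-Y_BOUND, Y_BOUND + 1, size=m) # masking vector
    c = hash_to_challenge(A @ y % q, msg)
    z = S @ c + y                                   # rejection step omitted here
    return z, c

def verify(A, T, msg, z, c):
    small = np.max(np.abs(z)) <= Z_BOUND            # z must be short
    # A*z - T*c = A*(S*c + y) - A*S*c = A*y (mod q), so the hash matches
    return small and np.array_equal(hash_to_challenge((A @ z - T @ c) % q, msg), c)

(A, T), S = keygen()
z, c = sign(A, T, S, b"hello")
assert verify(A, T, b"hello", z, c)
```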
So let's see what we need for security of this type of construction. You want two properties. Let's look at the bottom one first: you want the signature to be independent of the secret key. This is not a precise statement, but you don't want the signature to leak anything about the secret key. That's something you want to have. And then you want this: given the public key, the secret key is not unique. This is something stronger than what you need for Fiat-Shamir in general, but for lattices, I think, you need this. Anyway, here's the security reduction. You're given some A; I'm going to reduce from SIS. So I want to find a small vector s such that A·s = 0 mod q, or A·s = t mod q — it doesn't really matter, the target could be anything. So pick a random S, and send (A, A·S) to the adversary; that's going to be the public key. Then the adversary is going to challenge me with messages, but I can sign completely validly because I know S. And the signature hopefully has the property that the adversary does not know which secret key I know. Then, using the forking lemma — I'm going to do some forking-lemma magic — I end up with A·(z − z′ + S·c′ − S·c) = 0. This is really a solution to SIS. Before, I showed SIS where the target had to be t, but 0 is OK too. So now the point is that all of these guys are orange, meaning small, so this is a valid solution to SIS — unless the thing inside is somehow 0. This is why it's important for the adversary to not really know S: if he doesn't know S, he cannot force this to be 0 for all possible S's. So here's the idea of the security reduction: at the end you extract this from an adversary that breaks your scheme, and it's a solution to SIS. We want (z, c) to be independent of S so that this is not 0, and we want z to be small so that the problem is hard, because the larger you make it, the easier SIS becomes. So now, how do you pick this random y, and how do you satisfy these two properties that we want? You could make y uniformly random mod q, and that would hide S·c completely. The problem is that then z will be too big, and the solution to SIS is not going to be interesting; actually, the signature will be easy to break. The other option is to make y small. But if you make y too small, it may leak something about S·c when you compute S·c + y, because you're not really reducing mod q anymore — you're just adding y to the vector S·c — and then z will not be independent of S. So this is a problem. So what you do is rejection sampling: make y small, but only output (z, c) if z meets a certain criterion. We're going to force a distribution on z that's independent of S·c. So here's the rejection sampling from 2009 — this was from my paper in 2009. Let's pretend the range of every coefficient of S·c is pretty small. I'm going to drown out this range by picking y from a slightly bigger range. But now look at y + S·c: if S·c is here, then y + S·c would be distributed somewhere here; if S·c is here, then y + S·c would be over here. And if I ever output a z that's over here, the adversary will know — oh, S·c was actually on this side, so I know something about S. So the simple solution is: you only output values of z that are possible for any value of S·c. And this works — it actually hides S if you output (z, c) only when z is in this region.
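Continuing the toy sketch from above (it reuses q and hash_to_challenge), here is roughly what that 2009-style rejection step looks like: y is drawn from a box wider than the range of S·c, and (z, c) is released only when every coefficient of z lies in the sub-range reachable for every possible value of S·c. The bounds are again illustrative.

```python
def sign_with_box_rejection(A, T, S, msg, sc_bound, y_bound):
    """Toy 'Fiat-Shamir with aborts' signing with uniform (box) rejection sampling."""
    while True:
        y = rng.integers(-y_bound, y_bound + 1, size=S.shape[0])   # y from a wider box
        c = hash_to_challenge(A @ y % q, msg)
        z = S @ c + y
        # Accept only if z lands in the region reachable for *any* value of S*c,
        # so the released z reveals nothing about which side S*c was on.
        if np.max(np.abs(z)) <= y_bound - sc_bound:
            return z, c
        # otherwise reject and restart with a fresh y
```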
So the question now is, what's the probability that you actually output the signature? Because if the rejection fails, you have to start all over. Well, say the probability that a single coefficient of z is in this range is p. You want p^m — because z is an m-dimensional vector — to be a constant, because you don't want to reject too many times; you want the signing to finish. So you need p to be about 1 − 1/m. The point is that the coefficients of S·c must be about m times smaller than the coefficients of y; the noise that you drown S·c out with must be about m times bigger. So what you get is the following: the coefficients of S·c are small, order 1; the coefficients of y have to be m times bigger, so order m; and so the norm of z, which is about the norm of y, is order m^1.5 — that's the norm of a vector whose coefficients all have size m. OK, so the question is, can we do better? Because the smaller z is, the harder the problem is. And in this work, yes, you can get the norm of z down to order m. The idea, which I'm not really going to explain too much, is a different rejection sampling. I tried to draw a picture, but the result looks wrong, so I'm just going to state it quickly. The previous rejection sampling constructed a uniform distribution in a box — or a ball, it doesn't matter. This new rejection sampling constructs a normal distribution, a Gaussian. I just want to point out, for people who read the lattice literature, that this Gaussian seems to be quite different from the Gaussian that's used for lattices; I don't think it has anything to do with why you need Gaussians for lattices in general. The normal distribution just seems to be useful everywhere. Anyway, the m-dimensional normal distribution is something like this — whatever this is, it doesn't matter — and the discrete normal distribution is the probability of getting an x conditioned on it being an integer. Again, you can kind of ignore these two slides; they're just technical. So the idea is the following. In the different rejection sampling, instead of picking a random y that's small, you pick a random y from some normal distribution. Then you know that S·c + y has that same normal distribution, but shifted by S·c. And now the rejection sampling is: you output (z, c) with some probability. This is rejection sampling: if your target distribution is this one, then you output your sample with probability this, divided by the distribution you actually have, scaled by some constant M. And hopefully this constant is not too big, because the probability that you accept will be basically 1/M. So you want this constant to be small, and you want this to work. Honestly, I don't have intuition for why Gaussians work better here, but when you work out the math, they do. Anyway, what you do is pick the standard deviation to be about the square root of m, and so you get the norm of z to be order m. So now it's better than before: instead of m^1.5, it's m. This saves a factor of root m, which is a lot theoretically. In practice it doesn't save too much, because you have Gaussians everywhere and you have constants. But I think the bigger savings in practice, if you care about practical signatures, is switching from SIS to LWE — switching from high-density knapsacks to low-density knapsacks.
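Under the same toy setup, here is a rough sketch of the Gaussian rejection step described a moment ago: y is sampled from a Gaussian of standard deviation sigma, and (z, c) is released with probability D(z) / (M · D_{S·c}(z)), so the accepted z follows the centered Gaussian, independent of S·c. The constant M and sigma are placeholders, and a rounded continuous Gaussian stands in for the discrete Gaussian used in the paper.

```python
def sign_with_gaussian_rejection(A, T, S, msg, sigma, M):
    """Toy signing with Gaussian rejection sampling (sigma on the order of sqrt(m))."""
    while True:
        y = np.round(rng.normal(0.0, sigma, size=S.shape[0])).astype(int)
        c = hash_to_challenge(A @ y % q, msg)
        sc = S @ c
        z = sc + y
        # Ratio of the centered Gaussian to the Gaussian shifted by S*c, at z:
        #   D(z) / D_{S*c}(z) = exp((||z - S*c||^2 - ||z||^2) / (2 * sigma^2))
        ratio = np.exp((np.dot(z - sc, z - sc) - np.dot(z, z)) / (2.0 * sigma ** 2))
        if rng.random() < min(1.0, ratio / M):
            return z, c          # accepted: z is distributed as the centered Gaussian
        # otherwise reject and resample y
```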
And so this is the slide from before, with the requirements I needed for the security reduction. Remember, I wanted the signature to be independent of the secret key, and, given the public key, the secret key is not unique. But what if we replace this second requirement with: given the public key, it's only computationally indistinguishable whether the secret key is unique? So now the secret key could be unique, but nobody can tell whether it is. This is the other knapsack case. So now the security intuition is the following. We have a hybrid game: this is your real signature scheme, but you change it to a different scheme — you change S. You increase the size of S, and you give out the public key (A, T = A·S mod q). So this is a completely invalid public key, but it looks fine to the adversary. All you have to hope for now is that with this invalid public key and invalid secret key, you can still sign. Well, actually, you cannot sign — even if you know this invalid secret key, you cannot sign. So you have to program the random oracle. But if you program the random oracle, then you can trick the adversary into thinking that you're signing correctly. And here the secret key is not unique, because you raised the size of S. So now, the way you do signatures: you can't do this anymore, but you know this distribution — you know the distribution of the signature is independent of the secret key, so you don't really need the secret key. So what you do is pick c according to this, pick z according to this — pick the joint distribution correctly, so that it's equivalent — and then, to do the rejecting correctly, with some probability you actually output (z, c), and you program the random oracle to answer consistently. So this is your simulation of signatures. And the security reduction is now exactly the same: given an A, you pick the random invalid S, and you sign by programming the random oracle. At the end, the adversary either says, ah, you have an invalid secret key, in which case we solve the knapsack problem, or he actually gives us a solution to SIS. So remember, this is the hardness of the knapsack problem from a few slides before, and here's the basic idea of the signature's hardness. Based on SIS — remember, this part was the SIS regime — the hardness of finding the secret key was quite high. But the hardness of forging signatures was lower, because the length of the signature vector was bigger than the secret key. It was bigger by a factor of root n; this is what we showed before: based on rejection sampling, you need this gap of root n. The signature must be bigger than the secret key in order to hide the secret key. So what's the hardness of the signature? Well, it's here — I don't care that you can't find the secret key, you can still forge. Based on LWE, you can take smaller keys, which are over here, and the hardness of finding a smaller key is here. You still need this gap of root n for the length of the signature vector, so it ends up here — it's on roughly the same line, based on what we know about breaking knapsacks. So really the hardness of the signature is over here, and there's really a reduction from this guy to this guy, so it's really based on this problem.
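For concreteness, here is a minimal sketch (same toy setup as before) of the simulated signing just described: the simulator never touches a secret key, it samples z from the public output distribution and c at random, and then programs the random oracle so that H(A·z − T·c mod q, message) = c. This is a proof device, not something you would deploy; the oracle is modeled as a dictionary, the distributions are placeholders, and the rejection step is omitted.

```python
random_oracle = {}   # programmable random oracle: hash-input bytes -> challenge c

def simulated_sign(A, T, msg, sigma):
    """Simulate a signature without any secret key by programming the random oracle."""
    z = np.round(rng.normal(0.0, sigma, size=A.shape[1])).astype(int)  # target distribution of z
    c = rng.integers(-1, 2, size=T.shape[1])                           # random challenge
    key = ((A @ z - T @ c) % q).tobytes() + msg
    if key in random_oracle:      # collision with an earlier query: negligible in the proof
        return None
    # Program H on this input to return c, so verification of (z, c) accepts;
    # the adversary's hash queries are answered from this same table.
    random_oracle[key] = c
    return z, c
```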
So this is why I think it makes sense to base signatures on these low-density knapsacks, or LWE. And here are some basic parameters, just to show that the LWE part really matters. Let's say you base it on SIS. To get about 100-bit security, based on the cryptanalysis work of Gama and Nguyen, and then Chen and Nguyen, the secret key is about 12,000 bits, the public key is about the same, and the signature size is about 140,000 bits, using rings. Now, if you're willing to switch to LWE, the secret key size goes down by a lot, to about 2,000 bits, the public key stays the same, and the signature really decreases, to about 17,000 bits. So once you're here, maybe you don't need to do so much theoretical work; now you can really tinker — not so much with the parameters, but with some hacks to try to lower things further. So this is work we did with Tim Güneysu and Thomas Pöppelmann. We got it down to 9,000 bits, it was implemented on FPGAs, and it's more efficient than a lot of the public-key signatures. The signature size is still big, 9,000 bits, but I think with more tinkering you can get it down even further, maybe to six thousand. I think once it's around 5,000, you're quite good — good enough for practice. So that's it. Thanks.

We have a little time for questions. Well, that's an interesting question — OK, if you didn't hear the question: can you remove random oracles and still get efficiency? I think for lattices this would be a really interesting problem, because in number-theoretic constructions you can almost get the same efficiency, maybe twice as inefficient. For lattices it looks really, really difficult. I don't think there's been any work that even came close to getting parameters of even this size. The signatures of Boyen, I think, are the most efficient ones without random oracles, and they're much, much worse. So at this point you really have to ask: do you try to get something efficient with random oracles, or do you really care that a random oracle is in there? But this is a good question — to get something without random oracles that's almost as efficient.

There's somebody trying to ask a question. His mic's not working. Hello? Oh, OK, right. So obviously with your LWE construction you get this nice equivalence in hardness level. Do you have any intuition or any idea whether you can get it closer to the peak, or really at the peak — any sort of idea how you could do that?

Sorry, what was it? Closer to what?

The difficulty — with your LWE construction you're somewhere close to the peak, but is there any way you could get really at the peak? Do you see any way possible of doing that?

I mean, it's all about narrowing this gap that you need between the size of the secret key and the signature. Before the gap was m, now the gap is root m. It was surprising to me that you could get a gap of less than m, but I don't know. This is really what the question is. And theoretically, this now matches the trapdoor construction, so there seems to be something stopping both of them from getting closer to the peak. Of course, I can't say that it's impossible. But that's the right question to ask: can you get closer to the peak?

So let's thank Vadim again.