cryptography and why we use it, how it sort of got started, why we started using it. So there are these two main worst-case problems — problems defined for any instance: the Shortest Independent Vectors Problem, SIVP, where you basically find a bunch of short vectors in a lattice, and the Bounded Distance Decoding problem, BDD. Those are the two main, let's say, worst-case problems in lattice cryptography. And the nice results were these: Ajtai defined the SIS problem, the Short Integer Solution problem, in '96, and he showed that if you can solve this average-case problem, then you can actually solve SIVP. In terms of efficiency this wasn't really any good at that point, but it really got the ball rolling and got us to our current state, which is quite good. And in '05, Regev defined the Learning With Errors problem and showed that it's as hard as the BDD problem in the worst case. Actually, that reduction you can sort of teach in a class — it takes, I don't know, an hour — and it's quite simple, at least the ideas are simple; this one is actually much more complicated. And the interesting thing is that from the SIS problem you can get things like one-way functions and collision-resistant hash functions, so all the stuff that lives in Minicrypt. Whereas Learning With Errors is what you really need for the more advanced stuff: for example public-key encryption, FHE, identity-based encryption, and, interestingly, digital signatures on both sides — actually more efficient digital signatures on this side. For digital signatures you don't even need SIS, you can just use any one-way function, but that will be even less efficient. So it's: less efficient is just one-way functions, more efficient is SIS, and even more efficient is Learning With Errors. I'm mentioning digital signatures here because that's the topic of the talk. All right, so what is the point of this? Why are these reductions interesting? Because they give us some confidence that the design of SIS and LWE is good. There are some parameters such that SIS is not just some random problem: if you can solve SIS for those parameters, you can solve SIVP. We don't know exactly which parameters, so setting parameters for SIS does not come from those reductions. But at least we know there are some parameters that will work, and this is really what keeps us working with SIS and LWE: we know at least the design is good, if we believe that SIVP and BDD are hard in the worst case. But there is a source of inefficiency for SIS. If you haven't seen SIS before, it's basically this: you're given a matrix and you try to find a vector with small coefficients so that the matrix times the vector is zero. The way SIS is used is essentially as a hash function: it's hard to find a second preimage, and finding a second preimage is basically the same as finding a collision.
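Just to make that hash-function view concrete, here is a minimal sketch with toy parameters — the sizes n, m, q below are illustrative only, nothing like real security parameters:

```python
# Minimal sketch of the SIS-based hash function f_A(z) = A*z mod q, with toy
# parameters.  The function compresses (2^m inputs, q^n < 2^m outputs), so
# collisions exist, and any collision z1 != z2 yields the short nonzero kernel
# vector z1 - z2, i.e. an SIS solution.
import numpy as np

n, m, q = 4, 32, 17                  # rows, columns, modulus (toy sizes)
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))  # the big public random matrix

def hash_sis(z):
    """Hash a short vector z (0/1 coefficients here) to A*z mod q."""
    return A @ z % q

def gives_sis_solution(z1, z2):
    """A second preimage / collision is exactly a short solution to A*z = 0 mod q."""
    return (not np.array_equal(z1, z2)) and np.array_equal(hash_sis(z1), hash_sis(z2))

z1 = rng.integers(0, 2, size=m)
print("hash of z1:", hash_sis(z1))
```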
So the problem with SIS is that it has this random matrix, which is part of what defines the function, and it's big. This was always the problem with lattice cryptography for the first maybe 10, 15 years. Most lattice cryptography — not NTRU, NTRU was fine in this respect — had large storage requirements. In terms of time it was always fine, but these matrices were quite big: the matrix is n by m, n is maybe 1,000, m is 2,000, and now you're going into megabytes. Okay, so what people said is, okay, let's switch to polynomials. Switching to polynomials — and if you haven't seen this before, I'll explain on the next slide — means making this matrix not completely random but structured. So for example I take this first column, and the next column is going to be a rotation, some sort of function of that column; in this case it's a rotation with a negation on top. This is actually multiplication modulo some ring, but we'll get to that in a bit.

Okay, so the nice thing about putting structure in this A is that you now need a lot less storage, and the product can be computed faster as well. The reason is that polynomial multiplication is basically matrix-vector multiplication. Here's a basic example. Say I have two polynomials A and B, just polynomials over the integers. I can write A times B by expanding the coefficients of A: it's A0 times B, plus A1 times Bx, plus A2 times Bx squared, and so on. And you can write that as a matrix-vector multiplication: the coefficients of A go here as a row vector, the coefficients of B go here as the first column, then Bx is just the coefficients of B shifted, Bx squared shifted twice, Bx cubed like this, with zeros filled in; so it's a product of a vector by a matrix. Okay, so that's polynomial multiplication over Z[x]. Now, if we're working over Z[x] mod f(x), which is what we usually do when we work with polynomials, then this matrix doesn't have that staircase form; instead, Bx mod f has the same length as B, because you multiply by x and then reduce mod f, so the number of coefficients always stays the degree of f. So this is why polynomial multiplication is the same as matrix-vector multiplication — and of course this is for row vectors, but you can do it for column vectors, which is what I do later. So basically what we had before: this matrix-vector multiplication is really polynomial multiplications and additions — this block times this block is one multiplication, this block times this block is another multiplication, and then you add them together. All right, so this is what we've been working with.
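A small sketch of that correspondence, using the popular choice f(x) = x^n + 1 as the example ring; the column-building step below is exactly the "rotation with a negation on top" from the slide (toy size, coefficient vectors stored low-degree-first):

```python
# Sketch: polynomial multiplication as matrix-vector multiplication,
# first over Z[x] (plain convolution), then reduced mod f(x) = x^n + 1,
# where column i of the matrix is x^i * b(x) mod f -- rotation with a negation.
import numpy as np

n = 4
a = np.array([1, 2, 3, 4])             # a(x) = 1 + 2x + 3x^2 + 4x^3
b = np.array([5, 6, 7, 8])

# Over Z[x]: a*b is just the convolution of the coefficient vectors.
prod_zx = np.convolve(a, b)

def rot_matrix_mod_xn_plus_1(b):
    """Columns are b(x)*x^i mod (x^n + 1): shift down, wrap around with a sign flip."""
    n = len(b)
    M = np.zeros((n, n), dtype=int)
    col = b.copy()
    for i in range(n):
        M[:, i] = col
        col = np.concatenate(([-col[-1]], col[:-1]))   # multiply by x mod x^n + 1
    return M

# a*b mod (x^n + 1) two ways: reduce the convolution, or do the matrix-vector product.
reduced = prod_zx[:n].copy()
reduced[: len(prod_zx) - n] -= prod_zx[n:]             # use x^n = -1
assert np.array_equal(rot_matrix_mod_xn_plus_1(b) @ a, reduced)
```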
And so you can define the Ring-SIS problem, for example. That problem is: given k random polynomials in some ring — let's say we use x^n + 1 as our f(x), which is a popular ring to use, though it doesn't matter for this talk — find polynomials with small coefficients so that the sum of the products is zero. That's the Ring-SIS problem, and it was proven to have some hardness: we showed that if you can solve this SIS over some ring, then you can find short vectors in any lattice that's an ideal of this ring. The important thing to notice here is that this polynomial f comes in both the average-case problem and the problem that we're hoping is hard in the worst case. And LWE, for example, has the same thing. LWE is just SIS if you look at it the other way. This is the usual LWE instance: you have the random matrix A times something, plus something, and it gives you this; and you can convert it to polynomial multiplication if you don't want this matrix to be completely random. Then you can define the Ring-LWE problem in the same way. So here's Ring-LWE: given a bunch of polynomials a_i and a_i·s + e_i, where the e_i are errors, find s — that's the search problem, the same thing as LWE. And the decision problem is basically: given these guys, the a_i and b_i, decide whether the b_i are actually Ring-LWE outputs or completely random. And it was also proved that if you can solve Ring-LWE for a particular f, then you can find short vectors in any ideal of the ring defined by that same f. So this is what lattice cryptography over polynomial rings looks like: if you can solve Ring-SIS, then you can solve SVP over ideals of Z[x] mod f(x); if you can solve Ring-LWE, there's a quantum reduction to SVP over ideals of Z[x] mod f(x). So again, the important thing is that this f(x) is the same in the average-case problem and the worst-case problem.
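As an illustration of the definitions so far, here is a minimal sketch of how Ring-LWE samples are formed in R_q = Z_q[x]/(x^n + 1); the sizes and the {-1, 0, 1} secret and error distributions are toy stand-ins, not the actual distributions from the reductions:

```python
# Toy Ring-LWE samples in R_q = Z_q[x]/(x^n + 1), coefficients low-degree-first.
import numpy as np

n, q, k = 8, 97, 3
rng = np.random.default_rng(1)

def mul_ring(a, b):
    """Multiply two polynomials in Z_q[x]/(x^n + 1)."""
    c = np.convolve(a, b)
    out = c[:n].copy()
    out[: len(c) - n] -= c[n:]        # reduce using x^n = -1
    return out % q

# Search Ring-LWE: given pairs (a_i, b_i = a_i*s + e_i), recover the secret s.
s = rng.integers(-1, 2, size=n)       # small secret
samples = []
for _ in range(k):
    a = rng.integers(0, q, size=n)    # uniform in R_q
    e = rng.integers(-1, 2, size=n)   # small error
    samples.append((a, (mul_ring(a, s) + e) % q))

# Decision Ring-LWE: given the pairs (a_i, b_i), decide whether the b_i were
# produced as above or sampled uniformly at random from R_q.
```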
So this leads to a natural question: are all rings equally hard? Does it matter which f(x) you put in? There is a reduction, but does the f(x) matter? I don't know. There have been some results, actually recent results — this is one example — that show you can do something when f is x^n + 1 that you cannot do for regular lattices: there's a polynomial-time quantum algorithm for approximating the shortest vector in ideals of this ring to within a sub-exponential factor — not a great approximation, but sub-exponential. But the complexity of Ring-LWE is still unchanged by this; it attacks the underlying worst-case assumption, and as far as the average-case problem goes, we don't know. So it's very interesting, but we're still not sure what it means. The question is: is this just an easier ring, this polynomial, or is it just a polynomial for which it's easier to find an attack? Maybe if we think harder, or they think harder, they will be able to do this for other polynomials — maybe, I don't know. So we actually don't know, but what would be much more preferable is a scheme based on the hardness of lattice problems in every ring.

So here's what I mean, and here's the result of this paper. What we can show is that you can define an average-case problem — it sits over the polynomial ring, without reduction modulo f(x) — and if you can break that, you can actually solve the shortest vector problem over ideals of Z[x] mod f(x), for any f(x). And furthermore this problem is actually useful, because you can construct all of these things here. So this is a bit more of what we want, because you don't really have to commit yourself to knowing which worst-case problem is hard; it's based on every worst-case problem. All right, so now I'm going to give you an amazing — okay, to be honest, maybe this sounds interesting the way I set it up, but this is not an amazing result. What would be an amazing result is this open problem: get some problem — set up like the LWE instance — where a random instance of the problem doesn't depend on f, SVP over ideals in these rings would be hard for every ring, and from this problem you can build all of this stuff. Of course you can do this from regular LWE, but I want it to be more efficient than LWE. If you can solve this, that would be quite amazing, because then you would get all of this stuff based on the hardness of problems in any ring. And this — even more importantly than explaining my result — is what I want to direct you to, because I think it's a really great open problem. I mean, I tried, but I have no idea; for everything I try, I can't even just fail to prove it, I can actually find an attack. So is this possible? Maybe. It's hard to imagine why this one should be possible and that one not, but on the other hand, I have no idea. So if you want to work on something, I think this is quite a nice thing to try.

Okay, so let me explain this part, the result of this paper, and maybe explain why it doesn't just naturally carry over to that. All right, so I'm going to define this other problem over just the regular polynomial ring Z[x], and I call it Z^{<n}[x]-SIS_d. First of all, the set Z^{<n}[x] is all polynomials in Z[x] with degree less than n; that's what the notation means. And this new SIS problem is: given k random polynomials in this set, find polynomials with small coefficients in the same set but with smaller degree — less than d, possibly smaller — so that the sum of the products is zero. So it's exactly the same as the regular SIS modulo some f, except there's no reduction mod f. And it's easy to see that there is a reduction: if you can solve this problem, then you can solve the f-SIS problem — I'll show you the reduction now.
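Written out, before going to the reduction, the problem is roughly this; the modulus q and the norm bound β are the usual SIS-style parameters and are left abstract here, since the slide doesn't pin them down:

```latex
% Z^{<n}[x]-SIS_d, stated informally.  Z^{<n}[x] denotes integer polynomials of
% degree less than n; q and the bound \beta are standard SIS-style parameters.
\textbf{Given:}\quad a_1,\dots,a_k \ \text{uniform in}\ \mathbb{Z}_q^{<n}[x].\\
\textbf{Find:}\quad z_1,\dots,z_k \in \mathbb{Z}^{<d}[x],\ \text{not all zero},\
  \ \|z_i\|_\infty \le \beta,\ \text{such that}
\[
  \sum_{i=1}^{k} a_i\, z_i \equiv 0 \pmod{q},
\]
where the sum is computed in $\mathbb{Z}[x]$, i.e.\ with no reduction modulo any $f(x)$.
```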
So let's say we're given an instance of f-SIS for some polynomial f; this just consists of k polynomials. What I'm first going to do is pick random polynomials r_i, because if the degree of f is small, I want to make it bigger: these a_i are random polynomials of small degree, and I want to turn them into random polynomials of a big degree. So I'm going to use these r_i to pad out the polynomials. You can actually skip this step if you assume that d equals n equals the degree of f. Okay, then I set b_i = a_i + r_i·f; I want these b_i to be the input to my oracle that solves the new problem. These are now uniformly random, because a_i is uniformly random in the ring mod f, r_i is uniformly random in something else, so the whole thing is uniformly random. Then I give this to the solver, and he gives me a solution so that the sum of the products equals zero — but not mod f — and then I just reduce mod f. Then this is zero mod f, and if f has the right expansion factor — whatever that is, it doesn't matter here — it's actually going to be a valid solution mod f. And since the degree of the z_i is smaller than the degree of f, these guys stay non-zero after the reduction. So it's actually quite a simple reduction: you almost just take the input you're given, hand it to the new oracle, and reduce his result mod f. Now, the reason this works — and this is the main observation behind the reduction — is that the input of f-SIS, the input we get here, has nothing to do with f. It's just a bunch of random polynomials of a particular degree. So the reduction only cares about the degree of f; the specific f is actually not important. And just to see, in pictures, the comparison between the old f-SIS function and the new one: here is f-SIS — for example, the sum of the products of three polynomials, you multiply this by something and it equals this — and here's the new one, the multiplication over Z[x]. It basically just doubles the range size, or even less. So it's not as compact, but it's not so bad: it's only about twice as long, for example. So it's not a terrible thing.
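A small sketch of this reduction, with f(x) = x^4 + 1 as a stand-in example and the Z[x]-SIS solver left as a purely hypothetical oracle (described only in a comment), looks roughly like this:

```python
# Sketch of the reduction: pad an f-SIS instance a_i into a Z[x]-SIS instance
# b_i = a_i + r_i*f, and reduce any solution of the latter mod f.
import numpy as np

q = 97
f = np.array([1, 0, 0, 0, 1], dtype=np.int64)   # f(x) = x^4 + 1, low-degree-first
n = len(f) - 1                                   # degree of f
k = 3
rng = np.random.default_rng(2)

def pad(p, length):
    return np.concatenate((p, np.zeros(length - len(p), dtype=np.int64)))

def poly_mod(a, f):
    """Remainder of a(x) modulo a monic f(x), coefficients stored low-degree-first."""
    a = a.astype(np.int64).copy()
    deg_f = len(f) - 1
    for i in range(len(a) - 1, deg_f - 1, -1):
        if a[i]:
            a[i - deg_f : i + 1] -= a[i] * f
    return a[:deg_f]

# An f-SIS instance: k uniformly random polynomials of degree < deg(f), mod q.
a_list = [rng.integers(0, q, size=n) for _ in range(k)]

# Turn it into a Z[x]-SIS instance of larger degree: b_i = a_i + r_i * f for
# random r_i (this padding can be skipped when the degree bound already equals deg f).
r_list = [rng.integers(0, q, size=n) for _ in range(k)]
b_list = [(pad(a, 2 * n) + np.convolve(r, f)) % q for a, r in zip(a_list, r_list)]

# Sanity check: each b_i is congruent to a_i modulo f (and mod q).
assert all(np.array_equal(poly_mod(b, f) % q, a) for a, b in zip(a_list, b_list))

# Now imagine a (hypothetical, assumed) Z[x]-SIS solver that, given the b_i,
# returns short z_i of degree < deg(f), not all zero, with sum_i b_i*z_i == 0
# over Z_q[x].  Reducing that relation mod f kills the r_i*f terms, so
# sum_i a_i*z_i == 0 in Z_q[x]/(f); and since deg(z_i) < deg(f), the z_i are
# unchanged by the reduction and stay non-zero.  The final check would be:
def is_f_sis_solution(z_list):
    """Check sum_i a_i * z_i == 0 in Z_q[x]/(f(x))."""
    acc = np.zeros(2 * n, dtype=np.int64)
    for a, z in zip(a_list, z_list):
        prod = np.convolve(a, z)
        acc[: len(prod)] += prod
    return not np.any(poly_mod(acc, f) % q)
```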
So now the question is: what can you do with this Z[x]-SIS thing? If you can solve this SIS problem, you can find a small solution so that this equals zero, and the previous reduction shows that then you can solve Ring-SIS for any f. Now, can you build something based on it? And actually, yes, you can. For example, the most interesting application of SIS is maybe a signature scheme — it's about the most advanced thing you can build with it; there are maybe slightly different other things, but I think it's okay to stick with signatures. And the construction is quite similar. It's the same sort of Okamoto-style construction of digital signature schemes that we've had before from lattices. The secret key is a bunch of vectors with small degree and small coefficients. The public key is random vectors of a slightly larger degree in Z_p[x], plus this other part t, the sum of the products of those vectors with the secret ones. The important thing, because we're using an Okamoto-style proof, is that there exists another tuple s_1' through s_k' whose sum of products with the a_i also equals t, and basically we will extract that from the adversary. So the signature looks quite similar to before — if you're familiar with the previous lattice signatures, it's fairly similar; you just have to keep track of the degrees to make sure everything works out. You pick a bunch of y_i — these are your masking parameters, drawn according to some Gaussian distribution. You compute — if you're familiar with discrete log signatures, this also looks the same — the hash of the sum of the products of the a_i and the y_i; that gives you the challenge c. Then you set your z_i equal to y_i + c·s_i. Then — this is the usual problem with lattices — the distribution of the z_i depends on the s_i, so you have to do some rejection sampling and maybe restart the procedure. And then you output the z_i, together with c, basically. So there's nothing particularly innovative here compared to the previous signature schemes. The verification is, again, simple: you just check that all the norms are small and that the verification equation is satisfied.

Okay. So now the security proof — and there's one important line here — is the same as for Okamoto-style digital signatures. Given the a_i for which you want to solve this new SIS problem, you create a valid t, the other part of the public key. Now with high probability there exists another tuple s_i', which you don't know, but which you will find. You use the s_i to sign when the adversary asks you to sign, and then from the adversary's signature you can recover basically a_1 times something small, plus, etc., plus a_k times something small, equals zero. And because the adversary does not know which s_i you used, with high probability this is a non-zero solution. There's an extra line on the slide — yeah, okay, that's what I just said: with non-negligible probability, the coefficients are non-zero. So that's essentially the whole proof. But again, the main thing we used is that there exist two solutions: there's s, the one we know, and the other one. This is what I mean by Okamoto-style rather than Schnorr-style digital signatures, where there's only one solution.

Okay, so now if you set the parameters: we have this better hardness assumption, but I would actually doubt that anyone would use this, because, for example, for regular lattice signatures you have public keys between 1 and 2 kilobytes, secret keys around 1 kilobyte, signature sizes between 1 and 2 kilobytes, and for roughly the same security parameters you get something that's off by a factor of 10. It's kind of hard to argue that in practice somebody would use something that's off by a factor of 10 to gain security that may not even be necessary. So what's the problem — why is this so much less efficient? It's not because we don't do the mod f; that only expands things by a factor of 2, so maybe that's okay. The main problem is that those schemes are not just based on Ring-SIS, they're also based on Ring-LWE. When you create the secret key there, in some sense there is a unique secret key for every public key. So you don't rely on this collision property; you rely on an indistinguishability property instead. I won't explain why that's the better thing to do, but for lattices it is the right thing to do, and it's what really results in the smaller public key sizes. So if you were able to get something that makes this look random without reducing mod f, then you would actually have very comparable parameters based on a hard problem in every ring.
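Going back to the construction described above, a very rough sketch of the signing and verification flow looks like the following; the parameters, the hash-to-challenge map, and the rejection step are simplistic placeholders, the degree bookkeeping is only approximated, and the Okamoto-style two-preimage property used in the proof is not shown at all:

```python
# Toy sketch of a Fiat-Shamir-with-aborts style signature over Z[x] (no mod f).
import hashlib
import numpy as np

n, q, k = 16, 12289, 4       # toy sizes; the real scheme uses much larger degrees
d = n // 2                   # the secret polynomials have smaller degree
rng = np.random.default_rng(3)

def pad(p, length):
    return np.concatenate((p, np.zeros(length - len(p), dtype=np.int64)))

def mul(a, b):
    """Multiply over Z[x], coefficients reduced mod q (no reduction mod any f)."""
    return np.convolve(a, b) % q

def H(w, msg):
    """Hash to a small challenge polynomial c with d coefficients in {-1, 0, 1}."""
    h = hashlib.sha256(w.tobytes() + msg).digest()
    return np.array([byte % 3 - 1 for byte in h[:d]], dtype=np.int64)

# KeyGen: small secrets s_i of small degree, random a_i, and t = sum_i a_i * s_i.
a_list = [rng.integers(0, q, size=n) for _ in range(k)]
s_list = [rng.integers(-1, 2, size=d) for _ in range(k)]
t = sum(mul(a, s) for a, s in zip(a_list, s_list)) % q

def sign(msg):
    while True:
        # Masking polynomials y_i from a (rounded) Gaussian.
        y_list = [np.rint(rng.normal(0, 50, size=n)).astype(np.int64) for _ in range(k)]
        w = sum(mul(a, y) for a, y in zip(a_list, y_list)) % q
        c = H(w, msg)
        z_list = [y + pad(np.convolve(c, s), n) for y, s in zip(y_list, s_list)]
        # Stand-in for rejection sampling: restart unless the z_i are small,
        # so their distribution does not leak the s_i (real schemes are careful here).
        if all(np.abs(z).max() < 200 for z in z_list):
            return c, z_list

def verify(msg, c, z_list):
    small = all(np.abs(z).max() < 200 for z in z_list)
    # sum_i a_i*z_i - c*t equals the w used during signing (all over Z_q[x]).
    w = (sum(mul(a, z) for a, z in zip(a_list, z_list))
         - pad(np.convolve(c, t), 2 * n - 1)) % q
    return small and np.array_equal(H(w, msg), c)

c, z_list = sign(b"hello")
assert verify(b"hello", c, z_list)
```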
So to reiterate — if you remember one thing from this talk, it's: solve this problem. Figure out if there's some way to define a problem and make it as hard as SVP in lattices over any ring, and then build things on that problem. And of course regular LWE is one such problem, but we want it to be more efficient than LWE. I think if you can do this, that would be quite a nice result, and then maybe people will say, okay, you're paying maybe a factor of two in the size of the parameters, but now you don't care about which ring to use. So maybe that could be convincing. This one — even I'm not convinced; I'm definitely not convinced. It's too much of a price to pay for not being sure: there are really no attacks yet that would make us want to pay this price. But I really hope this leads to somebody getting ideas to solve this problem. That would be really great. Thank you. Any questions?

So I wonder, what is the value of this parameter k? On your first slide, k seems to be — the little k below — it seems to be equal to two.

Yeah, so for these guys, for the thing we do currently, it is equal to two, but that's because we're basically going through Ring-LWE. For Ring-SIS you can set it — I think when I set these parameters it was five, maybe four. But I didn't try really hard to optimize this, because I knew it was going to be much worse. So k is like five or something.

Why is k five?

Because you need there to exist more than one s_i, s_i' — the function has to compress, so there are two possible inputs for every output. And if k is two, then I would have to make my inputs much larger. So by having more a_i, I can have smaller inputs, and that's better; you have to find the trade-off. Why five and not four, that I don't know — it may be four or six — but why five and not two, that I can sort of tell you.

Okay, thanks.

Well, there are probably more questions, but in the interest of coffee, I think we'll wrap up now. So thanks again, and thanks to all the speakers in the session.