So quickly, who am I? I've got a lot of slides, so I won't spend a lot of time on this one, but I've been working with NSS, which is our security library, for a long time. I've been working on this code while working for a number of different companies, and I'm currently on the Red Hat crypto team, which is responsible for pretty much all the crypto in Red Hat. Our goal for today is to get a basic understanding of how lattices work. We're going to look at some of the pitfalls you can have with lattices, and get some sort of understanding of sizes. So really the whole goal is to get people as comfortable with lattices, understanding how they work, as we are today with RSA, Diffie-Hellman, or ECC. I'm not going to dive into all the proofs that are associated with lattices. I'm not going to dive into math deeper than what we need to understand lattices, which is basic vectors, matrices, and the simple finite fields we're familiar with from RSA and ECC. And you won't actually get enough detail to implement a lattice scheme, but the goal is that you'll understand how lattices work. So how are we going to do this? First, I'll tell you what lattices are from about a 10,000-foot level. Then I'm going to talk about some of the problems we use, starting with the short integer solution, and some crypto based on the short integer solution. Then I'm going to get into learning with errors, which is really the part I think you'll want to know, because learning with errors is how most of our lattice crypto works. This is the part, if you don't get anything else from the talk, how learning with errors works and how we can use it for crypto will be the important part. Then, as we have time, we'll talk about three of the NIST lattice-based proposals. I won't talk about NTRU, because it's a lot more difficult to talk about and understand. Then I'll talk about some currently known gotchas, and then we'll take some questions.
Okay, so what are lattices? Lattices are basically a bunch of points in space formed by vectors: we take a set of vectors, and then we create all the points which are integer linear combinations of those vectors. So we have a1 and a2. If you take a1 here and double it, you'll get this point here, and if you take a1, double it, and add a2, you'll get that point. And so you can see that all of these points can be created by multiplying an integer constant by each of these vectors and adding them up. Now, this is a two-dimensional example. Typically we think about three dimensions when we think about real-world lattices like crystals and things like that. In crypto we're going to be talking about very large lattices, with hundreds of dimensions, and the bases can be hundreds of vectors. So, some properties these lattices have. First, more than one set of vectors can form the same lattice. In the previous picture, if I had 2·a1 as my basis vector, then multiplying it by one half would give me the same point. In general, many different sets of vectors in a given lattice can form exactly the same lattice. There are a number of hard problems related to lattices. The shortest vector problem: find the shortest vector in the lattice. The closest vector problem: given an arbitrary point, find the lattice vector that's closest to it. And the SIVP problem, the shortest independent vectors problem: find the basis, the set of vectors that forms the lattice, with the shortest set of vectors. Now we have proofs, and you can kind of get an idea just by looking at these, that if you can solve one of these problems you can solve all of them, and they're believed to be hard. One of the things we like to do in security is prove that one hard problem is as hard as another hard problem; that way we know that if people have been pounding on all sides of these problems, this is probably a hard problem.
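The "many bases, one lattice" idea above can be sketched in a few lines. This is a toy two-dimensional example with made-up basis vectors, chosen only for illustration:

```python
# Sketch: enumerating points of a 2-D lattice as integer linear combinations
# of its basis vectors. The basis values are hypothetical, for illustration.

def lattice_points(basis, coeff_range):
    """All points c1*a1 + c2*a2 with c1, c2 drawn from coeff_range."""
    (a1x, a1y), (a2x, a2y) = basis
    points = set()
    for c1 in coeff_range:
        for c2 in coeff_range:
            points.add((c1 * a1x + c2 * a2x, c1 * a1y + c2 * a2y))
    return points

basis_a = [(2, 1), (1, 3)]   # one basis for the lattice
basis_b = [(2, 1), (3, 4)]   # (3,4) = (2,1) + (1,3): different basis, same lattice
```

Because the second basis is obtained from the first by an invertible integer change of basis, both generate the same set of points, which is the property the talk relies on when a hard-to-use basis is published and an easy short basis is kept secret.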
That's the sort of proof we have for RSA: we don't know that RSA isn't breakable, but we do know that if you can factor your modulus you can find the group size, and if you can find the group size you can factor the modulus, so we know those are equivalent problems. Okay. When we represent the lattice, we can represent it either as a set of vectors, or we can take these vectors, stack them together, and form a matrix. The vectors become the columns of the matrix, and the values of the vectors become the rows. And that's the normal way we will use a lattice in cryptography. Some other things about lattices. Our lattices are usually over finite fields; that's why there's a discrete number of points. There are some differences between the lattice fields and the ones we use in our normal cryptography. In Diffie-Hellman we use a single very large field, and that's the size of whatever our key is going to be. In ECC we use curve points, but the order of the field is much smaller, maybe 256 bits, or around 64 bytes for the larger curves. Lattices use very large vectors and very large matrices, but the field elements themselves are very small; we're talking two to two-and-a-half-byte fields. Okay. So, the short integer solution problem. In the short integer solution we take a bunch of vectors, and we want to find the constants we multiply those vectors by to come up with zero. The rule for the short integer solution is that the values we multiply by are in some small set of numbers, in this case zero or plus or minus one. And we want a non-trivial solution: all zeros obviously works, so at least one of the multipliers must be nonzero. This problem turns out to be a hard problem. Again, we can take our set of vectors and stack them together to create a matrix A, and take the set of values we're trying to multiply by and form a vector z.
This can be written as z times A equals zero. And as can be seen, this is actually a lattice: if you take any z, instead of just the restricted set, you can see that the solutions form a lattice. And there are proofs that solving the shortest vector problem in this lattice also yields a solution to the short integer solution problem; you can probably divine that just by looking at it, but there's an actual full proof. So that means we know that the short integer solution is one of our hard problems, or else the other problems are not hard anymore. So how can we use this to create a signature scheme? We take a secret key T, which is a trapdoor that's used to find short vectors. T would be, say, a bunch of short vectors that create a lattice, where the values in those vectors are small integers. Then we generate A from it; remember, we can define the same lattice with multiple different sets of vectors, so we create larger vectors that span the same lattice as T, and we publish that as A, and A is our public key. In signing, we can use this set of short vectors. Remember, finding the set of shortest vectors is a hard problem, but we started out knowing both the set of shortest vectors T and A. We can use that to calculate a z which is sufficiently short to satisfy A times z equals the hash of the message. To verify, we calculate A times z, make sure it matches the hash, and we check that z is sufficiently short. Now, there are some caveats in doing this. As we sign more and more, z can provide information for the attacker to figure out T if we're not careful. There are multiple z's we can pick, and we have to pick a z from a distribution that does not tell the attacker anything about T. Big hand wave, but there be dragons here, and this is one of the things that could trip up a lattice-based system. Okay, so now we're into learning with errors.
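As a sketch, the verification step just described, checking that A times z matches the hash and that z is short, might look like the following. All names, shapes, and parameters here are illustrative, not taken from any real scheme:

```python
# Hypothetical sketch of SIS-style signature verification:
# accept iff A·z ≡ msg_hash (mod q) and every entry of z is small.
def verify(A, z, msg_hash, q, bound):
    """A: n×m matrix, z: length-m vector, msg_hash: length-n vector mod q."""
    n, m = len(A), len(z)
    eq_ok = all(sum(A[i][j] * z[j] for j in range(m)) % q == msg_hash[i] % q
                for i in range(n))
    short_ok = all(abs(c) <= bound for c in z)   # the "sufficiently short" check
    return eq_ok and short_ok
```

Note that both checks matter: without the shortness check, anyone could solve the linear system for some (long) z, so the hardness comes entirely from demanding a short solution.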
The idea comes from learning systems, where you may have noise on the input you're using to train your system. Unfortunately for learning systems, it's hard to figure out what the original signal is even if you have only a little bit of noise. So if you have some vector s, you take an arbitrary number of values b and vectors a: you multiply s times a and add some error to it, and that produces b. The trick is: can you find the original s given a bunch of a's and b's? That's called the search problem. And the other problem is: can you take a bunch of a's and b's and determine that there actually is an s back there that will generate them? That's called the decision problem. Both of these are considered hard problems. Now, like SIS, we can change these vectors into matrices. I can take a set of a_i vectors, stack them together, and they become a matrix A. I take a set of b_i scalars, and they become a vector b. So I multiply my vector s times A, add an error vector, and get b. This is the base learning-with-errors equation: given b and A, with even just a small e, it's very hard to come up with s. In fact, we now have proofs that solving this is just as hard as solving the SIS problem I presented, which in turn is just as hard as solving the base lattice problems. So we now know the learning-with-errors problem is as hard as the base lattice problems, and it's hard both for classical computers and for quantum computers. Okay, so how can we use this to do crypto? I'm going to spend a little bit more time on this slide, because this is the basis for all the LWE-based algorithms we're going to look at, and this is where you'll get the intuitive understanding of how LWE works. So we're given A, a trusted uniform matrix.
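The base equation b = s·A + e can be written out with toy numbers. These parameters are far too small to be secure; they only show the shape of an LWE instance:

```python
import random

# Toy LWE instance: b = s·A + e (mod q). Parameters are illustrative only.
q, n, m = 97, 4, 8     # tiny modulus, secret length, number of samples

random.seed(1)
s = [random.randrange(q) for _ in range(n)]                      # secret vector
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]  # uniform matrix
e = [random.choice([-1, 0, 1]) for _ in range(m)]                # small error

# b_j = sum_i s_i * A[i][j] + e_j  (mod q)
b = [(sum(s[i] * A[i][j] for i in range(n)) + e[j]) % q for j in range(m)]
```

The search problem is recovering s from (A, b); without e this would be trivial Gaussian elimination, and it's exactly the small added error that makes it hard.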
If A is generated by some sort of structure, like we did with SIS where we had a T and generated A from it, the scheme is not secure, because the person who has T can solve the SIS problem in that matrix and break the scheme. So it's important that A is generated in some way where we know nobody knows a corresponding T. Okay, so Alice generates her r vector, which plays the role of the s vector in our previous slide. Then she generates an error vector e_A. She calculates u = r·A + e_A and sends u to Bob; she can now throw away e_A. Bob generates s and e_B, and he calculates v = A·s + e_B. Notice that the s is on the other side, because we're doing matrix math; matrices are not commutative like we're used to, so we have to deal with that in all of our crypto. He adds that error vector and sends v to Alice. Alice calculates k = r·v, and Bob calculates k = u·s, which is a dot product. Both of these values are approximately equal to r·A·s, and where they differ is in the terms where the error vectors multiply the other party's values. If we make the error vectors small and r and s small, then these two approximately equal values can be reconciled using some error-correction scheme. This is the key point: all LWE schemes have some way of correcting these error terms, because unlike our original schemes, the two sides don't compute exactly the same number. But this is what makes lattices safe against quantum computers, while still recreating the sort of Diffie-Hellman-like exchange we're used to. Okay. I'm going to go over the next slides quickly; there's a fair amount of math in the next couple, and the details aren't important. I'm just going to give you a feel for some of the things we do in real-life lattice operations. So, there are a couple of issues we have with classic lattice learning with errors.
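The approximate key agreement described above can be sketched end to end. This is a toy with made-up parameters and no reconciliation step; a real scheme adds error correction on top:

```python
import random

# Toy sketch of the approximate LWE key agreement: both sides end up
# near r·A·s. Parameters are illustrative only, not secure.
q, n = 2048, 8
random.seed(2)

def small_vec(n):
    """Vector with small coefficients, standing in for errors/secrets."""
    return [random.choice([-1, 0, 1]) for _ in range(n)]

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

# Alice: u = r·A + e_a   (row vector times matrix)
r, e_a = small_vec(n), small_vec(n)
u = [(sum(r[i] * A[i][j] for i in range(n)) + e_a[j]) % q for j in range(n)]

# Bob: v = A·s + e_b     (matrix times column vector, s on the other side)
s, e_b = small_vec(n), small_vec(n)
v = [(sum(A[i][j] * s[j] for j in range(n)) + e_b[i]) % q for i in range(n)]

# Alice: k = r·v = r·A·s + r·e_b     Bob: k = u·s = r·A·s + e_a·s
k_alice = sum(r[i] * v[i] for i in range(n)) % q
k_bob   = sum(u[j] * s[j] for j in range(n)) % q
```

The two keys differ only by r·e_b minus e_a·s, which is small because every factor is small, so rounding away the low-order part of each value would make them agree, which is exactly the error-correction step the talk hand-waves at.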
One is that the matrix sizes tend to be large, and those dot products and vector-times-matrix multiplies are really expensive to do. So we'd like to come up with solutions that don't have such large keys. One solution is, instead of doing a vector times a matrix, we use polynomial math. You might recall that in the binary forms of ECC, we have this method where we turn numbers into polynomials. A polynomial is just, you know, a0 plus a1 times x plus a2 times x squared plus a3 times x cubed, and so on. So we can map a number into an actual polynomial, and then we can do polynomial math on it: we can multiply polynomials, we can add them together. And we do this in what's called a ring: we just divide out by something. The difference between rings and fields is a math distinction; they look kind of like fields to you, but they're not full fields. So our base equation now becomes this: b, which is a polynomial, equals a, which is a polynomial, times s, which is a polynomial, plus e, our error, which is also a polynomial. We just multiply s times a using a polynomial multiply, add an error to it, and we get b. And we do all of this modulo some ring polynomial: when we multiply two polynomials together we get higher-order polynomials, so we do a division that reduces the product back down to the same size. And it turns out to be very quick to do polynomial multiplies if you've first applied a Fourier-style transform to these polynomials. So most systems do some sort of number-theoretic transform, and we can choose q, the field that all the actual coefficient math happens in, so that the transform is fast; I'm going to just hand-wave at that. Okay, so here's an example.
I won't spend a lot of time on this slide, but this is how you do a polynomial multiply. We're doing everything in a field; in this case my q equals five. This q is actually unusually small; it's usually around two to the twelve. So we have a and b. We transform a and b into polynomials. We do a standard multiply on those polynomials, then we divide by our modulus polynomial here, which reduces the result back down. And all our math happens mod q: you'll notice, for example, a coefficient of one where you'd expect six; that's because two times three is six, and six mod five is one. So everything still happens mod whatever the field is. Okay. Another optimization: when we use rings to implement LWE, we call that ring-LWE. There is a mapping between ring-LWE and SIS, but it's only for a special case, so our normal proof doesn't apply; there's not a full proof that the ring-LWE problem is as hard as SIS. So there's a little bit of iffiness in using ring-LWE. Module-LWE is like ring-LWE, except we go back to using a vector, which is now a vector of polynomials, times a matrix, which is a matrix of polynomials. A becomes a matrix again, but this time it's a matrix of polynomials, and with this we can get back to something that looks like our full proof. Another optimization: we can generate the matrix A from a seed. We create a seed value and generate A using a standard PRF, basically a seeded random number generator. When we do that, we can get smaller keys, because we can just send the seed out, and each party calculates the matrix themselves and uses it in their key exchange. Okay. The other refinement is learning with rounding. Instead of creating an error vector, we round off the coefficients at the bottom of the modulus and send the rounded coefficients in the protocol. That allows us to shrink the size of the key by a few bits.
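The ring multiply on the slide can be sketched in code. This is schoolbook multiplication in the negacyclic ring Z_q[x]/(x^n + 1), the kind of ring Kyber-style schemes use; the tiny q here matches the slide's toy example, not a real parameter:

```python
# Sketch of polynomial multiplication in Z_q[x]/(x^n + 1).
# Real implementations use a number-theoretic transform instead of
# this O(n^2) schoolbook loop.
def ring_mul(a, b, q):
    """Multiply coefficient lists a, b (degree < n) mod x^n + 1 and mod q."""
    n = len(a)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                # x^n ≡ -1 in this ring, so overflow terms wrap with a sign flip
                res[k - n] = (res[k - n] - ai * bj) % q
    return res
```

The wrap-with-sign-flip in the else branch is exactly the "divide by the modulus polynomial" step from the slide, folded into the multiplication loop.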
And that's a few bits per entry, which shrinks it by a fair amount. It also helps us handle the error case, because when we finish our calculation we round off the bottom again, and that rounds out any errors in our equation. Okay. So, let's see how many of these I can get through. We're going to talk about the actual NIST algorithms that use these things. The first lattice scheme, one of the NIST finalists, is called CRYSTALS-Kyber. It's module-LWE, and q is 3329 for all of its parameter sets. That's a little under two to the twelve, and it has the form thirteen times two to the eighth plus one, which means we can do a very fast number-theoretic transform with it. The order of the polynomial, meaning how many coefficients the polynomial has, is 256, so it goes up to x to the 256; there are 256 values in our polynomial. Eta is how big our errors are, and you can see that our error is very small; we don't have to add very much to be secure. Eta is two, so our error values are going to be vectors of coefficients between minus two and two. k depends on the security level; it's the only thing in these parameters that depends on the security level, so you can scale this up simply by scaling up k, which is the dimension of the matrix. k tends to be something like two, three, or four, so our vectors and matrices are only a few polynomials on a side. The polynomials are large, but the matrix is only two by two, three by three, or four by four. Okay, so how does it work? Well, we generate a random A. We generate a random s, and then we generate an e with coefficients bounded by eta. We multiply A times s, add our e in, and get our t. This looks just like our standard base lattice equation. Our public key becomes t and A, and our private key becomes s. Okay, so the encrypt operation: we generate a random r and two random e's.
Then we generate u, just like the Bob half of our Diffie-Hellman-style exchange. Then we do something called decompress, which shifts the message up out of the bits we're going to truncate. We're going to do something like LWR: we're going to truncate the last several bits in our calculations, and that will be our error correction. To make sure we don't lose any of the message, decompress moves the message up so that the bottom bits are zero. We add that to an error term which fits in those bottom bits and is going to disappear. Then we calculate t transpose times r; t was our public key, the transpose is what makes everything work out as far as the vector math goes, and r is what we generated. So t-transpose times r is that k value we would have calculated in the Diffie-Hellman-like case. Now we send u and we send v, and we compress them, which drops the bits out. Compress and decompress basically truncate and un-truncate bits. So there's a little bit of LWR in here as well as your standard LWE going on. Okay, so decrypt: we simply decompress the two values that came to us. We multiply our private key by u, and that gives us that k value from our exchange. We subtract it from v, and that gives us our message, except it's got some errors in the bottom bits; then we compress to shift those errors out and get our original message. Okay, and here are the parameter sizes. Delta is how often this fails: we're shifting these errors into the bottom bits, and sometimes the errors become big enough to creep into the main message, so we get the wrong message and the operation fails. We want to make sure these failure rates are low enough, because too many failures can be used to attack a lattice-based system. The other thing to see here is the size of our keys: our public key is around 800 bytes, not bits.
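The compress/decompress pair, dropping low bits with rounding and shifting back up, can be sketched like this. The rounding formulas follow the Kyber style, but treat the specific d here as illustrative:

```python
# Sketch of Kyber-style coefficient compression: keep only the top d bits
# of a value mod q (with rounding), then expand back. The round trip loses
# only a small amount, which is what gets absorbed as "error" in the scheme.
def compress(x, q, d):
    """Map x in [0, q) to d bits, rounding to the nearest step."""
    return ((x * (1 << d) + q // 2) // q) % (1 << d)

def decompress(y, q, d):
    """Map a d-bit value back to [0, q), rounding."""
    return (y * q + (1 << (d - 1))) >> d
```

The point of the failure-rate discussion above is visible here: the round trip is only approximately the identity, and parameters must keep that approximation error smaller than the margin the message was shifted up by.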
So we're looking at roughly two to eight times the size of RSA keys here. Lattice keys are larger than any of our others, but they're still small enough to work in our protocols; you can see that sending 800 bytes or 1500 bytes in most of our protocols is something they can handle. Okay, we'll go over Saber very quickly. Saber is very much like the previous one. q in this case is a power of two instead of a prime, so just like we have prime versions of ECC and binary versions of ECC, we have prime-modulus lattices and power-of-two-modulus lattices. p and t are basically rounding factors that we use, and we choose them so that q is much larger than p, which is much larger than t. q and the polynomial order are the same for all the defined security levels, just like we were talking about with Kyber; the order is the same as Kyber, 256. l is the length of the vectors, and the matrix is l by l, just like k in the Kyber case, and those sizes are pretty much the same. We have two constants defined here, plus a constant vector; these are basically rounding constants, so we add them to our operations before we truncate, and that causes a rounding. Okay. In this case we generate a seed, called seed A, and then A is generated from that seed; this is the key compression I talked about before. We generate a random r, and we use r to generate s from a binomial distribution; this handles the distribution issue I mentioned before, and I'm going to hand-wave at that at this point. So this is our basic equation: A times s plus h, which functions as the error, equals b, and our public key becomes seed A and b, and our secret key is s. For encryption, we basically do the same thing we did before: we create b prime, the same as u in the previous case. We calculate v, which is the same as k in the Diffie-Hellman-like case, and we shift the message up.
This is just the math for shifting up: decompressing m by our rounding factor and reducing mod p in one step. And we shift down by p minus t; that's the compress step. So it's very much like the previous one, except it depends only on rounding. Decrypt is pretty much that same operation. And there are the sizes. Now, how are the questions going, moderator? Are there questions out there? Because I have about four minutes left. Okay, hearing nothing from the moderator, I'm going to just keep plowing on. I'm not going to go over the details of Dilithium because we're running out of time, but I will go over how it basically works. We do the same type of key generation; this looks very familiar. s also has to be below our eta value, so the secret values are also very small in our signing operations here. The way we sign: we generate a random y with small coefficients. We multiply A times y, and then we take only the high bits; this parameter simply says how many high bits to take, and that's w. We create a hash value to find c; c has to have very few set bits, and if it doesn't, we go back and find a new y. We now calculate z, which is the y value we chose plus the hash value c times the private key. We check that z is small enough to meet our requirements, and that the low bits of A·y minus c·e are small enough to make sure things don't overflow. If either of those checks fails, we set z to invalid and loop through this again. Once we've got a z and a c that meet all these requirements, we return those as our signature. To verify, we calculate A times z minus c times t, the t from the public key, take the high bits, and that's w prime; that should equal w.
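The "take only the high bits" step above can be sketched as a decomposition r = hi·alpha + lo with the low part centered around zero. The alpha value here is a toy choice; the real scheme fixes specific q and alpha constants:

```python
# Sketch of Dilithium-style high/low-bits decomposition:
# split r into hi*alpha + lo with |lo| <= alpha/2.
# Parameters are illustrative, not the real constants.
def decompose(r, alpha, q):
    """Return (hi, lo) with r ≡ hi*alpha + lo (mod q) and lo centered."""
    r = r % q
    lo = r % alpha
    if lo > alpha // 2:
        lo -= alpha          # center the low part around zero
    hi = (r - lo) // alpha
    return hi, lo
```

Signing publishes only the high part w, and the overflow checks in the signing loop exist precisely so that the small low part never leaks or flips a high bit during verification.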
And we recalculate c based on the hash of the message with our w prime; that should match, and z should be smaller than our threshold. That's why it works. And there are the optimizations and sizes. Okay, so the gotchas. We want to avoid decryption errors, because they act as an oracle; think RSA Bleichenbacher, but for lattices. We need to make sure A is generated properly; think PQG generation, or the bad primes from Diffie-Hellman or DSA. When using SIS, we need to make sure z is drawn from a proper random distribution; think properly choosing k when we're doing DSA and ECDSA signatures. We need to avoid side channels, just like we had to do before. And then we need to keep our eyes out for new attacks. [Moderator:] Sorry to interrupt you, we have one question in the chat; I will read it. Is lattice-based crypto considered resistant against quantum computers? If so, then why exactly are the computations so much harder than in ECDH? Yes, the whole purpose of lattices is to be secure against quantum computers. That's why we're interested in them, and that's why we're willing to take on these complexities over ECC, and larger key sizes than ECC and RSA. The appeal of lattices is that they're among the smallest and fastest of the quantum-secure algorithms we have.