Hello, everybody. Let's jump right into it. This is work revolving around how efficiently we might find shortest vectors in lattices when we consider a particular class of algorithms called sieves, and it's with these wonderful co-authors you just heard about. So, what is a lattice sieve? It's a particular kind of algorithm that takes a description of a lattice in the form of a basis, and you go and make a cup of tea, and when you come back, perhaps it's found for you a selection of short vectors in that lattice. By short vectors we mean vectors that are non-zero and short in the Euclidean norm, and beyond liking these things for the sheer interest of it, these vectors are a critical quantity when it comes to lattice cryptanalysis, and we'd like to be able to say reasonably concrete things about how easy, or indeed hard, they are to find in given instances. To say something at a very high level about these algorithms: in a dimension-d lattice they require exponential time and memory. They're one of a number of algorithm classes now which have this exponential time and memory, but really the other competitive branch of the algorithmic field is the enumeration algorithms, which have super-exponential time complexity but effectively no memory complexity. A question that's interested the community for a while now is where these asymptotics kick in, or, to say it more explicitly, for given implementations, where does the better time complexity of sieving mean it starts performing faster than enumeration? And so I was worried that I'd be the third person to define a lattice to you, but I think they were all defined in different ways. For me, I like to ask the audience for some linearly independent vectors in space, and then I take their integer span, and the set of points this gives me, the set Λ, becomes my lattice.
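The integer-span definition can be made concrete in a few lines. Here is a toy sketch in Python; the particular basis B and the unimodular transform U are illustrative choices of mine, not values from the talk:

```python
import numpy as np

# A toy 2-dimensional lattice: the set of all integer combinations of the
# basis vectors.  Rows of B are the basis vectors b1, b2.
B = np.array([[2, 0],
              [1, 2]])

def lattice_points(B, bound):
    """Enumerate x1*b1 + x2*b2 for integer coefficients |xi| <= bound."""
    return [x1 * B[0] + x2 * B[1]
            for x1 in range(-bound, bound + 1)
            for x2 in range(-bound, bound + 1)]

# Any unimodular transform of the basis (an integer matrix with
# determinant +/-1) spans exactly the same lattice, typically as a
# "worse" basis of longer, less orthogonal vectors:
U = np.array([[3, 1],
              [2, 1]])  # det = 1
B_bad = U @ B
```

Both `B` and `B_bad` generate the same point set; only the quality of the description differs, which is the good-basis/bad-basis distinction that comes up next.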
And indeed any other set of linearly independent vectors that gives me the same span is also a basis for this lattice. So this red pair is a basis for this lattice, and this cyan pair is also a basis. And from the cryptanalytic perspective at least, we have a notion of good and bad bases, right? A good basis is one formed of short vectors which are reasonably close to orthogonal, so maybe the red basis is good here and the cyan basis is not. And they're good because certain problems we're interested in we can solve quite exactly with a good basis and we cannot with a bad basis, or at least not without doing a significant quantity of work. One such problem, mentioned in the previous talk, is the closest vector problem. If we take some point in ambient space, so not one of the dots on the screen, and ask what the closest lattice vector to it is, then with the red basis you can solve this reasonably exactly. And how might we go from bad bases to good bases? Well, if good bases contain short vectors, perhaps one way of doing this would be to find short vectors. To concretise this problem, we have on screen a definition of the exact shortest vector problem: I'm asking for a vector that's not the zero point, and I want it to be shorter than, or as short as, every other non-zero vector in the lattice. I've alluded to one way of doing this earlier: a class of algorithms which we call lattice sieves. Modulo a very large quantity of algorithmic design differences and elaborate and elegant ways of reducing time complexity, at some point such an algorithm has to have sampled, from the basis, exponentially many vectors in your lattice, and this is where the space complexity comes from. And it simply asks the question: given a pair, if I take their sum or their difference, do I receive something shorter than one of the summands?
And if I do, I replace it, and once I've done this for all of the pairs, perhaps I have a database of shorter vectors, but it's a smaller database, and I iterate this process however many times, and if I started with enough vectors and I waited long enough, then I've solved the shortest vector problem. OK, a lot of technical detail is of course missing. But actually, the more astute of you in the audience will perhaps notice that we solved the shortest vector problem and many other short vectors appeared as well. And these vectors are much shorter than one could, for example, sample using a bad basis. And herein lies a problem, because traditional lattice cryptanalysis, and here we're thinking of algorithms like the BKZ algorithm and so on, really only has a way of using one, or some small constant number, of short vectors at a time. And yet this particular class of algorithms outputs an exponential number of short vectors, and if we don't somehow come up with a way of using them, perhaps we've wasted some work. So we really want to be dirty hippies and be as ecological as possible about this. Two previous works, which I won't speak about quite so much right now, both somehow sieve in a smaller-dimensional lattice and use this work either to solve the shortest vector problem exactly or to seed future sieving operations and therefore make them quicker. And this is the intuition that we want to take from this. We want some general framework of recycling information, in the form of short vectors, between related lattices, and we really need to go beyond this idea that a sieve is a box: we press a button and then we take a short vector. It's really a box on wheels which carries a sack of short vectors on its back. And implicitly what we're doing is therefore treating one of these sieves as a sort of stateful machine to which we issue a series of instructions.
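The core replace-if-shorter step just described can be sketched in a few lines. This is a toy illustration of the idea only, nothing like the optimized near-neighbour-search sieves used in practice; all names are mine:

```python
import itertools
import numpy as np

def sieve_pass(db):
    """One pass of the basic sieving step: for every pair of database
    vectors, check whether their sum or difference is shorter than the
    longer of the two, and if so replace that longer vector."""
    db = [np.array(v, dtype=float) for v in db]
    for i, j in itertools.combinations(range(len(db)), 2):
        for cand in (db[i] + db[j], db[i] - db[j]):
            # k indexes the longer of the pair, the one we would replace
            k = i if np.linalg.norm(db[i]) >= np.linalg.norm(db[j]) else j
            if 0 < np.linalg.norm(cand) < np.linalg.norm(db[k]):
                db[k] = cand
    return db

def sieve(db, passes=10):
    """Iterate the pass: with enough starting vectors and enough passes,
    ever shorter (non-zero) vectors accumulate in the database."""
    for _ in range(passes):
        db = sieve_pass(db)
    return db
```

By construction a pass never makes the longest database vector longer, which is the sense in which iterating it drives the whole database towards short vectors.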
And that's one of the contributions of the work that I intend to interest you in today. We come up with a framework for treating sieves as stateful machines and describe some instructions that we think are useful. And then we implement an open-source version of this, which we encourage you to take a look at. Well, really this is a lie, because these two steps happened the other way round. Then, given those things in hand, we come up with a variety of strategies for cryptanalytic tasks that somebody might be interested in. And having done that, we're able to show some interesting practical things. For example, we show that in reasonably low dimensions the asymptote of the time complexity of sieving kicks in, and we're able to solve the exact variant of the shortest vector problem quicker than enumeration, which is something a little new. In our paper, and I won't speak about it so much today, we're able to show that solving the shortest vector problem is something we can amortise within a very useful lattice reduction algorithm, and we go on to break a number of records. And so may I introduce to you today, ladies and gentlemen, the General Sieve Kernel or, as we call her, G6K (pronounced "Jessica"). I'm going to try to give you an intuition for how one might use these instructions to build a sequence of ever more subtle and useful sieves. To do this, I'm going to use a consistent grammar and a little key, in case you forget or, more likely, I do. This first operation, reset, simply empties our database of any vectors that might be in there, and the two subscripts tell us just exactly how much of a basis of a lattice we're going to consider at once. So in this simple sieve, we're considering every basis vector. And then S, unsurprisingly, stands for sieve: this is the procedure where you sample exponentially many vectors and then do pairwise comparisons, iterated some number of times.
And then this I instruction with a subscript stands for insert, and when it has a subscript, it means insert in a given position. So in particular, why might we insert? OK, insertion is maybe a bit of a misnomer, but why might we insert? We've done all this work to find a short vector and now we want to make a good basis, so we want to somehow put this short vector into our basis. And this is really the simplest sieve I think I can possibly come up with. Then a very elegant sieve, due to Laarhoven and Mariano at PQCrypto last year, is this idea of sieving in a sublattice and using that work to seed subsequent sieving operations, making them faster. And so immediately there's a difference, right? We start and somehow we're only considering one lattice basis vector. And we have this new instruction, extend right, which means: OK, sieve, next time you do a sieving operation, include also the next basis element to the right. And then sieve. This operation will give us short vectors, but only made of combinations of b1 and b2. And we extend right and we sieve again. But this sieving operation is not starting from scratch: if you imagine the picture from one of the first slides, we're effectively starting that sieving procedure with some short vectors already in hand. And you can maybe convince yourself that a shorter vector is more likely to shorten other vectors. And you continue this procedure throughout your entire basis. And yes, you still have to sieve with respect to all of the basis vectors, but by the time you do this last sieving operation, well, you seed it with a very large database, including short vectors made from all but the very final basis vector. And while this doesn't change the asymptote, it gives some significant practical speed-ups. Oh, and then, of course, we want to actually do something with this work, so we insert at the end.
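As a rough illustration, the progressive sieve just described can be written as a trace of instructions issued to a stateful machine. Everything below is a hypothetical sketch of the grammar; the class, method names, and the insertion position are my own, not the real kernel's API:

```python
class ToySieveMachine:
    """A toy stateful machine speaking the talk's instruction grammar.
    It only records the trace of instructions over a context [l, r)
    of a d-dimensional basis."""
    def __init__(self, d):
        self.d, self.l, self.r = d, 0, 0
        self.trace = []
    def reset(self, l, r):
        self.l, self.r = l, r          # empty the database, fix the context
        self.trace.append(f"reset_{l},{r}")
    def er(self):                      # extend right: grow context by one
        self.r += 1
        self.trace.append("er")
    def s(self):                       # sieve the current context
        self.trace.append("s")
    def i(self, pos):                  # insert at a given position
        self.trace.append(f"i_{pos}")

def progressive_sieve(d):
    """Sieve over the first basis vector alone, then repeatedly extend
    right and re-sieve, seeding each sieve with the previous database;
    finally insert the short vector found."""
    m = ToySieveMachine(d)
    m.reset(0, 1)
    m.s()
    while m.r < d:
        m.er()
        m.s()
    m.i(0)
    return m.trace
```

The point of the trace is the shape: each sieve after the first starts from a database of already-short vectors rather than from scratch.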
And then we might move on to the dimensions for free sieve from Ducas at Eurocrypt last year. What this manages to do is actually never sieve with respect to some of the basis vectors. So we have this new parameter f, which stands for free, or "for free", as you please. And we actually start our sieving procedure some way into the basis. As before, we do this progressive extend right and so on. And what's on screen is not exactly accurate, because really these basis vectors in red that I'm considering to sieve over are projected orthogonally to those that come before. This is idiomatic when you think of using sieves in things like BKZ. So, in particular, the lattice vectors I have in my database right now are not actually lattice vectors of the lattice I'm considering: I need to somehow lift them, or undo these projections, and we have a mechanism based on Babai's nearest plane for doing this. What we do when we've come to the end of our sieving procedure is simply take our full database and lift it, now in blue, I hope you can see that, over the full database. And if you follow the mathematics in this paper, and you choose f not too large, Θ(d/log d), and do a certain amount of preprocessing, then you expect to find the shortest vector here, and then you insert it. OK, so I'm ready now to tell you about the workhorse of our lattice reduction libraries, which we call the pump, because it has a sort of pumping motion: you pump up and down, and do this some number of times. And some things are immediately different. So, for example, ER has magically been replaced by EL, which stands for extend left, and the I has lost its subscript, which is something I'll come to in a minute.
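The lifting mechanism mentioned above is based on Babai's nearest plane algorithm. Here is a textbook sketch of that algorithm, not the library's implementation; lifting uses this kind of routine to map a vector of a projected sublattice back to a (hopefully short) vector of the full lattice:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the rows of B (no normalisation)."""
    Bs = np.array(B, dtype=float)
    for i in range(len(Bs)):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Textbook Babai nearest plane: return a lattice vector of B close
    to the target t, by walking the Gram-Schmidt planes from last to
    first and rounding the coefficient at each step."""
    B = np.array(B, dtype=float)
    Bs = gram_schmidt(B)
    t = np.array(t, dtype=float)
    v = np.zeros_like(t)
    for j in range(len(B) - 1, -1, -1):
        c = round((t @ Bs[j]) / (Bs[j] @ Bs[j]))  # nearest plane index
        t -= c * B[j]
        v += c * B[j]
    return v
```

The quality of the output depends heavily on how orthogonal the basis is, which is why, as comes up later in the talk, lifting over a better-reduced basis finds shorter vectors.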
We have to extend left rather than right simply because we came up with a neat algorithmic trick for reducing the average length of vectors in our database, and it breaks extend right, but that's not such a big deal. So instead we progressively sieve to the left, and this extend left operation is implemented using a Babai-lifting technique. And so, yes, we're doing this progressive sieving operation, but now the lifting operation I was talking about is inherent in the sieve. What do I mean by this? I mean that every time in the sieving procedure, so every time we're comparing pairs of vectors to see if there's something shorter, if we find something good, we actually take the lift of that vector all the way up through our basis. By doing so, we're able to keep a list of candidate inserts, one for each position in our basis. So at any point in the sieving procedure, we have a list of the shortest thing we know how to insert at a given position. And by doing this and continuing to sieve progressively left, by the time we've finished our pumping-up phase, hopefully we've lifted a great deal more vectors than we would have done if we had just sieved to here and chosen to lift at the end of the procedure. And then we've finished pumping up, and now we're going to pump down. I said that the I didn't have a subscript, and this represents the fact that classical lattice reduction in cryptanalysis is very... well, some of it, not all of it, is very regimented in where it chooses to insert short vectors. But given that we keep a list of insertion candidates for various positions in our basis, we decided to let the sieve choose more organically. So, for example, you might have a very good insert for b3, but only a particularly unimpressive one for b0, and it doesn't seem reasonable to always force an insertion in a given position.
So in particular we just optimise some score function and then choose where to insert. And then we pump down, so we sieve again. But this sieving procedure, because it has the lifting built in, potentially refreshes all of our insertion candidates, and we continue inserting and sieving down until we're back where we started. Except not entirely, because we've potentially made a great many insertions. Insertions mean vectors are getting shorter, which means they're getting closer to orthogonal. And this lifting procedure, we chose it because it's very good, but also, the more orthogonal the basis you lift over, the shorter the vectors it has the potential to find. And if you were to iterate this procedure once more, you wouldn't necessarily need to increase the dimension of your pump to find new, shorter vectors. So this is the workhorse of our lattice reduction strategies, and you can combine it in all kinds of wonderful and wacky ways. But I just want to give a little overview of G6K and what we did on the implementation side before I finish with some practical outcomes. G6K has three high-level design principles. There's this desire to recycle short vectors between overlapping lattice contexts: this is extend right and extend left, and shrink left, which you didn't see. There's the idea that we should lift vectors throughout the sieving procedure to higher-dimensional lattices and keep the best candidate we know how to insert at a given basis position. And because of this, we're able to decide a posteriori whether, and where, to insert a vector. They're really all quite opportunistic. And on the implementation side of things, we implement a single-filtration-level version of the Becker–Gama–Joux sieve and a triple sieve, which comes from a line of work that allows space complexity and time complexity to be traded off.
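In the same toy instruction grammar as before, a pump might trace out something like the following. The index schedule, the role of f, and the subscript-free insert are illustrative assumptions of mine, not the library's actual schedule:

```python
def pump_trace(d, f):
    """Toy trace of a pump: reset to a small context at the right end of
    the basis, extend left with a sieve after each step until only the
    first f positions remain outside the context (pump up), then
    alternate subscript-free inserts, at positions chosen by a score
    function, with further sieves (pump down)."""
    r = d
    l = r - 1
    trace = [f"reset_{l},{r}", "s"]
    while l > f:              # pump up: extend left, then sieve
        l -= 1
        trace += ["el", "s"]
    while l < r - 1:          # pump down: insert wherever the best
        trace += ["i", "s"]   # candidate lies, then sieve again
        l += 1
    return trace
```

Because the lifting happens inside every sieve call, each "s" during the pump-down can refresh the whole list of insertion candidates before the next "i".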
So in particular, the triple sieve we implement in G6K can be parameterised by how much memory you're willing to give it. And then the final piece of the puzzle on the implementation side is that we make use of a whole host of algorithmic tweaks. So, for example, it used to be that after an insertion, you couldn't simply sieve again. And I think the most important of these is the XOR-popcount trick, which replaces a great deal of inner product calculation in our library. It turns out that when you want to compare whether the sum or difference of two vectors is shorter than one of the summands, you effectively have to take an inner product. But if you come up with a pre-filter, such as an XOR-popcount pre-filter, you can avoid a lot of that computation, and we came up with a particularly nice generalised way of doing this. But please see the paper, because this deserves much more time than I'm giving it. And so on to practical outcomes and records. The red stars are a G6K WorkOut; a WorkOut is obviously a sequence of ever more strenuous pumps, and this is, thank you, the thing we expect to solve exact SVP. The blue dots are BKZ with pruned enumeration in the FPLLL library, which is, I think, the best publicly available enumeration implementation. And we can see that around dimension 70 these implementations cross. So that's less than 10 seconds, and certainly in much lower dimensions than we're interested in for cryptographic purposes. And then from exact SVP, which is an exact notion of the shortest vector problem, you have more approximate or heuristic notions, such as Hermite SVP, which is what one might use in practice when doing lattice reduction. And we didn't know that this event was going to be at Darmstadt when we submitted, or maybe we did, but we broke some Darmstadt challenges: along the bottom axis, the x-axis, the dimensions, and core hours along the y-axis, in a log scale.
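To unpack the XOR-popcount pre-filter mentioned a moment ago: the underlying idea can be sketched with a SimHash-style bit sketch. Each vector stores the signs of its inner products with a few random directions; XOR plus popcount on two sketches then cheaply estimates the angle between the vectors, so only promising pairs get a full inner product. The sketch length, threshold, and function names below are illustrative, and the paper's generalised version is more refined than this:

```python
import numpy as np

def make_sketch(v, directions):
    """SimHash-style sketch of v: one bit per random direction, the sign
    of the inner product with that direction, packed into a Python int."""
    bits = 0
    for idx, r in enumerate(directions):
        if np.dot(v, r) > 0:
            bits |= 1 << idx
    return bits

def passes_prefilter(s1, s2, n_bits, threshold):
    """XOR the sketches and popcount: the number of disagreeing sign bits
    approximates the angle between the vectors.  Counts far from
    n_bits/2 suggest nearly parallel or nearly antiparallel vectors, the
    pairs whose sum or difference may be short, so only those proceed to
    a full inner product."""
    disagreements = bin(s1 ^ s2).count("1")
    return abs(disagreements - n_bits / 2) >= threshold
```

The sketches are computed once per vector, so each pairwise test costs one XOR and one popcount instead of a d-dimensional inner product.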
And the red points, again, are the G6K WorkOut strategies, and we went from years on hundreds of cores to weeks on 40 cores, so this was very pleasant. And then above that, again, we go from exact SVP, to an approximate notion of SVP, to the problems on which you might actually base cryptography. Again, on the Darmstadt challenges for learning with errors we were able to make some progress: these new purple boxes are records that we were able to break with this sieving-based methodology. And so a final word on the implementation, because I want to challenge you. I think far more records can be broken using this library and using these ideas. I don't want to speak for my co-authors, but many of you can do much more intelligent things than I can. And while all of the heavy operations and the optimisation happen on the C++ layer, lots of the algorithm design and the control and the stats gathering happen on a pythonic layer. So really, we think most people can jump in and start sieving away. And I think that's more than enough from me for today, so I'll take all of your questions.

Question: When you say you want to reuse sieves across lattices, those two or three lattices, should they be in a particular relation to each other? Should one be a sublattice? Or is it possible in general?

Answer: Yeah, so in some cases they're sublattices. If you're considering the original lattice basis, then you consider sublattices via inclusion. If you're doing things in the Gram-Schmidt basis, which is one of the things we do in our library, then you're considering projected sublattices. And so in particular, it's easy to take a database in a higher dimension and project down, and then this lifting operation is a sort of anti-projection, not an inverse, of course, but an anti-projection. And so, yeah, for all of the different lattices I spoke about, and I put my hands up, I explicitly avoided quite a lot of detail.
They're all either sublattices or projected sublattices of one another, yeah. Any more questions? Let's thank our speaker again.