Sorry, the clicker doesn't work... oh, okay. Okay, sorry about that. Thanks for the introduction.

So, while the previous talk was about improving lattice reduction itself, what we are targeting is the analysis of lattice reduction: how can we predict what an adversary can get out of lattice reduction? Before stating our results, I want to take a step back and say a few words about what we actually require from algorithms for cryptanalysis, in this case in particular from lattice algorithms. I would argue there are essentially three requirements. One is, obviously, that you want the reduction algorithms to be practically performant; this allows us to play around with them in small to moderate dimensions and see how they behave. We also want them to be asymptotically performant, because that makes sure nothing weird happens in larger dimensions, where we can no longer run the algorithm and are not quite sure what happens. These two requirements are pretty standard for any algorithm, but there is a third requirement unique to cryptanalysis: we want an average-case prediction that is as simple as possible, so that somebody designing a cryptosystem can apply it easily and figure out how long an adversary needs to run the algorithm in order to break the cryptosystem. One might think that the first two requirements imply the last one, but this is unfortunately not true; you can, however, use the first two properties of an algorithm to guide you to an average-case prediction.

So let's see what we have in current algorithms. We obviously have BKZ; the last talk was about it. BKZ is widely deployed and used a lot, mostly because of its practical performance: it performs really well in practice; you run it, and it is really nice. On the other hand, its asymptotic performance is actually pretty good, too.
By now there has been a lot of work on this in the last few years, and we know we can do a little better in the asymptotic worst case, but it is roughly in the same ballpark, so that is not too much of an issue. However, BKZ's average-case prediction is still quite notorious. The state of the art right now is to run a simulator: you generate some typical input, you run the simulator, and you look at what comes out. The simulator is based on some really well-founded heuristics, but also on some heuristics where it is not quite clear whether they actually hold, and it is quite inconvenient to apply for cryptanalysis, or for somebody who just wants to set the parameters of a cryptosystem. So we are trying to improve on this situation.

What else do we have? On the other end, we have the slide reduction algorithm. It is not as widely known as BKZ and not as widely used, mostly because its practical performance is believed to be quite bad. On the other hand, its asymptotic performance is as good as it gets at this point for lattice reduction; it is asymptotically the best algorithm we have. Its asymptotic analysis is also really clean, a nice generalization of the LLL analysis, and out of this clean analysis a very simple average-case prediction just falls out. Unfortunately, though, it is not used a lot, because its practical performance is not very good.

So the natural question for us was, obviously: can we get the best of both worlds? Our answer is a self-dual version of the BKZ algorithm. We want our version of the algorithm to be just as performant as BKZ, so we model it after the BKZ algorithm, and we use experimental evaluation.
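As a side note on what "a very simple average-case prediction" can look like: the sketch below uses the generic Gaussian-heuristic-based closed formula for the root Hermite factor as a function of the block size beta. This is the standard textbook estimate, not necessarily the exact formula from this work; the function name and constants are illustrative.

```python
import math

# Illustrative closed-formula prediction (standard Gaussian-heuristic estimate,
# not necessarily the exact formula from this talk): the root Hermite factor
# delta achieved by block reduction with block size beta.
def root_hermite_factor(beta):
    gh = beta / (2 * math.pi * math.e) * (math.pi * beta) ** (1 / beta)
    return gh ** (1 / (2 * (beta - 1)))

# A larger block size yields a smaller delta, i.e. shorter vectors found.
print(root_hermite_factor(50) > root_hermite_factor(80))  # True
```

The point is that a designer can evaluate such a formula directly for a given block size, with no simulator run.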
We can show that it is just as performant as, or at least comparable to, BKZ. But then we sidestep the issues that BKZ has with its asymptotic analysis and its average-case prediction. Using these sidesteps, and borrowing a little from the techniques of slide reduction, we actually get an algorithm that is asymptotically just as good as slide reduction, and we get a very simple average-case prediction, meaning that an adversary, or a cryptanalyst, can just use a closed formula to predict what comes out of the algorithm. You don't have to run a simulator anymore: you can just pick a block size, and we can say roughly what is going to come out.

So why do I say that slide reduction behaves so badly? Well, that is only based on the one experimental study so far that included slide reduction, the one by Gama and Nguyen in 2008, who also designed slide reduction. Here, on the x-axis, you see the block size parameter of the reduction algorithm, which is sort of a measure of the time spent on it.
That is, a measure of the runtime. And here you see a normalized measure of the length of the shortest vector found by the algorithm, so lower is better. As you can see, BKZ clearly outperformed slide reduction in their experiments, and that is why no one has used slide reduction since then; it was considered more of a theoretical algorithm. But since we borrow techniques from slide reduction, and we had to implement our algorithm anyway, we had all the tools ready to implement slide reduction relatively easily as well, so we included it in our experimental study.

Here is what we found. The x-axis is the same again: the block size, a measure of the runtime of the algorithm. The vertical axis is the root Hermite factor, again a normalized measure of the shortest vector found. For rather small block sizes, up to around 50 (which was the limit of the previous study, as I just showed you), BKZ and self-dual BKZ clearly outperform slide reduction. But as you increase the block size, slide reduction actually becomes quite competitive as well. And as our study shows, our self-dual BKZ algorithm performs comparably to BKZ. So that is our result on the experimental side.

We also have another technical contribution that we hope to be of independent interest. Oh, and one more thing: all our code and data is available online, so if you want to run your own analysis on the data, add more experiments, or play around with the code,
please feel free. So, as the third contribution, recall what BKZ does over and over: it looks at projected sublattices of the lattice basis, and it applies one operation again and again, namely SVP reduction, where we take the first vector of a (projected) lattice basis and make it as short as possible. Because the sum of these log lengths is a lattice invariant, all the other ones get larger, and this is how mass is shifted from the left to the right; BKZ tries to make the shape of the basis as horizontal as possible just by running this over and over. I will go into a bit more detail later. But for now: we actually have another tool in the toolbox for lattice reduction, known as dual SVP reduction. There, instead of minimizing the first vector in this shape, you try to maximize the last one. It is not quite obvious how to do that, but what you do is transition into the dual. Well, a few words on that.
If you maximize the last one, then again the sum is a lattice invariant and stays the same, so the other ones get shorter, and again you are moving mass to the right and getting the whole thing more horizontal. The way you achieve that is: you transition to the dual of the lattice, you compute the shortest vector there, you insert it into the dual basis, and then you go back into the primal. Unfortunately, computing the dual of a lattice basis, while asymptotically dominated by the SVP oracle step, is quite costly in practice: it involves a matrix inversion, so a factor of at least n^2 to n^3, which is kind of annoying. That is maybe also a reason why slide reduction wasn't implemented before; it uses dual SVP reduction.

What we came up with is an algorithm that allows you to compute this reduction of a basis without ever computing the dual of the basis. You never have to do this dual computation: you implicitly run an enumeration in the dual without ever computing the dual. If you look at the algorithms (I don't want you to understand them right now, I just want to show you), this is a dual enumeration and this is a primal enumeration as you would implement it, and they are structurally really, really similar; that is all I want to show with this picture. A few times you run a loop in a different order; sometimes a bound has a plus here; you divide here where you multiply there. But structurally they are essentially the same, and it turns out that, as you would expect from such similar algorithms, they are also just as efficient.
So this is the rate of enumeration in several dimensions, the higher the better. The rate, in units of 10^7 nodes per second, is essentially the same in the dual and the primal. Also, if you have an implementation of a primal enumeration, adding a dual enumeration is not very hard, because you can adapt your implementation really easily. That's what we did: we added it to fplll, and this part is by now actually in the main branch of fplll. We are also working on getting our reduction algorithms included, but the dual enumeration is already part of it, so if you want to play with it, feel free.

All right, those were our results. Now I'm going to go a little more into the details, and first cover some preliminaries that Yoshinori didn't cover. For one, we have to talk about the dual lattice a little. The dual lattice is defined as the set of all vectors in the span of the lattice whose scalar product with every lattice vector is an integer. There is also a notion of a dual basis: the first notion is a relationship between lattices, and now we also want a relationship between bases. As you know, a lattice can have infinitely many bases, but for any basis B, if you compute the basis D such that the two have the same span and are, in some sense, inverses of each other, this means that D actually generates the dual of B.
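As a small numerical sketch of this dual-basis relationship (my own illustration, not code from the talk; rows are basis vectors): for a full-rank row basis B, the dual basis D = B(B^T B)^(-1) has the same span and satisfies <d_i, b_j> = delta_ij.

```python
import numpy as np

# Sketch: the explicit dual-basis computation. For a basis B (rows are basis
# vectors), the dual basis D satisfies D @ B.T == I, i.e. <d_i, b_j> = delta_ij,
# and D generates the dual lattice.
def dual_basis(B):
    B = np.asarray(B, dtype=float)
    # D = B (B^T B)^{-1}; works whenever B has full row rank.
    return B @ np.linalg.inv(B.T @ B)

B = np.array([[2.0, 0.0], [1.0, 3.0]])
D = dual_basis(B)
# Check the defining property: pairwise inner products form the identity.
print(np.allclose(D @ B.T, np.eye(2)))  # True
```

Note that this explicit computation contains exactly the matrix inversion, the n^2-to-n^3 overhead mentioned earlier, that the implicit dual enumeration avoids.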
So they actually generate dual lattices. We also know something about the relationship between their Gram-Schmidt orthogonalizations; we call the orthogonalized vectors b_i*, and the relationship of their lengths to the dual is given by this formula: the length of b_i* is one over the length of *d_i. Why do I write *d_i? It turns out the right way to look at the orthogonalization of the dual is to orthogonalize it from right to left rather than from left to right: you start from the last vector, you orthogonalize everything before it, and you keep going. If you do it that way, the Gram-Schmidt vectors are nicely related, and in particular, substituting n for i, the length of b_n* is just one over the length of the last vector of the dual basis. Because we orthogonalize the dual basis from right to left, that last vector is just a dual lattice vector, with no orthogonalization whatsoever. So in order to maximize b_n*, we minimize d_n, and this is where going into the dual comes from: SVP-reducing the dual allows you to maximize b_n*.

A couple more things; we're almost done with the preliminaries, but we need a few more equations. In particular, I want to talk about some bounds on SVP that are useful in the analysis. Obviously we have Minkowski's bound, which tells us that for any lattice, the first minimum lambda_1, the length of the shortest non-zero vector in the lattice, is at most O(sqrt(n)) times the n-th root of the determinant of the lattice. And that is independent of the basis, by the way.
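Going back to the Gram-Schmidt duality from a moment ago, it can be checked numerically. In this sketch (my own illustration; plain floating-point Gram-Schmidt on row bases), the dual basis is orthogonalized from right to left, and each product ||b_i*|| * ||*d_i|| comes out as 1.

```python
import numpy as np

def gso(B):
    """Plain Gram-Schmidt, row by row, top to bottom (no normalization)."""
    B = np.asarray(B, dtype=float)
    Bstar = B.copy()
    for i in range(len(B)):
        for j in range(i):
            Bstar[i] -= (Bstar[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
    return Bstar

B = np.array([[2.0, 0.0], [1.0, 3.0]])
D = B @ np.linalg.inv(B.T @ B)      # dual basis: D @ B.T == I
Dstar = gso(D[::-1])[::-1]          # orthogonalize the dual from right to left
primal_lens = np.linalg.norm(gso(B), axis=1)
dual_lens = np.linalg.norm(Dstar, axis=1)
print(np.allclose(primal_lens * dual_lens, 1.0))  # True: each pair multiplies to 1
```

In particular the last row of Dstar equals the last row of D itself, which is why SVP-reducing the dual maximizes the last primal Gram-Schmidt length.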
That basis-independence is also interesting. But if you replace lambda_1 by the actual first Gram-Schmidt vector you have after SVP-reducing your basis, and take the log of this equation, this is the inequality you get. Note that the log of the determinant is just the sum over the shape of the basis, that is, over the logs of the ||b_i*||. So taking logs gives you an affine inequality that you can exploit, as we'll see later: we model the log ||b_i*|| as variables, and the output variable becomes an affine combination of the input variables.

Furthermore, we have the Gaussian heuristic, which looks very similar; in this notation it pretty much looks the same. The Gaussian heuristic says that for random lattices (there is a notion of random lattices for which you can actually prove this heuristic rigorously), the first minimum is quite close to O(sqrt(n)) times the n-th root of the determinant, but with constants that we can actually compute. So we get something somewhat stronger, because we get a rough equality rather than an inequality, and we can do the same trick of taking logs to get a rough affine equality again.

However, I put a star there for a reason. It has been shown in previous experimental studies that there is a caveat: in the context of block reduction, the Gaussian heuristic is only accurate for large enough blocks, only when the block size grows beyond 50 or so.
That is when the Gaussian heuristic becomes accurate; below that, it is actually horribly inaccurate. And that is where a lot of the problems for BKZ stem from, as we will see now.

Let's have a brief look at BKZ again. As I mentioned before, it takes this SVP operation, which minimizes the first vector, and applies it to the first block of the basis, the first projected sublattice, where the block size parameter is beta. Then it keeps doing that, moving the window along, until it reaches the end. When it reaches the end, we are kind of stuck: the window is at the right edge and can't move any further. What BKZ does is simply make the block smaller and smaller, and at some point, when the block has size one, the problem is solved trivially, and then we start over. This was a tour of BKZ, as Yoshinori already mentioned, and BKZ just runs tours over and over again.

So how would you analyze this? The state-of-the-art analysis looks at what happens in each step. Let's model these bars, our log ||b_i*||, as variables, and say we know what is coming into the algorithm. Then, using Minkowski's theorem, this inequality here, we can write the output as an affine function of the input. In the next step we can do the same thing again, and we do that over and over again.
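To illustrate the flavor of this style of analysis, here is a toy affine dynamical system (purely illustrative numbers, not the actual BKZ system): one "tour" maps the vector x of log Gram-Schmidt lengths to Ax + b, and iterating converges to the fixed point x* = (I - A)^(-1) b.

```python
import numpy as np

# Toy illustration: model one tour as an affine map x -> A x + b on the log
# Gram-Schmidt profile. The fixed point bounds the output quality; the speed
# of convergence bounds the number of tours needed. A and b are made up.
A = np.array([[0.5, 0.2], [0.1, 0.6]])  # hypothetical contraction matrix
b = np.array([1.0, 0.5])

x = np.zeros(2)
for _ in range(200):                     # iterate the affine dynamical system
    x = A @ x + b

fixed_point = np.linalg.solve(np.eye(2) - A, b)  # x* = (I - A)^{-1} b
print(np.allclose(x, fixed_point))       # True: the iteration converges to x*
```

In the real analysis, A and b come from applying Minkowski's bound (or, heuristically, the Gaussian heuristic) to each block of a tour.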
What you can show is that you can set up a dynamical system: if you have upper bounds on the input, then applying the system once gives you upper bounds on the output. Now you can use techniques from dynamical systems analysis and prove convergence of the system to a fixed point. The fixed point gives you an upper bound on what comes out of the algorithm, and the rate of convergence gives you an upper bound on the runtime. This was done by Hanrot, Pujol, and Stehlé in a very nice paper, applied to BKZ.

Now the hope is that if you replace Minkowski's bound by the Gaussian heuristic, a very easy average-case analysis would fall out. Unfortunately this is not true, because, remember, we have these smaller and smaller blocks at the end, and there the Gaussian heuristic doesn't hold. So you can't just take the analysis in that paper and translate it into an average-case heuristic analysis. This is essentially what the BKZ simulator tries to do, by the way, and in the end it has to resort to some unproven assumptions about what the shape of the tail is going to look like. This is where a lot of the problems for BKZ in the average-case prediction come from, and this is why you need a simulator and can't just use math and a closed formula. So how does this differ for self-dual BKZ?
Well, we said we want to retain BKZ's practical performance. So we start out the same way as BKZ: we take the first block, SVP-reduce it, and shift to the right. But once we get to the end, we have this last block. BKZ would now make the blocks smaller and smaller and reduce them, and that is where all the trouble comes from. We say: we don't do that. Instead, at the end we apply a dual SVP reduction. We leave the block size as it is, but we apply dual SVP reduction, and now we just move the block in the other direction, back to the left. Once we get to the other side, we start doing SVP reduction again. So you have this window of size beta that just moves back and forth over the basis. We mainly do that because we want to get rid of this tail of BKZ.

You might think that this now looks more complicated than BKZ, so the analysis might be more complicated. But as it turns out, you can view this as always doing one kind of tour: you do a BKZ tour, and then, instead of doing another one, you first compute the dual of the basis and reverse it, reverse all the vectors, and then you start over. So you run BKZ, go into the dual, reverse the basis, run BKZ again, reverse, compute the dual, and so on. This is actually a much simpler system than the BKZ system, and because we don't have the smaller blocks, we can simply plug in the Gaussian heuristic and get an average-case analysis essentially for free.

One more thing that is very interesting about this: it came up in the previous talk that in lattice reduction there is often an additional assumption made, namely that the output of lattice reduction follows a straight line, which is known as the Gram-Schmidt, no, the geometric series assumption,
sorry, Schnorr's GSA. Schnorr's GSA says that at the end of lattice reduction the output looks roughly like a straight line. Well, if you look at the fixed point of our system, it actually proves that this is true under the Gaussian heuristic. So for our algorithm, the Gaussian heuristic implies the GSA, modulo a little window at the end: for all the other vectors, the GSA holds true if the Gaussian heuristic holds. This was not known about BKZ; for self-dual BKZ we don't have to make it as an additional assumption. I was going to talk a little about the dual enumeration, but I'm out of time and refer you to the paper. Thank you very much.

Any questions?

Question: Hi, thank you for the talk. Just a quick question: when you compared the performance of BKZ to self-dual BKZ, was it the full BKZ or the truncated one?

Answer: It was truncated. We went up to block size 80, and at some point you can't run full BKZ; you have to do an early termination. That was the question, right? So that is what we did, but that is also what we did for self-dual BKZ. The terminating condition in self-dual BKZ is also a little tricky, because you are always changing the basis, always computing the dual, so you have to be a bit careful with that. But the dynamical systems analysis shows that after a few rounds you don't have to run it anymore; you can terminate.

Other questions? Okay, let's thank the speaker again.