So, when I do a swap step, the (k-1)st volume, just that one sublattice volume, gets reduced by a factor of √3/2, and all of the other sublattice volumes stay the same, because I've only swapped two vectors. And so that means that the complexity of the entire new basis is less than or equal to √3/2 times the complexity of the old basis. Okay? So, messy algebra, but there's an idea in it, and actually this is usually the case: if you see a page of messy algebra, there's usually an idea there, but then working it out takes a lot of steps.

Okay. So, is there a question in the back? You're going to have to yell, because otherwise I'm not going to be able to hear.

I was wondering, why do we need the Lovász condition there?

We don't need the Lovász condition for that step. We need it for this step, I think. Yeah.

So, how good is the basis that we get? And the answer is that, as I said, the shortest vector that comes out of L cubed is no more than, well, roughly 2 to the n over 2 times longer than the actual shortest vector. And for the product of the lengths of the vectors that come out, again, there's this exponential factor times the determinant. So, remember, Hadamard's inequality says that there's an inequality in the other direction without that factor, and we're always trying to find vectors where Hadamard's inequality is as close to an equality as possible. So this is how close we can get with our basis.

Okay. I think I'm going to sort of skip over this. How many people have had enough messy algebra for today? I want to see lots of hands. Anyway, this is in the notes; let me skip over it. It's a whole page of messy algebra. I guess that was all I was going to do. All right, I guess we could look at it. Actually, no, you know, I think I've had enough messy algebra for today. I'm going to stop here and ask if there are other questions or stuff about this. Yeah.

[Question, partly inaudible, about how much better a basis one can hope to get in practice than the one L cubed guarantees.]

Could you repeat the question? Yeah. So, you'll get a basis where that 2 to the n over 2 factor is, I think, replaced by something that's polynomial in n. I don't know if you'll get all the way down; you won't get all the way. Actually, I don't know, maybe Hendrik knows. I don't think you'll get all the way down to something like square root of n, which is sort of the best possible, but I'm pretty sure you will get polynomial in n. It would be great to have a fast algorithm that actually gets those bases.

Yeah? In the beginning, when we were only working with two vectors, we always projected, I think, the longer vector onto the shorter vector. What would happen if we did it the other way around? Would we still end up with the shortest vector?

If you project the shorter vector onto the longer vector? Yeah, that'll actually tend to work worse rather than better. In fact, the Gram-Schmidt coefficient that you get, when you round it to the closest integer, is almost certainly going to be zero. Yeah, that's a good question. It's not clear what minor modifications might yield good stuff. So yeah, that's a good thing to think about. I mean, usually, for something like L cubed or BKZ, you'll usually do best, I think, if you feed the vectors in shortest to longest.
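To make that two-vector reduction concrete, here is a minimal sketch, written for illustration rather than taken from the lecture: it keeps projecting the longer vector onto the shorter one, rounds the Gram-Schmidt coefficient, and subtracts. The function name reduce_pair and the example vectors are invented for the sketch.

```python
import numpy as np

def reduce_pair(v, w):
    """Reduce a pair of lattice vectors (Lagrange/Gauss reduction in dimension 2):
    project the longer vector onto the shorter one, round the Gram-Schmidt
    coefficient mu to the nearest integer, subtract, and repeat."""
    v, w = np.array(v, float), np.array(w, float)
    while True:
        if np.dot(v, v) > np.dot(w, w):      # keep v as the shorter of the two
            v, w = w, v
        mu = np.dot(w, v) / np.dot(v, v)     # Gram-Schmidt coefficient of w against v
        m = int(round(mu))
        if m == 0:                           # nothing left to subtract
            return v, w
        w = w - m * v

# Projecting the other way, mu' = <v, w> / <w, w> with w the much longer vector,
# is typically less than 1/2 in absolute value, so it rounds to zero and the step
# would do nothing, which is why that direction tends to work worse.
print(reduce_pair([1, 0], [7, 1]))
```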
A really fun thing to experiment with is to do that, and then do it longest to shortest, and see what L cubed does. Of course, L cubed should sort things out during the process, and if you implement it, you can probably watch it move the longer vectors further in and swap the shorter vectors down. Yeah.

[Question, partly inaudible: if the lattice actually has an orthogonal basis, is the algorithm guaranteed to find it?]

I'm sorry, you're saying if the lattice actually does have an orthogonal basis, though you don't happen to know it? Yeah, that's a good question. I think it should; I've never really thought about it. That's a great question. That's the kind of thing where one could do some experiments to see whether it's likely to be true, and if it looks like it's true, then try to prove it. Yeah. Right. Yeah, no, if you just do Gram-Schmidt, even if there is an orthogonal basis, you're right, it's unlikely to find it for you, because you're stuck with the first vector. But because L cubed does the swapping back and forth, it's going to move things down. It probably will find it most of the time. I don't know.

I was just wondering if there was a permutation of the basis that would minimize the swapping that you'd have to do?

Good question. So the question was, is there a permutation of the basis that minimizes the swapping that one has to do? And let's see, I just repeated the question. The answer is, I don't think... well, certainly if you tried all the permutations, one of them would work better than the others, probably. But I don't think there's any way to predict it ahead of time. As I said, I think feeding them in shortest to longest is generally going to do better. But you see, with most of these algorithms there's sort of an average-case running time and a worst-case running time, and likewise an average-case output versus a worst-case output. And with L cubed, if you feed in, or set up, the input really carefully, you can make it take time n squared, and you can get output that is 2 to the n over 2 worse than the shortest vector. But for most inputs you'll do somewhat better. Yeah.

I was also... I did do better than that. I set the timeout to, like, five minutes instead of two minutes, but okay.

Let me go all the way back to the algorithm. Yeah, I just also wanted to mention BKZ, the block Korkine-Zolotarev version. It's almost the same thing here, except we also have this beta parameter, and it's not quite the Lovász condition, but something like it. What you do is you take a string of beta vectors, and you do this block reduction to get a really good basis for the sublattice spanned by those vectors, and then you keep incrementing and decrementing. So essentially, interior to here, there's a lattice reduction on a beta-dimensional lattice, where you're actually doing something that's exponential in beta; that's the amount of time it takes. But it improves the output, okay? So it's not that much more complicated. I've never actually tried implementing BKZ myself, yeah.
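If you want to actually try the shortest-to-longest versus longest-to-shortest experiment mentioned above, here is a bare-bones floating-point L cubed sketch with the usual delta = 3/4 in the Lovász condition. It is only an illustration, not the lecture's pseudocode: the function name lll, the swap counter, and the example basis are all made up, and it lazily recomputes the Gram-Schmidt data from scratch each time, so it is meant only for playing with small n.

```python
import numpy as np

def lll(basis, delta=0.75):
    """A bare-bones floating-point L cubed sketch (delta = 3/4 is the usual
    Lovász parameter).  Recomputes the Gram-Schmidt data from scratch over and
    over, so it is very inefficient, but fine for experimenting with small n."""
    B = [np.array(b, float) for b in basis]
    n = len(B)

    def gram_schmidt():
        Bstar, mu = [], np.zeros((n, n))
        for i, b in enumerate(B):
            v = b.copy()
            for j in range(i):
                mu[i, j] = np.dot(b, Bstar[j]) / np.dot(Bstar[j], Bstar[j])
                v = v - mu[i, j] * Bstar[j]
            Bstar.append(v)
        return Bstar, mu

    k, swaps = 1, 0
    while k < n:
        Bstar, mu = gram_schmidt()
        # size-reduce b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            m = int(round(mu[k, j]))
            if m != 0:
                B[k] = B[k] - m * B[j]
                Bstar, mu = gram_schmidt()
        # the Lovász condition on the pair (k-1, k)
        if np.dot(Bstar[k], Bstar[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # the swap step
            swaps += 1
            k = max(k - 1, 1)
    return B, swaps

# Feed the same basis in the two orders and compare how much swapping happens.
basis = [[1, 1, 1], [-1, 0, 2], [3, 5, 6]]
for ordering in (basis, basis[::-1]):
    reduced, swaps = lll(ordering)
    print(swaps, [list(np.rint(v).astype(int)) for v in reduced])
```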
So is this the algorithm that's supposed to take quadratic time? Is it supposed to what? The one that's supposed to take quadratic time? This takes quadratic time, yeah.

I think we've proven that it takes a quadratic number of loops, but step three doesn't seem like it takes constant time.

Oh, yes. So, I guess you're right. Step three, in principle, is doing Gram-Schmidt. If you do it inefficiently, it'll take time n squared, probably. If you do it efficiently, storing the previous ones that you have, it's certainly just time n. Yeah, there's probably another factor; okay, you caught me, maybe it's n cubed. Well, I think it's still n squared, but you have to do step three a bit cleverly; you have to be cleverer implementing that, yeah. What I actually showed you a proof of is that when you run this algorithm, the swap step gets done no more than n squared times, and then you have to figure out how long the other steps take. And this version, as written, is extremely inefficient; if you implement it that way, it's probably O of n cubed, or O of n to the fourth even. But again, if you just want to play with it, for n equals 20 it doesn't take very much time.

Yeah, maybe one more question and then we're out of time.

Suppose you had a module over the ring of integers, and you look at it as a lattice. Can you cook up an LLL for that?

I'm sorry, for the ring of integers? If you're trying to find a basis for the ring of integers?

Say you have, like, a lattice over the ring of integers.

A lattice, oh, okay. So another way to describe these lattices is that they're Z-modules, and the question is what happens if instead I take the ring of integers in, say, a real quadratic field, for simplicity, and look at a module over that ring. It doesn't quite work directly. I mean, you can reduce that to looking at a Z-lattice; I don't know to what extent you can exploit the O_K structure to speed things up. Certainly not exponential-type speed-ups; maybe there's some improvement. But it's an interesting question, and I'm afraid the answer is I don't know.

Okay, maybe we can leave it there. And if there are further questions, you can ask Professor Silverman at the GSS problem session, which is starting in about eight minutes. About ten, yeah. Okay. Thank you.
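On that last question, here is a rough sketch, purely for illustration, of the reduction to a Z-lattice in the simplest case: take K = Q(√2), so O_K = Z[√2], represent elements a + b√2 as pairs (a, b), and use the two real embeddings of K to turn an O_K-module into an ordinary Z-lattice of twice the rank, which could then be handed to L cubed. The names embed and ok_module_to_z_lattice and the example generators are made up for this sketch.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def embed(a, b):
    """Send a + b*sqrt(2) in K = Q(sqrt(2)) to R^2 via the two real embeddings
    (sqrt(2) -> +sqrt(2) and sqrt(2) -> -sqrt(2))."""
    return [a + b * SQRT2, a - b * SQRT2]

def ok_module_to_z_lattice(gens):
    """gens: generators of an O_K-module, each a list of pairs (a, b) meaning
    a + b*sqrt(2).  As a Z-module the same module is generated by each v and
    sqrt(2)*v, so we return 2*len(gens) vectors; embedding coordinatewise into
    R^2 makes them a Z-lattice basis (assuming the generators were independent)."""
    basis = []
    for v in gens:
        for (p, q) in [(1, 0), (0, 1)]:          # multiply v by 1 and by sqrt(2)
            w = []
            for (a, b) in v:
                # (a + b*sqrt(2)) * (p + q*sqrt(2)) = (a*p + 2*b*q) + (a*q + b*p)*sqrt(2)
                w.extend(embed(a * p + 2 * b * q, a * q + b * p))
            basis.append(np.array(w))
    return basis

# Example: the rank-1 O_K-module generated by (1 + sqrt(2), 3) inside K^2
# becomes a rank-2 Z-lattice in R^4, which one could then feed to L cubed.
for vec in ok_module_to_z_lattice([[(1, 1), (3, 0)]]):
    print(vec)
```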