Alright, so let's ask the question: can I find a reasonably orthogonal basis for a given set of lattice basis vectors? Again, the security of a lattice cryptosystem relies on the fact that the public basis is, in general, going to be non-orthogonal, and a non-orthogonal public basis means that Babai's rounding algorithm yields very bad solutions to the closest vector problem. What this means is that if I can somehow find a nearly orthogonal basis, I can break the lattice cryptosystem. Well, if I'm working in R^n, then given any set of basis vectors, I can use Gram-Schmidt reduction to produce an exactly orthogonal basis. Unfortunately, that doesn't do a lot of good here, because I can't do it for lattice vectors: in general, I end up with a set of vectors that won't span the lattice, because they're not going to have integer components. In fact, it's even possible that the lattice has no orthogonal basis at all, so trying to find an exactly orthogonal basis for the lattice may be a pointless exercise. So what can we do? Well, we can apply what's known as Gauss's method. If there are just two basis vectors, v1 and v2, the general idea is pretty straightforward. I pick one of those vectors, say v1, to be the first of our nearly orthogonal basis vectors. The projection of the second basis vector onto the first is some multiple k of that vector, and the exact fraction is determined by dot products, k = (v2 · v1)/(v1 · v1); that's our standard projection formula. Now, because I can only add or subtract integer multiples of lattice vectors, I'm going to reduce that second basis vector by an integer multiple of my first basis vector, namely ⌊k⌉ copies of it, where ⌊k⌉, written with a kind of mixed floor-and-ceiling bracket, means k rounded to the nearest integer.
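As a minimal sketch of the single reduction step just described, here's a small Python function (the name `reduce_step` and the tuple representation of vectors are my own choices, not from the lecture):

```python
def reduce_step(v1, v2):
    """Replace v2 by v2 - round(k) * v1, where k = (v2 . v1)/(v1 . v1).

    This is one pass of Gauss's method: subtract the nearest-integer
    multiple of v1 from v2, so the result stays in the lattice.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    k = round(dot(v2, v1) / dot(v1, v1))  # k rounded to the nearest integer
    return tuple(b - k * a for a, b in zip(v1, v2))
```

Because `k` is rounded to an integer before the subtraction, the returned vector is still an integer combination of lattice vectors, which is exactly the constraint that rules out plain Gram-Schmidt here.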
I'm going to repeat this process, swapping the roles of the two vectors, v1 and v2, until I can't do it anymore, that is, until my vectors stay the same. For example, suppose I have a lattice spanned by the two vectors v1 = (201, 37) and v2 = (1648, 297), and I want to find a nearly orthogonal basis for that lattice. I proceed by picking v1 to be my first basis vector, and I project the second basis vector onto the first; that projection is k(201, 37), where the exact value of k is determined by the quotient of the two dot products. Evaluating that, k = (v2 · v1)/(v1 · v1) = 342237/41770 ≈ 8.19. Again, if I were performing the Gram-Schmidt reduction process, I'd use this exact value, but I'm going to round it to the nearest whole number, which is 8. So the projection of v2 onto v1 is, roughly speaking, eight copies of v1, and if I subtract eight of those vectors, I end up with something that is nearly orthogonal. Doing that subtraction, v2 - 8·v1 = (1648, 297) - 8(201, 37) = (40, 1), so now I have (201, 37) and (40, 1) as my two basis vectors. Now I switch roles and repeat the process, using (40, 1) to reduce (201, 37). The same computation gives k = 8077/1601 ≈ 5.05, which rounds to 5, so I reduce (201, 37) by subtracting 5(40, 1) from it, and that gives me the new basis vector (1, 32). So now I have the basis vectors (1, 32) and (40, 1). Again, I'll try to reduce, this time reducing (40, 1) using (1, 32). Evaluating that, k = 72/1025 ≈ 0.07, which rounds to zero, and that tells me I'm actually done with the process. I cannot reduce these vectors any farther.
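The whole repeat-and-swap loop can be sketched in a few lines of Python (a sketch of Gauss's method as described above; the function name `gauss_reduce` is my own):

```python
def gauss_reduce(v1, v2):
    """Gauss's two-vector reduction: repeatedly subtract the rounded
    projection multiple and swap roles, until the multiple rounds to 0."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    while True:
        k = round(dot(v2, v1) / dot(v1, v1))  # nearest-integer coefficient
        if k == 0:
            return v1, v2                     # no further reduction possible
        v2 = tuple(b - k * a for a, b in zip(v1, v2))
        v1, v2 = v2, v1                       # swap roles for the next pass
```

Running `gauss_reduce((201, 37), (1648, 297))` reproduces the worked example, returning the nearly orthogonal pair `(1, 32)` and `(40, 1)`.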
Now this gives me my reasonably orthogonal basis, and you can verify that the dot product, (1, 32) · (40, 1) = 72, is reasonably close to zero compared with the lengths of the vectors. It's worth noting that the sequence of subtractions we performed is actually reminiscent of the Euclidean algorithm. I started with my two original basis vectors; I subtracted one of them from the other a number of times to get a new basis vector, then used that to reduce the other vector, subtracting it a number of times to get another new basis vector, and then tried to reduce again, and in this case I could only subtract it zero times. The fact that I ended up with a zero there tells me I'm now done with the reduction. Well, you might say, yeah, that's great, but what if I try this with a lattice spanned by three or more vectors? The problem is that with three or more vectors, there are too many possible ways of switching around those basis vectors. So what we want is some sort of systematic way of working through our basis vectors. The other requirement is that, whatever the algorithm is, we want to make sure it terminates; an algorithm that doesn't terminate is not an algorithm, so we want it to finish after a finite number of steps. And finally, we'd like the algorithm to actually solve our problem and give us a reasonably orthogonal basis for the lattice. If we're generous in how we interpret "reasonably orthogonal," we can use what's known as the LLL, or Lenstra–Lenstra–Lovász, algorithm, and we'll take a look at that next time.