One important thing we can do with a set of basis vectors is what's called Gram-Schmidt orthogonalization. And that's centered around the following problem: find an orthogonal basis for some vector space.

So suppose we have a basis for our vector space. We can take any vector v1 in our set V as the first vector in our new basis. But what about the rest? If we think about the problem a little, the solution we might come to is the following: we'll go through our set V vector by vector, and replace each vector vi with vi perpendicular, which is orthogonal to all of the preceding vectors but still part of a basis for our vector space.

So suppose I have a basis V for our vector space. Let v tilde k be a linear combination of the basis vectors in which the coefficient ak of vk is not equal to zero. We want to prove, or potentially disprove, the following claim: the set V prime formed when we replace vk with v tilde k is still a basis for our vector space. And that means we need to show that the vectors in V prime are independent and that they span the space.

Let's prove independence first. We can prove, or disprove, that a set of vectors is independent by determining whether every linear combination equal to zero must have all zero coefficients. So suppose I have a solution to a linear combination of the V prime vectors equal to zero. Remember, that's our original set of vectors, except instead of vk we use v tilde k. We know what v tilde k is equal to, so why not substitute that into our equation, collect the coefficients, and simplify. One useful observation here is that we went from a linear combination of the V prime vectors to a linear combination of the V vectors. And so what do we know about the V vectors?
Well, we know that they form an independent set of vectors, and that means the only solution to a linear combination equal to zero is to have every one of the coefficients equal to zero. And so x1 plus xk a1 is zero, x2 plus xk a2 is zero, and so on, and in particular xk ak has to be zero. Now remember, our underlying assumption is that ak is not equal to zero, and so xk must be zero. But if xk is zero, then x1 is zero, x2 is zero, and xi is zero for all i. So the only solution to a linear combination equal to zero is the trivial solution, and V prime is an independent set of vectors.

How about showing that they span the same space? First, the easy direction: if x is in the span of V prime, then x is a linear combination of the vectors in V prime, including v tilde k. But because I know what v tilde k is in terms of v1 through vn, I can replace it and collect like terms, and that tells me x is a linear combination of the vectors in V, so x is in the span of V. On the other hand, because v tilde k is a linear combination of the vectors in V where ak is not equal to zero, we can solve that equation for vk. Then if x is in the span of V, x is a linear combination of the vectors in V; I can substitute my expression for vk, so x is a linear combination of the vectors in V prime, and x is in the span of V prime. And so the span of V and the span of V prime are the same.

This problem leads to an important result. What we had was a basis for our vector space, we replaced one of the vectors with a linear combination of itself and the other vectors, and we proved that the new set of vectors is still a basis.
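A quick numeric sanity check of the claim just proved: replacing a basis vector with a linear combination of the basis vectors, where the replaced vector's own coefficient is nonzero, still yields a basis. This is an illustrative sketch, not the proof; the vectors in R^3 and the coefficients a1, a2, a3 are hypothetical examples, and the determinant test stands in for "is a basis."

```python
# Sketch: check that v1, v2~, v3 is still a basis when
# v2~ = a1*v1 + a2*v2 + a3*v3 with a2 != 0.

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# A hypothetical basis for R^3 (its determinant is nonzero).
v1, v2, v3 = [1, 2, 0], [0, 1, 1], [1, 0, 1]

# Replace v2 with a linear combination whose v2-coefficient a2 is nonzero.
a1, a2, a3 = 5, -2, 7
v2_tilde = [a1 * x + a2 * y + a3 * z for x, y, z in zip(v1, v2, v3)]

print(det3(v1, v2, v3))        # nonzero, so v1, v2, v3 is a basis
print(det3(v1, v2_tilde, v3))  # still nonzero, so the new set is a basis
```

In fact the new determinant is exactly a2 times the old one, which is another way of seeing why the replacement fails precisely when ak is zero.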
So let V be a basis for some vector space. If V prime is formed by replacing some vector vk with a linear combination of the basis vectors in which the coefficient ak of vk is not equal to zero, then V prime is also a basis for our vector space. And this means that we can replace any vector with any linear combination of that vector and the others, so long as the vector's own coefficient is nonzero.

So let's proceed as follows. Suppose our basis is v1, v2, and so on. Let's keep vector v1, and we'll designate v1 perpendicular to be our v1 vector. We'll replace v2 with a linear combination of v2 and the others, where the coefficient of v2 is one. We'll loosen that requirement a little bit later on, but for right now we just need to make sure it's nonzero. The coefficients of the other vectors could be anything, but since the only vector we know we're keeping is v1 perpendicular, we'll make the coefficient of v1 perpendicular the unknown, and we'll set all the other coefficients equal to zero. And so I have this equation: v2 perpendicular is x1 v1 perpendicular plus v2. If we can solve this equation so that v2 perpendicular is orthogonal to v1 perpendicular, we have a second vector that we can keep.

So let's set up that equation. If v1 perpendicular is to be orthogonal to v2 perpendicular, their dot product must be zero. So I'll take the dot product of v1 perpendicular with both sides: on the left, that dot product is zero, and on the right I have a sum of dot products that we can solve for x1.

And now we have two vectors we want to keep, v1 perpendicular and v2 perpendicular. So we lather, rinse, repeat. We'll replace the original v3 with v3 perpendicular, a linear combination of the two vectors we want to keep, v1 perpendicular and v2 perpendicular, and the vector we want to replace, v3. Since there are two unknowns, we'll need two equations. So first we'll find the dot product with v1 perpendicular.
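The single step for v2 perpendicular can be sketched in code. Dotting both sides of v2 perpendicular = x1 v1 perpendicular + v2 with v1 perpendicular gives 0 = x1 (v1 perpendicular . v1 perpendicular) + (v1 perpendicular . v2), so x1 = -(v1 perpendicular . v2) / (v1 perpendicular . v1 perpendicular). The vectors below are hypothetical examples in R^3, with exact fractions to avoid rounding.

```python
# Solve for x1 so that v2_perp = x1*v1_perp + v2 is orthogonal to v1_perp.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = [Fraction(1), Fraction(1), Fraction(0)]
v2 = [Fraction(1), Fraction(0), Fraction(1)]

v1_perp = v1  # keep the first vector as-is
x1 = -dot(v1_perp, v2) / dot(v1_perp, v1_perp)
v2_perp = [x1 * a + b for a, b in zip(v1_perp, v2)]

print(v2_perp)                # the replacement vector
print(dot(v1_perp, v2_perp))  # 0: orthogonal to v1_perp, as required
```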
But remember, v1 perpendicular and v2 perpendicular are orthogonal by construction, so that dot product drops out, and this gives us an equation we can use. And if v3 perpendicular is to be orthogonal to v2 perpendicular, then we also know that dot product is zero. That gives us a second equation, and we can solve our system of equations for x1 and x2.

And this leads to the process known as Gram-Schmidt orthogonalization. Given a set of basis vectors for a vector space, we can form an orthogonal basis as follows. We take the first vector of the orthogonal basis to be the first vector of our original basis. We let each next vector be a linear combination of the vector we're going to replace and the vectors we've already kept. We solve, and we lather, rinse, repeat, successively replacing every vector in our original basis with a new vector. We could express this result as a formula, but we won't. Because it's important to keep in mind: don't memorize formulas, understand ideas.