In this final video for lecture 14, we're going to solve another system of linear equations using the method of Gauss-Jordan elimination. Now, of course, things are going to look a little bit different this time, but that's actually the point of doing this next example: I'm going to illustrate some differences that we should be aware of as we work with these augmented matrices. The first thing is, given the system of linear equations right here, you have the first equation y minus 4z equals 8, then 2x minus 3y plus 2z equals 1, and finally 5x minus 8y plus 7z equals 1. The very first thing you do is write this as an augmented matrix. This is something we can mostly handle just fine. We get a 0 in the first position because there's no x in that first equation, then 1, negative 4, draw your line, and then an 8 for the first row. The second row is 2, negative 3, positive 2, and 1, and the last row is 5, negative 8, 7, and 1. This is our augmented matrix. The first column that's non-zero will be our pivot column, and it gets a pivot position in the 1,1 spot. Now, there is a 0 in that pivot position, so we've got to get something non-zero in there, and we have a couple of options for how to do that. One option is to interchange the first and second rows. If you do that, the first row becomes 2, negative 3, 2, 1, the next row becomes 0, 1, negative 4, 8, and the third row stays 5, negative 8, 7, and 1. For purposes of Gaussian elimination, this is perfectly fine. But be aware that trying to get a 1 in that pivot position is going to be a little bit problematic, because if you divide everything in row 1 by 2, you're going to end up with 1, negative 3 halves, 1, and then 1 half. Some of those are whole numbers, but some of them are fractions, and those fractions spread as you start combining rows together.
There's a negative 8 down here that's going to have to combine with that negative 3 halves, and there's a 1 down here that's going to have to combine with the 1 half, because we want to get rid of this 5, after all. While we can do that, it's going to introduce fractions early in the problem, which could lead to complications later on. Nothing mathematically incorrect, but it might make things arithmetically more difficult; we might make the computations more challenging than necessary. Conversely, what was the other option? Do we instead interchange rows 1 and 3? We pull the 5 up to the top, so the first row becomes 5, negative 8, 7, and 1, the second row stays 2, negative 3, 2, and 1, and the third row is now 0, 1, negative 4, and 8. Your pivot position is still the 1,1 spot, but you have sort of the same problem. If you divide everything by 5, you're going to get the 1 you want, but then you're also going to get negative 8 fifths, 7 fifths, and 1 fifth, which honestly makes those halves we saw earlier look more promising, right? So it seems like we can't divide by anything to get a 1 in that position without introducing fractions, and we don't really want to do that. It turns out there's another thing you can do. Now, this does break from the textbook Gauss-Jordan technique, but for students in a class like Math 1050, this slight modification is typically well received. So we're going to interchange rows 1 and 3, like we said before, because with a 0 in the pivot position you do have to get something non-zero in there. I'm going to grab the 5, and the reason for that will be presented in just a second.
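To make the setup concrete, here's a minimal sketch in Python. The array `A` and the use of NumPy are my own choices, not part of the lecture:

```python
import numpy as np

# Augmented matrix for the system:
#   0x + 1y - 4z = 8
#   2x - 3y + 2z = 1
#   5x - 8y + 7z = 1
A = np.array([
    [0,  1, -4,  8],
    [2, -3,  2,  1],
    [5, -8,  7,  1],
])

# The 1,1 pivot position holds a 0, so interchange rows 1 and 3
# (indices 0 and 2) to pull the 5 up to the top.
A[[0, 2]] = A[[2, 0]]
print(A)
```

The fancy-index swap works because the right-hand side is evaluated as a copy before the assignment happens.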
Okay, so the third row becomes the first row, the first row becomes the third row, and we didn't do anything with the second row, so we just copy it down. Our pivot position is still the 1,1 spot. So what are we trying to do here? I usually like to put those in blue, okay? We want to get a 1 in that position. Scaling can be done, but it gets us fractions. What we can do instead is use row replacement. Notice the following: if I replace row 1 with row 1 minus 2 times row 2, this will actually give us exactly what we want. Why is that? Well, notice what happens here. If I take negative 2 times the 2 at the front of row 2, that gives me negative 4; pause there for a moment and think about that, because 5 minus 4 is equal to 1. It turns out that scaling is not the only way to get a 1 in there. Row replacement itself, if you play around with how the numbers combine, can do it too, because in this case 5 minus 2 times 2 gives me a 1. This can sometimes produce the 1 in a manner that doesn't involve fractions whatsoever. And when that happens, the peasants rejoice, life goes on, and peace is restored in the kingdom. We don't like to do fractions if we can avoid it, so this is a nice little maneuver. Now, we have to do this for the whole row, right? Negative 2 times negative 3 gives me positive 6, which goes right there. Negative 2 times 2 is negative 4 again, and negative 2 times 1 is negative 2. So we have to pay the price; magic comes with a price here, and that price is doing the row replacement across the whole row. But that's a price we're probably willing to pay to get the 1 there: 5 minus 4 is 1, negative 8 plus 6 is negative 2, 7 minus 4 is positive 3, and 1 minus 2 is negative 1.
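As a quick numerical check of this fraction-free maneuver (again just a sketch, with my own variable name `A`):

```python
import numpy as np

# Matrix after interchanging rows 1 and 3
A = np.array([
    [5, -8,  7,  1],
    [2, -3,  2,  1],
    [0,  1, -4,  8],
])

# Row replacement R1 <- R1 - 2*R2 puts a 1 in the pivot position
# without any fractions, since 5 - 2*2 = 1.
A[0] = A[0] - 2 * A[1]
print(A[0])   # [ 1 -2  3 -1]
```

Because every entry involved is an integer, the whole computation stays in integer arithmetic, which is exactly the point of preferring this replacement over scaling by 1/5.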
So we didn't do anything to the other rows, just the row replacement on row 1. Using row replacement like this can help you get a 1 in a pivot position. Now, I said earlier that this is a deviation from Gauss-Jordan elimination. If you're following along with the notes for this exercise, I actually finish the problem there using the fractional approach we discussed earlier: you interchange rows 1 and 2, divide everything in row 1 by 2, which gives us fractions, and then start doing row replacements. That's how I proceed in the notes; I follow Gauss-Jordan to a T. Like I said, we like to avoid fractions because we're humans. Computers don't complain about fractions unless we program them to, which is generally not a good idea; but we as humans can sometimes struggle with them, so this is a nice little maneuver that can help us out. Now, if you're working with really large matrices, this trick actually slows down the algorithm, and that's a cost we might incur. Again, that isn't much of a problem for a computer, and since we're only working with 3 by 3 or 4 by 4 systems, this slight slowing down to avoid the fractions, this detour, is generally well received. And so that's how we're going to finish this problem. Now that we have a 1 in the pivot position, we can get rid of the 2 below it very nicely. We replace row 2 with row 2 minus 2 times row 1, so minus 2 times row 1 contributes a minus 2, a positive 4, a minus 6, and a positive 2. Since there's already a 0 in the 3,1 position, we don't have to do anything with that one. So copy down the first row unchanged. For the second row, we get a 0 in the first spot; negative 3 plus 4 is 1, which is kind of fortuitous, because that entry is my next pivot position, so I got a 1 there by happenstance. That's really nice. 2 minus 6 is negative 4, and lastly, 1 plus 2 is 3. And then copy down the other row here:
0, 1, negative 4, and 8. Now that my first column is completely row reduced, I move to the second column, where there's already a 1 in my pivot position. So I don't have to worry about interchanging or scaling; I like the 1 that's there as it is. What we can do is start row replacing, that is, use row replacement to get 0s here and here. To get a 0 instead of that negative 2, we take row 1 and add to it 2 times row 2, which contributes a 2, a negative 8, and a positive 6. I'm going to start recording the next matrix over here: 1, 0, then a negative 5, then a positive 5. Nothing changes with the second row, so we get 0, 1, negative 4, and 3. Now for the third row, we want to get rid of the 1 right here, which we can do by replacing row 3 with row 3 minus row 2; that contributes a minus 1, a plus 4, and a minus 3. As you combine these together, you're going to notice something. You get a 0 in the first spot, then 1 minus 1, which is the 0 we expected. But next you get negative 4 plus 4, which is a 0 we were not expecting. So the third row is 0, 0, 0, and then on the right-hand side, 8 minus 3 gives us a 5. I want you to analyze this matrix. You ended up with pivot positions in the first and second columns. There is no pivot in the third column, because the pivot position should be there, but there's a 0 in it and there are no other rows to get something non-zero from. So it turns out there's no pivot in that third column. Now look at the coefficient matrix: you have a row of 0s. So the left-hand side of that last equation is 0, and 0 equals what? 0 equals 5. This is a contradiction; 0 does not equal 5. I just checked the almanac of numbers, and sure enough, 0 is not equal to 5.
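The elimination steps just described can be sketched end to end. This is my own code, not from the lecture notes; the variable name `A` is my choice:

```python
import numpy as np

# Matrix after the fraction-free pivot trick: R1 <- R1 - 2*R2
A = np.array([
    [1, -2,  3, -1],
    [2, -3,  2,  1],
    [0,  1, -4,  8],
])

A[1] = A[1] - 2 * A[0]   # clear the 2 below the first pivot  -> [0  1 -4  3]
A[0] = A[0] + 2 * A[1]   # clear the -2 above the second pivot -> [1  0 -5  5]
A[2] = A[2] - A[1]       # clear the 1 below the second pivot  -> [0  0  0  5]

# The last row reads 0x + 0y + 0z = 5, i.e. 0 = 5: a contradiction,
# so the system is inconsistent.
inconsistent = not A[2, :3].any() and A[2, 3] != 0
print(A)
print("inconsistent:", inconsistent)
```

The inconsistency test is exactly the check described in the transcript: all coefficient entries of a row are zero while the augmented entry is not.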
Because we get this contradiction, this is an example of the inconsistent case. We've seen this before, and as you work through row reducing a matrix, this will happen sometimes: there is no solution to the problem, and you recognize that as you row reduce. Now, as we were row reducing this matrix, clearing this column right here didn't help reveal it at all; it actually came from row reducing this last row. And when one looks at the proper Gauss-Jordan elimination technique, it turns out it doesn't create 0s above and below the pivots all at once. The first part of the algorithm only creates 0s below the pivots, ignoring any numbers above, and comes back to them later. With Gauss-Jordan elimination, there's what we call the forward phase and the backward phase. In the forward phase, you construct your pivots and you only create 0s below the pivots; in the backward phase, you then put 0s above the pivots. You can remember the names this way: in the forward phase, you move left to right, putting 0s below the pivots, and in the backward phase, you move right to left, putting 0s above the pivots. When we learned about Gauss-Jordan in this lesson, I kind of skipped over that distinction; I combined the forward phase and the backward phase. For a Math 1050 student, that's sufficient; you don't need the distinction. It only starts to become really profitable when you look at bigger and bigger linear systems, which we will not see in this course. Of course, if you want to learn more about that, you should look into a class like linear algebra, where we would play with this nuance about forward phase versus backward phase.
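To illustrate the idea, here is a rough sketch of just the forward phase (my own code, using exact fractions so nothing is lost to rounding; the function name and structure are not from the lecture). Run on our system, it surfaces the telltale zero row without ever touching the entries above the pivots:

```python
from fractions import Fraction

def forward_phase(M):
    """Forward phase only: build pivots left to right and create
    0s strictly BELOW each pivot, ignoring the entries above."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols - 1):          # skip the augmented column
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue                    # no pivot in this column
        M[r], M[piv] = M[piv], M[r]     # interchange rows
        M[r] = [x / M[r][c] for x in M[r]]          # scale pivot to 1
        for i in range(r + 1, nrows):               # clear below only
            M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

M = forward_phase([[0, 1, -4, 8], [2, -3, 2, 1], [5, -8, 7, 1]])
print(M[-1])   # zero coefficients with a nonzero right-hand side
```

The bottom row comes out as 0, 0, 0 with a nonzero entry on the right (5/2 here, because this version scales the pivot rows), which is the 0 = nonzero contradiction, found before any backward-phase work is done.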
That distinction becomes extremely critical in that situation. But one advantage of doing the forward phase first is that, by ignoring the positions above the pivots and only focusing on the positions below, we would find this row of 0s very quickly, skipping many of the other computations, because once you find a row of 0s, that tells you something. In this case, it told us that we had no solution, because the system was inconsistent. Now, conversely, I'm not going to do a separate problem, but let's imagine that our final matrix looked a lot like the matrix we had: 1, 0, negative 5, 5, then 0, 1, negative 4, and 3. So suppose all of that was the same, and you still have this row of 0s, but suppose the last number turned out to be 0 as well. Since that's a little bit different, I'm going to hide the previous answer so there's no confusion. If our matrix had instead simplified this way, we'd still have the same pivot positions; in fact, the coefficient matrix is in reduced row echelon form, the exact same echelon form as before, but with this last row instead. When you look at the bottom row, it now gives us the equation 0 equals 0. That's not a contradiction; that's universally true. No concern with that whatsoever. As a system of equations, this would tell us that x minus 5z is equal to 5, the second equation tells you y minus 4z is equal to 3, and the third equation tells you 0 equals 0, which again is like saying the sky is blue: true, but it doesn't restrict the solution whatsoever. It's not so helpful, so you can just remove it from consideration; we don't need that row. Now we have two equations with three unknowns, but notice that x and y can be expressed in terms of z. If you solve the first equation for x, you get x equals 5 plus 5z, and the second equation gives y equals 3 plus 4z. If you treat z like a free variable, then we get the following general solution.
Let's say that z is equal to some number t, just some unspecified real number. Then your general solution is x equals 5 plus 5t, y equals 3 plus 4t, and z equals t itself. That would be our general solution in that situation. What I want to illustrate here is that as you row reduce your matrix, if you do get a row of zeros, you need to check: do I get a contradiction? If you get a contradiction, it means there is no solution, the inconsistent case. But if instead you have a row of zeros equal to zero, it means you can disregard that row, and since you lost a row, you essentially lost an equation. You have more variables than equations, and in that situation you're going to have free variables, and you can determine the general solution in terms of those free variables and the dependent variables. This illustrates all the possibilities you can see when you're working with linear systems and solving them with these augmented matrices, that is, using the Gauss-Jordan elimination technique. It turns out that while it has a higher learning curve than some of the other techniques we've learned for solving systems of linear equations, this is a highly effective technique: it's efficient, it's fast, and once a student masters it, it produces the correct answer with very, very little error. I highly recommend that as you work through linear systems, you practice this technique of Gauss-Jordan elimination; it'll prove fruitful for you moving forward. And with that said, that brings us to the end of lecture 14, and it also brings us to the end of discussing systems of linear equations. We'll revisit the topic of systems of equations later in the semester, but in those cases they'll be systems of nonlinear equations, and that's something to look forward to in the future.
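One last sanity check on that general solution (my own snippet, not from the lecture notes): we can plug x = 5 + 5t, y = 3 + 4t, z = t back into the two surviving equations of the consistent variant for several values of the parameter t.

```python
# General solution of the modified (consistent) system:
#   x - 5z = 5  and  y - 4z = 3, with z free.
def solution(t):
    return 5 + 5 * t, 3 + 4 * t, t

for t in (-2, 0, 1, 3.5):
    x, y, z = solution(t)
    assert x - 5 * z == 5   # first equation holds
    assert y - 4 * z == 3   # second equation holds
print("general solution checks out for all sampled t")
```

Both equations hold identically: x minus 5z collapses to 5 + 5t minus 5t = 5, and y minus 4z collapses to 3 + 4t minus 4t = 3, for every t.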
But like I said, that brings us to the end of lecture 14 right now, thanks for watching. If you've learned anything about augmented matrices, linear systems, Gauss-Jordan elimination, any of that stuff, please like these videos, subscribe to the channel to see more videos like this in the future. If you have friends or colleagues who might be interested, share these videos with them, I'd be glad to have them watch them too. And as always, if you have any questions, feel free to post them in the comments below, and I'll be glad to answer them as soon as I can.