Welcome everyone to our review for exam three for Math 2270, Linear Algebra, at Southern Utah University. If you're watching this video, it means you've probably made it to the end of the semester, so give yourself a pat on the back, say whoopee, and be excited that you're here. It's been a fun semester so far, so let's not trip on the finish line; let's make sure we finish up this exam and do pretty well on it, right? I should point your attention to the instructions that exist for every exam. Make sure you read through these and familiarize yourselves with the instructions and policies for this exam. They are the same as for exams one and two, but things like the dates, the location, and the time limits sometimes change from semester to semester, so do check the information on Canvas so you know exactly when the test will take place, where it will take place, and how long you have to complete it. This exam contains 16 questions, as you can see from this table. You will see seven questions in the multiple choice, worth five points each. You will see three questions in the short response, worth six points each. Then there are six questions in the free response, ranging from seven to eight points, about the same. And finally, do remember to create and submit your note card, as that is worth two points on this exam. This test covers the third arc of our textbook, Linear Algebra Done Openly: chapter five about determinants, chapter six about eigenvalues and eigenvectors, and the vast majority of chapter four about orthogonality, particularly section 4.3 up to the end of that chapter, and then all of chapter five and chapter six, right? So if we were to write that down, we're covering section 4.3 through the end of the book, section 6.5, like that.
On Canvas, there is an exam syllabus you can download that talks about a lot of these details as well. I recommend you read through that too. You can also find the link to the exam syllabus in the description of this YouTube video. Likewise, you can find this document you see in front of you, this practice exam, available for download either through Canvas or directly from YouTube. So without further ado, let's jump immediately into the content. Like we said, let's talk about our multiple choice questions. They're worth five points each. Please select what you believe to be the correct answer, and only the correct answer. If there's no response, multiple responses, or an incorrect response selected, you don't get any points, so make sure you select the correct response. I don't really care how you do it: if you want to circle it, that's great; checkmarks are great; some people fill it in. Whatever you're most comfortable with, just clearly indicate on the paper what the correct response is, and make only one selection. On question number one, we're going to see a question involving the fundamental theorem of linear algebra, particularly with emphasis on things like the nullity, the rank, the corank, and the conullity of a matrix, with respect to the fundamental spaces — the row space, the null space, the column space, the left null space — and their dimensions, right? So if we're given some matrix, like in this example a 3-by-5 matrix, and we're given some information about its row space, its column space, its left null space, or its null space, what are the relationships we know? Well, the fundamental theorem of linear algebra tells us things like: the rank and the corank are equal to each other.
And this is equal to the number of pivots you'll find in the matrix in its echelon form. We know that the rank plus the nullity is always equal to the number of columns, a.k.a. n, as we often write this as an m-by-n matrix. And we also know things like: the corank plus the conullity is always equal to m, the number of rows in the matrix. And like I said, the rank and corank are always equal to the number of pivots. So this tells us things like: the nullity is the number of columns minus the number of pivots, and the conullity is the number of rows minus the number of pivots. Be able to compute these values given some of the information provided here. The rank-nullity theorem, as part of the fundamental theorem of linear algebra, is what we want to study there; you can see more about this in section 4.6 of the textbook. Question number two is a question about eigenvectors, namely recognizing when you have an eigenvector. You'll be given a matrix A like you see right here, and you'll be given six candidate vectors; you can see it kind of slid off the page a little bit — there is another response over here. I hope that's okay if we don't show all the responses. As always, I should mention as a reminder that the solutions to this test can be found in the same PDF document; scroll to the end of the document and you can see all the solutions in front of you, if you want to check your work. So, can you identify whether you have an eigenvector for a matrix or not? For this one, you're given six vectors, and you're not asked to find the eigenvectors of some arbitrary matrix; you're asked to check whether these candidates are eigenvectors of this specific matrix. So at the very least, you could just take all six vectors and multiply each by A until you recognize whether you have an eigenvector or not, right?
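As an aside, that brute-force check — multiply each candidate by A and see whether the result is a scalar multiple of the vector — can be sketched in Python. The matrix and candidate vectors below are made up for illustration; they are not the exam's values.

```python
# Sketch of the eigenvector check: v is an eigenvector of A exactly when
# A v = lambda v for some scalar lambda. Matrix and candidates are made up.

def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def is_eigenvector(A, v, tol=1e-9):
    """Return (True, lambda) if A v is a scalar multiple of v, else (False, None)."""
    Av = mat_vec(A, v)
    # Find a nonzero entry of v to read off the candidate scalar lambda.
    pivot = next((i for i, x in enumerate(v) if abs(x) > tol), None)
    if pivot is None:            # the zero vector is never an eigenvector
        return False, None
    lam = Av[pivot] / v[pivot]
    ok = all(abs(av - lam * x) <= tol for av, x in zip(Av, v))
    return ok, (lam if ok else None)

A = [[2, 1],
     [1, 2]]                        # made-up symmetric matrix
print(is_eigenvector(A, [1, 1]))    # (True, 3.0) since A[1,1]^T = [3,3]^T
print(is_eigenvector(A, [1, 0]))    # (False, None)
```

The same loop over all six candidates is exactly the "just multiply by A" strategy described above.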
So with this one right here, if we were to do the multiplication, you can carry through the process here. And if it's an eigenvector, the result should be some lambda times the original vector. I'm not claiming this is an eigenvector — it probably isn't. But can you check and identify whether you have an eigenvector for a specific eigenmatrix, or for a specific matrix, excuse me — don't ever listen to me when I talk about eigenmatrices, I never said such a thing, it's a lie. If you need some more practice on this one, you can go to section 6.1 of the textbook, eigenvalues and eigenvectors, where we had similar questions, examples, and exercises checking this very principle. Question number three has to do with determinant calculations using properties of the determinant. You're given some information about matrices, say A and B; in this case, we're given that they're both 4-by-4 matrices, and we know some information about their determinants: A's determinant is given as negative two, and B's determinant is three. You're then asked to calculate a determinant using properties of the determinant. Some properties that will be extremely relevant for us are things like the following: the determinant of a product is equal to the product of the determinants, so the determinant factors in a manner like this. Other things we should know: the determinant is unaffected by transposition, so the determinant of B is the same as the determinant of B transpose. And if you take the determinant of some scalar multiple of A, this is always equal to c to the n times the determinant of A, where in this context we're assuming A is an n-by-n matrix. A common mistake that sometimes shows up here is thinking that the determinant is linear.
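These product, transpose, and scaling properties can be sanity-checked numerically. The 2-by-2 matrices below are made up for illustration; the point is that every assertion passes, including the c-to-the-n scaling rule.

```python
# Numerically checking the determinant properties above on made-up 2x2 matrices.

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

def mat_mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]     # det = -2
B = [[0, 1], [5, 2]]     # det = -5

# det(AB) = det(A) * det(B)
assert det2(mat_mul2(A, B)) == det2(A) * det2(B)

# det(A^T) = det(A)
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
assert det2(At) == det2(A)

# det(cA) = c^n * det(A), with n = 2 here -- NOT c * det(A)
c = 3
cA = [[c * x for x in row] for row in A]
assert det2(cA) == c**2 * det2(A)
print("all determinant properties check out")
```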
And so we might erroneously think that this is c times the determinant of A, which is not the case whatsoever. The determinant is a multilinear function, so you have to pull out factors from each row individually, not from the whole matrix. It's multilinear. This also affects how one might try to take the determinant of a sum of two things. These are just some examples of properties of determinants we've seen; can we use them to calculate the determinants of these matrices here? The specific matrices will not be given, just certain information about them — but it's enough to calculate the determinant using these properties. If you need to learn more about properties of determinants, please turn to section 5.2 in the textbook, where there are examples and exercises on those things as well. I always want to remind you that in the textbook, my students were only required to complete the spade questions, those marked with a spade, but there are plenty of extra exercises contributed by myself or by past students. Those exercises make good examples and good practice as you're studying for this exam, so take a look at those ones as well. On question number four, you're going to be asked to compute the orthogonal projection of specific vectors. If you need some review on that one — this is an older topic for this exam — you can go back to section 4.5 for the specific formulas about orthogonal projection. But remember, as we're doing these orthogonal projections, this was using those linear combinations with the Fourier coefficients, right? So if you have an orthogonal basis for your vector space W, the idea is that the projection onto W of the vector y will look like the sum where i ranges from one to, say, r, where you take the vectors in the basis and form u_i dot y.
And this sits above u_i dot u_i, times u_i, like so. This fraction right here, with the inner products on top and bottom — these are our Fourier coefficients. They're the coefficients that show up inside of this orthogonal projection. So that's the scalar involved in these linear combinations, and then you scale by the vector u_i. And what is u_i? Our vector space W is the span of these things right here: u_1, u_2, up to u_r, things like that. And for this to be effective, this has to be an orthogonal basis. It doesn't have to be an orthonormal basis, but it does have to be orthogonal. You will notice that in this example, there is only one vector that spans all of W. Well, as long as that's not the zero vector — which it ain't, you can see right here it's not the zero vector — a single vector by itself does form an orthogonal basis, and so we can calculate the orthogonal projection. You want to know this formula; put it on your note card as you're preparing for this exam. That might be true for some of these other properties and formulas we saw in previous exercises too, like those properties of determinants — you might want to put those on your formula sheet as well. And so then our last — well, I guess actually there's one more question, take that back. Question five on this first page is going to ask you about properties of eigenvalues and eigenvectors; in particular, this question comes from section 6.4 about symmetric matrices and Hermitian matrices. For example, if we have a symmetric matrix, we are guaranteed that the eigenvalues are going to be real. That's also true for Hermitian matrices: we can guarantee the eigenvalues are real. Another thing to look out for is triangular matrices.
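Before moving on, the projection formula for question four can be sketched in Python. The basis vector and y below are made up; note the code assumes the basis is orthogonal, as described above.

```python
# Sketch of proj_W(y) = sum over i of (u_i . y)/(u_i . u_i) * u_i,
# valid when {u_1, ..., u_r} is an orthogonal basis of W. Values are made up.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(y, basis):
    """Project y onto span(basis), assuming basis is orthogonal (not nec. orthonormal)."""
    proj = [0.0] * len(y)
    for u in basis:
        c = dot(u, y) / dot(u, u)          # the Fourier coefficient
        proj = [p + c * ui for p, ui in zip(proj, u)]
    return proj

# One-vector case, as in the exam question: a single nonzero vector is
# automatically an orthogonal basis for its span.
u1 = [1, 2, 2]
y = [3, 0, 0]
print(project(y, [u1]))   # equals (u1.y / u1.u1) u1 = (3/9) u1 = [1/3, 2/3, 2/3]
```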
Remember, if you have a triangular matrix, which includes diagonal matrices, the eigenvalues are just the diagonal entries. So we can see that the eigenvalues of R are immediately one and three, like so. That's something we can pick up really quickly. Spotting symmetric or Hermitian matrices should also be a very quick observation. So question number five is mostly going to have you detect real eigenvalues really quickly; some might be a little more suspicious, but you should be able to find them fast. This is not meant to be a super extensive type of problem. Like I said, if you look at the spectral theorem for symmetric and Hermitian matrices in section 6.4, you should be pretty okay for question number five. Alright, and so now for the real last question on page number one, number six: you'll be asked to compute a determinant. It will probably be nothing too complicated — you see a 2-by-2 right here; maybe I'll ask you a 3-by-3 — it's not meant to be a super difficult calculation. If it's larger than that, like if I give you a 4-by-4, it's probably because it contains a large number of zeros, and you can cofactor expand across one of the rows that has a lot of zeros, so the calculation should be pretty quick. This is meant to be based upon things we talked about in section 5.1. Although we can use row operations to help us calculate determinants in a more efficient manner, that's generally more useful for larger matrices, and you'll see a question like that later on in this exam. So this one should be just a quick one, like a 2-by-2 or 3-by-3, and you can use these diagonal type of tricks to help us out here. Same thing for your 3-by-3, right? So you have like a b c, d e f, h g i — I hope I didn't skip a letter. Oh, I guess g generally comes before h in the alphabet. Whoopsie daisy.
But anyways, we've seen this before: for a 3-by-3 matrix, you can copy the first two columns, something like this, and then you look at the diagonals. You take this diagonal, plus that diagonal, plus that diagonal, and you add those together; then you subtract this diagonal and this one — and I forgot one, didn't I, that one right there; there should be three going one way and three going the other way. The ones that go from the top left to the bottom right are the products you add together, and those that go from the top right to the bottom left — in these videos they're blue right now — are the products you subtract. So again, these are easy determinant calculations, 2-by-2 and 3-by-3. Alright, and so the final question in the multiple choice is a question about cross products: can you do a calculation involving cross products? For example, this one right here: can you calculate the cross product 2u cross 3u, if the only thing we know about u is its length? You should know the formula for cross products, and you should also know properties of cross products as they're listed in the textbook; in particular, you want to go to section 5.4 if you want some more practice on cross products of any kind. And that brings us to the end of the multiple choice section. In the short response section, there are three questions, worth six points each. Remember, for the short response questions, please, please, please put your answer here on the line, right? This is where it goes; that's what it's for. If you're a good student and you want to be successful here, put your answer here on the blank, right? There is space provided, and that's intentional; the point of the space here is for scratch work.
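Going back to that "copy the first two columns and multiply diagonals" trick for a moment — it is often called the rule of Sarrus — here is a sketch of it in Python, on a made-up 3-by-3 matrix.

```python
# Rule of Sarrus for a 3x3 determinant: add the three top-left-to-bottom-right
# diagonal products, subtract the three top-right-to-bottom-left products.

def det3_sarrus(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    down = a * e * i + b * f * g + c * d * h   # top-left to bottom-right
    up = c * e * g + a * f * h + b * d * i     # top-right to bottom-left
    return down - up

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]   # made-up matrix
print(det3_sarrus(M))   # 1*5*10 + 2*6*7 + 3*4*8 - (3*5*7 + 1*6*8 + 2*4*10) = -3
```

Remember this shortcut only works for 3-by-3 (and the ad − bc version for 2-by-2); it does not generalize to 4-by-4 and up.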
Unlike the multiple choice, there might be a little bit more to the calculation here. You are welcome to put your scratch work in this area on the test. You do not need to show work to get full credit. But if you do show your work and the response you put on the line is incorrect, then you can receive partial credit based on the scratch work provided. So do make sure you add some of that. Question number eight: given some vectors, can we determine the angle between them? This comes from the dot product; we can use the dot product to help us find the angles between these things. There's the law of cosines we've seen for vectors, which tells us that the dot product of two vectors u and v is equal to the length of u times the length of v times the cosine of the angle between the two vectors. Using this formula, we can solve for theta — and do put this in degrees, the instructions ask for degrees. You can use a graphing calculator on this exam, but if you don't have a calculator, you can write your answer as arccosine of whatever it is, because at that point I don't really care. But just for ease of grading, I'd prefer you to put it in your calculator and give me the degree estimate, rounded to the nearest tenth of a degree. On question number nine, you will be asked to compute the matrix representation of a linear transformation, given two different bases, C and B right here. This is the type of thing we did in the very last section, 6.5, about linear transformations and similarity. I should have mentioned that the previous question, with the law of cosines, showed up in section 4.4 of the textbook, so you can go look there for some extra practice.
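That angle computation — solve u · v = |u||v|cos θ for θ and convert to degrees — can be sketched like this. The vectors below are made up for illustration.

```python
# Angle between two vectors from the dot product:
# theta = arccos( (u . v) / (|u| |v|) ), reported in degrees.
import math

def angle_degrees(u, v):
    d = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(d / (nu * nv)))

print(round(angle_degrees([1, 0], [1, 1]), 1))   # 45.0
print(round(angle_degrees([1, 0], [0, 1]), 1))   # 90.0
```

Rounding to one decimal place matches the "nearest tenth of a degree" instruction.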
But returning to question number nine: can you find the matrix representation given two bases? And the direction does matter — this is going from B into C. So remember, how exactly do we compute something like that? Well, if you want to, you can begin by calculating the standard matrix. So say A is the standard matrix of your transformation, using the standard basis of e's. We want to compute the matrix of T with respect to C and B — I'm getting a little tongue-tied here, sorry. I don't have it in front of me right now, but we do have the formulas in section 6.5 that talk about this change of basis. Basically, what you want to do is augment: the basis you're turning into goes on the left, and on the right-hand side you look at what A times the other basis does. What I mean by this notation — it's kind of funky notation, because A is a matrix and B is a set of vectors — is that I want you to write the image of each and every basis vector. So you're going to get A times b_1, A times b_2, all the way down to A times b_n. Do all of those. That's if you have your standard matrix. If not, you can also just look at T of b_1, T of b_2, et cetera. That's what we mean by this A times B right here: take the basis on the right and act on it by the linear transformation, either using the map T directly or using the standard matrix that comes from it, whichever you prefer. So taking that — C, which is the new basis, the basis on the left, augmented with A times B right there — you're going to reduce this thing. You'll get the identity matrix here, and then augmented over here is... oh boy, there's not enough space.
Slide it over a little bit. The identity, and then on the right-hand side will be your matrix of T with respect to B and C coordinates. Now, I should mention that these matrices might not necessarily be square. For example, this one right here won't be a square matrix, so you might get rows of zeros popping up here. You can ignore those; don't worry about that, not a big deal. Like I said, you can see some more examples of this in 6.5, so do check that out if you have any questions. And that takes care of the short response section; that's all I was going to say about that. So, the last six questions show up in the free response. Remember, for the free response, you do need to show your work in the space provided here. Without work, I can't give you any credit, even if you have the correct answer; I do need to see the work in order to give any credit for these questions, so show all the work that's necessary. Question number 11 is worth eight points. You'll be asked to compute a determinant, and this will be a larger determinant — we're talking like 4-by-4, 5-by-5, 6-by-6, something like that. This is in line with the type of questions we did in section 5.2. That is, you're going to want to row reduce this thing to the RREF or some echelon form of the matrix, because if you reduce a square matrix to echelon form, it'll be upper triangular, and therefore you can take the product of the diagonal entries. But remember, as you row reduce, there's a cost to be paid. If you do replacements — adding a multiple of one row to another — these are free; there's nothing you have to worry about. So replacements are free. If you ever do any interchanges, do be aware that each one affects the determinant by a negative sign, so you'll change the sign of the determinant.
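The row-reduction strategy with that bookkeeping — replacements free, each interchange flipping the sign, then multiplying the diagonal of the resulting triangular matrix — can be sketched as follows. The matrix at the bottom is made up, and this sketch gets by with only interchanges and replacements.

```python
# Determinant by row reduction: replacements are free, each row interchange
# flips the sign, and the reduced (upper triangular) matrix's diagonal
# multiplies out to the determinant.

def det_by_row_reduction(M):
    A = [row[:] for row in M]      # work on a copy
    n = len(A)
    det = 1.0
    for col in range(n):
        # Interchange if the pivot entry is zero (costs a sign flip).
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0             # no pivot in this column => determinant 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det
        # Replacements below the pivot are free.
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    for i in range(n):             # now upper triangular: multiply the diagonal
        det *= A[i][i]
    return det

M = [[0, 2, 1],
     [1, 1, 1],
     [2, 0, 3]]    # made-up matrix
print(det_by_row_reduction(M))   # -4.0
```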
And if there's any scaling — well, there's a way of handling it, but personally, what I think of is this: when it comes to scaling, you want to factor the scalar out of the row, and if you do that, you'll handle yourself just fine. This is using the multilinearity of the determinant. And you do need to show all steps right here. Many of you will probably have a graphing calculator, and calculating determinants of matrices is a standard graphing calculator function, but I have to see the steps; otherwise I can't give you any credit, even if you write down the correct determinant. I have to make sure that you know how the calculation works, and not just your calculator. But by all means, check your answer with the calculator; I'm perfectly happy with that. Question number 12 is worth seven points. Here, you'll be given a matrix, and you'll be given an eigenvalue of said matrix, so you don't have to determine it — this is the real McCoy, a genuine eigenvalue for the matrix. And you'll be asked to compute a basis for the eigenspace associated to this eigenvalue. This is something we did in section 6.1 of the textbook. What you have to do is find a basis for the null space of A minus lambda I, where in this case lambda is the eigenvalue they give you — lambda here is a two — and A, of course, is the matrix they give you; plug that in right there. So you take A minus lambda I and calculate its null space. The calculation would look something like this: A minus lambda I, which you reduce to some echelon form, like so. If you don't want to show the steps of row reduction — if you use a calculator to calculate the RREF — I'm fine with that; I don't need to see the row reduction there. But once you get to the RREF, you need to pull out your basis.
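As a quick sketch of that eigenspace calculation, here is a made-up example (the matrix A and the eigenvalue λ = 2 below are illustrative, not the exam's): form A − λI, note its echelon form, read off a null space basis vector from the free variable, and verify it.

```python
# Eigenspace sketch: for made-up A = [[3, 1], [0, 2]] with given eigenvalue
# lambda = 2, the eigenspace is Null(A - 2I).

lam = 2
A = [[3, 1],
     [0, 2]]
# Form B = A - lam * I:
B = [[A[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]
print(B)     # [[1, 1], [0, 0]] -- already in echelon form

# Free variable x2 = t forces x1 = -x2, so a basis for the eigenspace is {(-1, 1)}.
v = [-1, 1]
Bv = [sum(b * x for b, x in zip(row, v)) for row in B]
print(Bv)    # [0, 0]: (A - lam I) v = 0, so v really spans the eigenspace
```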
And since the eigenspace — the lambda-eigenspace right here — is a null space, finding a basis for the eigenspace means finding a basis for a null space. While that is review, we did talk about it in section 4.6 when we discussed all four fundamental spaces of a matrix. Originally, we had talked about it back in — oh boy, when was it — sections 2.5 and 2.6, which talk about finding the basis of a null space. We've done it many times; you might not need a review, but in case you do, those are places you can go in the textbook to find those things. On question number 13, you'll be given a matrix A, and you'll be given some information about eigenpairs. That is, you're given an eigenvalue, like this negative one here, and a corresponding eigenvector; you'll be given another eigenvalue and its corresponding eigenvector; and this can repeat depending on how big the matrix is. So then, given this information on eigenvalues and eigenvectors, you're asked to find the diagonalization of the matrix A, which in particular means we have to find matrices P, D, and P inverse so that A equals P times D times P inverse. And when you're doing this, remember that in the matrix P you're going to stick the eigenvectors of the matrix, which, as these are given to you, shouldn't be too hard to find, right? The eigenvectors are exactly these things that were given to you; that's what we want to put in there. The next matrix is D. This will be a diagonal matrix with your eigenvalues along the diagonal and everything else zero; these diagonal entries are the eigenvalues. So please recognize that you have those eigenvalues given to you — here, we're talking the negative one and the three.
These things are going in here. And make sure that however you place the eigenvalues and the eigenvectors, they go in the same order in D and P; otherwise you won't have a legitimate factorization. Then the last matrix is P inverse. Well, to find P inverse, since we have P, my recommendation is to just apply the inversion algorithm we've seen before. How do you find the inverse of a matrix? We had talked about this in section 3.4. Now again, your standard graphing calculators have a function for computing inverses of matrices, and I do want to see a little bit of the work here. What I would want to see is: you take P, augment the identity, and row reduce that into the identity augmented with P inverse. I don't need to see the row operations — you can omit those here — but I do want to see that you understand this principle, and then that will give us P inverse right here.
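Once P, D, and P inverse are assembled, it is worth multiplying them back together to check the factorization. Here is a sketch on a made-up 2-by-2 example, with assumed eigenpairs (λ = 3, v = (1, 1)) and (λ = 1, v = (1, −1)); note the columns of P and the entries of D use the same order, as stressed above.

```python
# Checking a diagonalization A = P D P^{-1} on a made-up 2x2 example.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula (assumes det != 0)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

P = [[1, 1],
     [1, -1]]   # eigenvectors as columns, in the chosen order
D = [[3, 0],
     [0, 1]]    # matching eigenvalues on the diagonal, same order
A = mat_mul(mat_mul(P, D), inv2(P))
print(A)        # [[2.0, 1.0], [1.0, 2.0]] -- the matrix these eigenpairs came from
```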
So, given information about the eigenvalues and eigenvectors of a matrix, can you find its diagonalization? This is the type of stuff we were talking about in section 6.3, entitled Diagonalization. You will not be required to do the whole enchilada, but you do have to wrap the tortilla, so to speak — the last part of it, you have to know how to do. That's question number 13. Number 14 should be down here: can we solve the least squares problem? You'll be given a linear system of equations; it's probably inconsistent — most definitely, I'm going to make it inconsistent — and I want you to solve the least squares problem, finding the least squares solutions, of which there could be multiple. So you'll be given this system Ax equals b. Remember, to solve the least squares problem, you switch over to the normal equations: multiply the left and right sides by A transpose, and then solve that linear system. The solutions to the normal equations give us the least squares solutions, this x hat, and then you report x hat there. If there's a unique solution, tell me what it is; if there are multiple solutions, write the general least squares solution. This is what we talked about in section 4.8. On the last page, we have question number 15, worth 7 points. Given a matrix, you'll be asked to compute the eigenvalues of said matrix. My recommendation is to first compute the characteristic polynomial of the matrix. Remember, for the characteristic polynomial, you take the determinant of A minus lambda I, treating lambda as a variable, right? And, you know, sometimes, if you wanted to, you could take the determinant of lambda I minus A instead — I have my preference, but it makes no difference; they'll give you the same things. That's what we have to do for this 2-by-2: calculate this, which will then give us some polynomial, f of lambda. For this one it's quadratic, so it looks like some a lambda squared plus b lambda plus c, something like that. Set it equal to zero and solve for lambda — lambda equals yada yada — and those are the eigenvalues you're looking for. So use the characteristic polynomial; this is in line with exercises we did in section 6.2. Alright, question number 16. This question is about the Gram-Schmidt process. For Gram-Schmidt, remember, you have to find an orthogonal basis from the basis provided right here. So you have vectors x1, x2, x3, and you want to orthogonalize that basis. This is the type of stuff we did in section 4.7, so look there for some more practice. Remember, there is an algorithm for this, and it's recursive. Your first vector, you don't do anything to it; you just leave it alone. For the second vector, v2, you take the second vector and subtract from it its orthogonal projection, which I'll just write out here: we take v1 dot x2 over v1 dot v1, and multiply that by v1, like so. Then v3 would equal x3 minus v1 dot x3 over v1 dot v1 times v1, and then also subtract from that v2 dot x3 over v2 dot v2 — and don't forget to put in the v2 over there. And then you continue doing this if you have more and more vectors. For real vectors it's not so bad, but for complex vectors, do remember that the order of your Hermitian product does matter, so make sure the x shows up on the right and the v shows up on the left to get that correct. This will help you find an orthogonal basis. If you have to find an orthonormal basis, do be aware that you have to normalize the vectors: the Gram-Schmidt process does not necessarily give us unit vectors, but we can always normalize when the process is over. So do make sure you look for that before you decide you're done. This one only asks for an orthogonal basis, so it's not a big deal, but be aware that it might ask for orthonormal or orthogonal, depending on the instructions right there. And so that gets us to the end of the
exam, oddly enough. And I apologize — I think I forgot a question, so we're going to go back up to the short response. My spider-sense has been tingling that I only did two questions, and I think it's probably because I forgot to scroll to the bottom of the page with the short response. So if you've been paying attention, number 10 was missed; I apologize for that. Question number 10 in the short response is a question about the left null space of a matrix. A matrix A will be given to you, and you'll also be given this information: if you take the matrix A and augment the appropriately sized identity matrix, it will reduce into this thing right here. You are expected to find a basis for the left null space. If you take A augmented with the identity and reduce it to some echelon form like we see right here — this is in fact row reduced, you have the pivots here in the first and second columns; this is an REF of A augment I, and you can honestly get away with any echelon form — then to find a basis of the left null space, all you have to do is grab the vectors on the right-hand side that coincide with rows of zeros. And so your basis will just be those two vectors right there; row vectors or column vectors, whichever you prefer in that case. So, whoopsie daisy, we almost missed that one, but that should get us through all 16 questions on this exam. This does cover the main important topics from chapter 4, chapter 5, and chapter 6 — orthogonality, determinants, and eigenvalues — and this is going to be our last exam for the semester. As always, if you have any questions, feel free to reach out to me. You can, of course, just leave comments on the video below, and I'll be glad to answer them there, or you can come to office hours or send me an email, and I'll be glad to help you as you're studying for this test. Remember, the solutions to this test can be found at the end of the practice exam document, and the test you will see for exam 3 will be very similar in structure to what
you see in front of you right now. Alright everyone, best of luck in your studying, and I will see you later. Have a great day everyone. Bye!
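As one last study aid, the Gram-Schmidt recursion from question 16 can be sketched in Python for real vectors. The input vectors below are made up for illustration; remember this produces an orthogonal basis, not an orthonormal one, so normalize afterward if the instructions ask for orthonormal.

```python
# Gram-Schmidt for real vectors: v1 = x1, and each later v_k is x_k minus its
# projections onto the earlier v's. Orthogonalizes but does NOT normalize.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent real vectors."""
    basis = []
    for x in vectors:
        v = list(x)
        for u in basis:
            c = dot(u, x) / dot(u, u)            # projection coefficient
            v = [vi - c * ui for vi, ui in zip(v, u)]
        basis.append(v)
    return basis

xs = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]           # made-up starting basis
vs = gram_schmidt(xs)
print(vs)
# Every pair in the result should now be orthogonal:
print(all(abs(dot(vs[i], vs[j])) < 1e-9 for i in range(3) for j in range(i)))  # True
```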