Welcome back everyone. In the previous video we saw an example of how to calculate a matrix representation of a linear transformation relative to a choice of bases for the domain V and the codomain W. But if you choose different bases for V and W, and there are lots of different bases you could choose for these two vector spaces, you will in general get a different matrix representation. So how much do these matrix representations change, and how much do they stay the same? It turns out that, up to a point, they do stay the same: the representation depends on the bases chosen, but only up to a point, and this theorem explains what we mean by that. Suppose we have a linear transformation T from V to W, and we have two different matrix representations A and A' of that same transformation T. Assume W is m-dimensional and V is n-dimensional. Then there exists an m-by-m non-singular matrix P and an n-by-n non-singular matrix Q such that A = P A' Q^(-1). In other words, there is a way of factoring one matrix representation into the other using these non-singular matrices. And since Q is non-singular, you can rewrite the equation as A Q = P A'. Again, P and Q aren't just any matrices: they are non-singular, so each has an inverse. The basic idea behind the proof is that the two representations came from two different choices of bases. Say A is the matrix representation of T where the domain is written in B coordinates and the codomain in C coordinates, and A' is the representation where the domain is written in B' coordinates and the codomain in C' coordinates, so you have two different coordinate systems. Then, starting with Q: Q is just the change of basis matrix that takes B' coordinates to B coordinates, and P is the change of basis matrix that takes C' coordinates to C coordinates. So we can pass from one matrix representation to another by an appropriate change of coordinates, a change of basis; that's how these things are connected. Now, there's a really important special case of this that I want to mention right now: what if the linear transformation goes from a vector space back to itself, so the start and the stop are the same space V? If you're starting and stopping in the same space V, then you can choose the same basis for both the domain and the codomain; call that basis B. Then we can talk about the matrix representation relative to a single basis, and we'll write A = [T]_B. This matrix has the property that if you multiply it by a vector written in B coordinates, you get the output in those same B coordinates.
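If you want to check the theorem on a computer, here is a small NumPy sketch; the particular map and the non-standard bases in it are just made up for illustration. We write down the representation A in the standard bases, the representation A' in the made-up bases, and verify that A = P A' Q^(-1), where P and Q are the change of basis matrices.

import numpy as np

# Standard matrix of a sample map T : R^2 -> R^3 (made up for illustration)
M = np.array([[1.,  2.],
              [0.,  1.],
              [3., -1.]])

# A = the representation of T using the standard bases B (domain) and C (codomain)
A = M

# Made-up non-standard bases B' of R^2 and C' of R^3, stored as columns
B_prime = np.array([[1., 1.],
                    [0., 2.]])
C_prime = np.array([[1., 0., 1.],
                    [1., 1., 0.],
                    [0., 1., 1.]])

# A' = the representation of T relative to B' and C': convert B'-coordinates
# to standard, apply T, then convert the output into C'-coordinates
A_prime = np.linalg.inv(C_prime) @ M @ B_prime

# Change of basis matrices: Q takes B'-coordinates to B-coordinates, and
# P takes C'-coordinates to C-coordinates.  Since B and C are the standard
# bases here, Q and P are just B_prime and C_prime themselves.
Q, P = B_prime, C_prime

# The theorem: A = P A' Q^(-1), or equivalently A Q = P A'
print(np.allclose(A, P @ A_prime @ np.linalg.inv(Q)))   # True
print(np.allclose(A @ Q, P @ A_prime))                  # True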
A really critical example of this phenomenon is the derivative, which we're all familiar with from Calculus 1. So let's construct some vector spaces to go with the derivative. Take, for example, a polynomial space; we'll focus on cubic polynomials for a moment. Take the vector space spanned by the monomials 1, x, x^2, and x^3; that is, the set of all polynomials of the form a0 + a1 x + a2 x^2 + a3 x^3, where a0, a1, a2, a3 are real numbers. We can think of this polynomial space as a vector space, and we can represent its vectors in coordinates: the generic cubic polynomial above can be identified with the column vector (a0, a1, a2, a3). That's what it means to put the vector into B coordinates, where B is the basis {1, x, x^2, x^3}. Okay, so what does the derivative look like here? Take the derivative of each basis element: the derivative of the constant 1 is 0, the derivative of x is 1, the derivative of x^2 is 2x, and the derivative of x^3 is 3x^2. Now think about this in coordinates. The constant 1 is the vector (1, 0, 0, 0), and applying d/dx sends it to the zero vector. Next, x is the vector (0, 1, 0, 0), and it maps to (1, 0, 0, 0), because the derivative of x, the second vector in our basis, is 1, which is the first vector in the basis. In coordinates x^2 looks like (0, 0, 1, 0), and its derivative becomes (0, 2, 0, 0): the derivative of x^2 is 2x, which is two times the second basis element. Lastly, x^3 in coordinates is (0, 0, 0, 1), and its derivative is (0, 0, 3, 0). Putting these output columns into a matrix, the derivative with respect to B coordinates is the 4-by-4 matrix

[ 0  1  0  0 ]
[ 0  0  2  0 ]
[ 0  0  0  3 ]
[ 0  0  0  0 ]

So the derivative has a matrix; on this space the derivative really is a matrix. It's an upper triangular matrix, in fact strictly upper triangular, so it's nilpotent and its eigenvalues are all zero: zero shows up four times along the diagonal. And this analysis of the matrix says something about the derivative map itself, because after all, the derivative is a linear operation, so we can represent it as a matrix.
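Again, for anyone following along in code, here is a minimal NumPy sketch of this example; the sample polynomial p below is made up. We build the 4-by-4 derivative matrix in the basis {1, x, x^2, x^3}, differentiate a cubic with one matrix-vector product, and confirm the matrix is nilpotent.

import numpy as np

# Derivative matrix in the basis B = {1, x, x^2, x^3}:
# column j holds the B-coordinates of the derivative of the j-th basis element
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

# A made-up sample cubic, p(x) = 4 + 3x - 2x^2 + 5x^3, in B-coordinates
p = np.array([4., 3., -2., 5.])

# Differentiating is just a matrix-vector product:
# p'(x) = 3 - 4x + 15x^2, i.e. the coordinate vector (3, -4, 15, 0)
print(D @ p)                           # [ 3. -4. 15.  0.]

# D is strictly upper triangular, hence nilpotent: D^4 is the zero matrix
print(np.linalg.matrix_power(D, 4))

# and its eigenvalues are all zero (up to rounding)
print(np.linalg.eigvals(D))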
Let's look at one more example. Take the space W to be the span of sin(x) and cos(x), and call these C coordinates this time. What's the derivative of sine? We know from basic calculus that the derivative of sin(x) is cos(x), and the derivative of cos(x) is -sin(x). So if we want to think of the derivative in terms of C coordinates, in terms of just sine and cosine: the derivative of sine is cosine, which gives you the column (0, 1), and the derivative of cosine is negative sine, which gives you the column (-1, 0). Put together, the derivative in C coordinates is the matrix

[ 0  -1 ]
[ 1   0 ]

And so the derivative map here is kind of curious: it's a rotation matrix. If you take the derivative on the span of sine and cosine, you end up with a rotation matrix, rotation by 90 degrees counterclockwise. It has always kind of blown my mind that the matrix representation of the derivative on trigonometric functions is a rotation matrix. And this isn't some coincidence; it says that the derivative is a linear operation and that we can represent these linear transformations as matrices, which is a really cool feature. Now, a corollary to the theorem we stated earlier: if you take a linear transformation from V into V, then you can use the exact same basis for the domain and the codomain. And if A and B are two different matrix representations of that transformation, each relative to a single basis, then there is a single non-singular matrix P so that A factors as A = P B P^(-1). This is just a special case of the previous theorem, where you can set P equal to Q. In particular, this shows that if you look at matrix representations that use the same basis for the domain and the codomain, then any two such representations have to be similar to each other. And because they're similar, anything that's invariant over a similarity class must always be the same. So we can talk about the eigenvalues of a linear transformation, because similar matrices have the same eigenvalues. We can talk about the determinant of a linear transformation, the trace, the rank. These are words that usually describe properties of a matrix, but we can also use them for a linear transformation, because it doesn't matter which matrix representation you choose: the eigenvalues don't change, the determinant doesn't change, the trace doesn't change. These are all things that are constant over the similarity class.
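Here's a small NumPy sketch of that invariance claim; the matrix M and the change of basis matrix P below are made up for illustration. Conjugating by any non-singular P changes the matrix entries, but not the eigenvalues, trace, determinant, or rank.

import numpy as np

# A made-up matrix M (one representation) and a made-up non-singular
# change of basis matrix P
M = np.array([[2., 1.,  5.],
              [0., 3., -1.],
              [0., 0.,  7.]])
P = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# N is similar to M: it represents the same transformation in another basis
N = P @ M @ np.linalg.inv(P)

# The similarity-class invariants agree even though the entries look different
print(np.allclose(np.sort(np.linalg.eigvals(M)), np.sort(np.linalg.eigvals(N))))  # True
print(np.isclose(np.trace(M), np.trace(N)))                                       # True
print(np.isclose(np.linalg.det(M), np.linalg.det(N)))                             # True
print(np.linalg.matrix_rank(M) == np.linalg.matrix_rank(N))                       # True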
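And going back to the sine and cosine example for a moment, here's the same kind of quick check for the rotation picture; the sample function f below is made up. In C coordinates, a function a sin(x) + b cos(x) is the vector (a, b), and differentiating it is just multiplication by the 90-degree rotation matrix.

import numpy as np

# Derivative matrix in C = {sin x, cos x} coordinates:
# d/dx(sin x) = cos x gives the column (0, 1); d/dx(cos x) = -sin x gives (-1, 0)
R = np.array([[0., -1.],
              [1.,  0.]])

# A made-up sample function f(x) = 3 sin(x) + 2 cos(x), i.e. (3, 2) in C coordinates
f = np.array([3., 2.])

# f'(x) = -2 sin(x) + 3 cos(x), i.e. the coordinate vector (-2, 3)
print(R @ f)                           # [-2.  3.]

# Rotating four times brings you back to where you started: differentiating
# sin or cos four times returns the original function, so R^4 is the identity
print(np.linalg.matrix_power(R, 4))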
So as a final example, let's consider a map T from R^3 to R^3 given by the following rule: T sends (x, y, z) to the vector (5x + 2y, 2x + 3y - z, 26x + 16y - 2z). If you write down the matrix representation of this using the standard basis of R^3, then T looks like the matrix

    [  5   2   0 ]
A = [  2   3  -1 ]
    [ 26  16  -2 ]

That seems like a perfectly good matrix representation, right? Well, what if we come along and want to use a non-standard basis? Say we use the basis B whose first vector is (2, -3, 1), whose second vector is (-1, 2, 2), and whose third vector is (-1, 1, -2). What would the matrix representation be with respect to this non-standard basis? Well, like we had to do earlier, we have to figure out what A b1, A b2, and A b3 look like when we put them into B coordinates. So we row reduce the augmented matrix [B | AB]: on the left, the basis vectors of B as columns, and on the right, the image of that basis when you multiply it by A. I'm not going to go through the details of the row reduction, but if you carry it out, then B, since it's a basis, reduces to the identity, and the right-hand block reduces to a diagonal matrix whose diagonal entries are two, one, and three. That diagonal matrix is the matrix representation of T with respect to B coordinates. And notice that it's a diagonal matrix, and multiplying by a diagonal matrix is much, much easier than multiplying by the matrix we saw before. Why does this matrix representation come out so much simpler than the standard one? The standard representation came from using the standard basis e1, e2, e3. This representation came from using an eigenbasis: the vectors b1, b2, b3 form an eigenbasis. If you take A times b1, you end up with two times b1; A times b2 is one times b2; and A times b3 is three times b3. So because you have an eigenbasis, instead of representing the transformation by its standard matrix, you can represent it by its diagonalization, this diagonal matrix.
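One last NumPy sketch, so you can check these numbers yourself. Solving B X = A B is the same computation as row reducing [B | AB], and it recovers the diagonal matrix diag(2, 1, 3); the loop at the end confirms that b1, b2, b3 really are eigenvectors with eigenvalues 2, 1, and 3.

import numpy as np

# Standard matrix of T : R^3 -> R^3
A = np.array([[ 5.,  2.,  0.],
              [ 2.,  3., -1.],
              [26., 16., -2.]])

# The non-standard basis, stored as the columns b1, b2, b3
B = np.array([[ 2., -1., -1.],
              [-3.,  2.,  1.],
              [ 1.,  2., -2.]])

# Row reducing [B | A B] amounts to solving B X = A B for X,
# i.e. writing each A b_i in B-coordinates
rep_B = np.linalg.solve(B, A @ B)
print(np.round(rep_B, 10))     # the diagonal matrix diag(2, 1, 3)

# b1, b2, b3 really are eigenvectors: A b1 = 2 b1, A b2 = 1 b2, A b3 = 3 b3
for i, lam in enumerate([2., 1., 3.]):
    print(np.allclose(A @ B[:, i], lam * B[:, i]))   # True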
From a computational point of view, since we can always change from one coordinate system to another, we can solve the problem in whichever coordinate system is easier. To use another calculus analogy: in Calculus 2, and in Calculus 3 as well, you're sometimes handed a problem in one coordinate system, but the problem is a lot easier to do in a different coordinate system. So instead of using Cartesian coordinates, sometimes you switch over to polar coordinates, or maybe you want to use spherical or cylindrical coordinates, things like that. You change your problem to a more convenient coordinate system. Here, instead of a three-by-three matrix, which takes nine numbers to describe, we can get away with just three numbers to do the work of that same transformation. This is why in calculus we change variables all the time, and it's also why in calculus, especially multivariable calculus, the Jacobian comes into play when you change variables. The Jacobian is the determinant of the derivative of the change of variables as you switch from one coordinate system to another: the derivative plays the role of the change of basis matrix, and you take its determinant. Why the determinant? Well, as you remember from before, determinants measure area, volume, or hypervolume in whatever dimension you're working in. And since integrals are trying to calculate areas and volumes, the determinant is the linear way of measuring area and volume, while the integral is the non-linear way of calculating those same quantities. Because, after all, calculus is the following thing: you take all the principles we've learned in linear algebra, the linear transformations, the vectors, the matrices, all of that stuff, and you add limits to it. That's what we mean by calculus. And that's sort of the big secret, the meaning-of-life answer, so to speak, when it comes to calculus: you take linear problems and you add limits, and that lets you solve all kinds of non-linear problems with the techniques of calculus. And so I wanted to end our course with this connection to calculus because it's a really nice thing to see. Linear algebra is great because of the many algebraic and geometric applications we've seen here, but to see the bigger picture: many people learn these linear algebraic tools because they're going to continue deeper and deeper into calculus. And you really cannot divorce calculus from linear algebra, in the same way that you couldn't divorce calculus from limits. You need the two together to really master the kinds of problems you run into in numerical analysis, differential equations, statistics, et cetera. And so that brings us to the end of this video and to the end of our course. Great job to everyone who made it all the way to the end; it was good to have you. As always, if you have any questions, feel free to post them in the comments below, on this video or any of the videos you've been watching. Please like this video, and subscribe if you want to see more cool math videos in the future. Just because you've come to the end of your first semester of linear algebra doesn't mean your mathematical journey has ended; really, it's only just beginning. Let's meet up in some future videos and learn some more cool math next time. I hope to see you then. Have a great day everyone, bye.