Continuing from what we were talking about earlier, how multiplication by an orthogonal matrix doesn't change angles or distances, can we generalize that principle a little bit? Let's define what's known as an isometry, sometimes called a rigid motion. An isometry is a function T on our vector space, T : F^n -> F^n, such that the distance between the vectors T(x) and T(y) is the same as the distance between the original vectors x and y. So isometries don't change distances between vectors; distance is preserved entirely. If you look at the word isometry itself, it comes from the Greek and translates as "same measure," and that's exactly what an isometry does: it preserves the distances between vectors.

Now, we just saw a moment ago that multiplication by an orthogonal matrix preserves distances, so multiplication by an orthogonal matrix is an example of an isometry. What else preserves distances? Isometries are the transformations of the plane, or of your vector space, that don't distort anything: shapes, angles, and distances are all nicely preserved. So orthogonal matrices preserve distances; what else does? Another example of an isometry is a translation. Take a vector b inside your vector space; the translation associated to b is the map T_b : F^n -> F^n such that x maps to x + b. Geometrically, the following is happening: you have your vector x, and you have some translation vector b which you add to it, so x gets translated to x + b. Any point in the plane gets moved to a new point by this constant vector b.
That is what a translation does. Now I should mention that translations are not linear transformations, at least in general; let me make a comment about that here. A translation is not linear when the translation vector b is nonzero. The issue is that the translation maps the zero vector to 0 + b = b, which is not zero in this case, while every linear transformation maps zero to zero. Translations don't do that, so translations are genuinely different from linear transformations, but they are still isometries. How do we know that? The idea is basically the following: the distance between two vectors x and y is the length of x - y, but on the other hand, the distance between T(x) and T(y) is the length of (x + b) - (y + b), and you can see the b's cancel out; the translation cancels and you get the same distance. So translations are isometries, multiplication by an orthogonal matrix is an isometry, and we're going to see at the end of this lecture that essentially every isometry is a combination of these two.

Before we do that, let's talk about the titular topic for today: affine transformations. We can generalize the notion of a linear transformation the following way. We have a map T from a vector space F^n to a vector space F^m, and we call it not a linear transformation but an affine transformation if there exists a matrix A, which is an m x n matrix.
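The cancellation argument above can be checked numerically. Here is a minimal sketch; the particular vectors x, y, and b are my own illustrative choices, not from the lecture:

```python
import numpy as np

# An arbitrary translation vector b and two test vectors x, y
# (these particular numbers are illustrative choices).
b = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 0.0, -2.0])
y = np.array([1.0, 5.0, 2.0])

def translate(v):
    """The translation T_b(v) = v + b."""
    return v + b

# dist(T(x), T(y)) = ||(x + b) - (y + b)|| = ||x - y|| = dist(x, y):
# the b's cancel, so the translation preserves distance.
d_before = np.linalg.norm(x - y)
d_after = np.linalg.norm(translate(x) - translate(y))
print(np.isclose(d_before, d_after))  # True
```

The same check works for any choice of x, y, and b, which is exactly the point: the cancellation does not depend on the specific vectors.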
These dimensions correspond to the spaces involved: we have an m x n matrix A and a vector b which lives in F^m, the codomain of the map, so that T(x) = Ax + b. Here x is a vector in F^n; multiplication by A takes you from F^n to F^m, and then you add a vector of F^m to that, and this gives you the affine transformation Ax + b. Now, we've seen that linear maps send x to Ax, where A is the standard matrix of the linear transformation. Affine transformations are essentially doing the same thing; to get something affine you just add on this extra translation vector. So in general your affine map won't be linear if the translation is nonzero; an affine map is linear if and only if the translation is zero.

Let's take a look at an example. Consider the affine map from R^3 to R^3 associated to the matrix A with rows (3, -1, 1), (-3, 2, 0), (6, -3, 2), and the translation vector b = (1, 2, 3). As an example, let's compute the image of (2, 0, -1). What this means is we take the matrix A times the vector (2, 0, -1) and add the vector b to it. If we go through the multiplication: the first row gives 6 + 0 - 1 = 5, the second row gives -6 + 0 + 0 = -6, and the third row gives 12 + 0 - 2 = 10. So the matrix product simplifies to (5, -6, 10); add to that the vector (1, 2, 3), do the final arithmetic combining the two vectors, and
you're going to get (6, -4, 13). So the image of (2, 0, -1) under this affine map is (6, -4, 13); it's a basic calculation, and computing images just adds the extra step of a translation.

All right, another question: is the vector y = (2, -2, 4) inside the image of T? Even though we're talking about an affine transformation now, the notion of image makes sense for any function. What this means is we have to solve the equation Ax + b = (2, -2, 4). Can we solve this? If you subtract the translation vector from both sides, this comes down to the matrix equation Ax = (2, -2, 4) - b. More specifically, we have to row reduce an augmented matrix whose coefficient part is just A: first row (3, -1, 1), second row (-3, 2, 0), third row (6, -3, 2). In the final column you take the target vector and subtract b: 2 - 1 = 1, -2 - 2 = -4, and 4 - 3 = 1. So it really just comes down to row reducing the augmented matrix [A | (1, -4, 1)]. If we do that, you'll find that the matrix A is actually nonsingular, it row reduces to the identity, and what you end up with is the solution (2, 1, -4). So we see that the answer to the original question is yes, y is in the image of T, because T(2, 1, -4) = y. I'll let you verify that fact; pause the video if you need to double-check the multiplication
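Both computations in the example, the image of (2, 0, -1) and the preimage of y = (2, -2, 4), can be verified in a few lines of NumPy. This is just a sketch of the arithmetic above:

```python
import numpy as np

# The matrix A and translation vector b from the example.
A = np.array([[3, -1, 1],
              [-3, 2, 0],
              [6, -3, 2]])
b = np.array([1, 2, 3])

def T(x):
    """The affine map T(x) = Ax + b."""
    return A @ x + b

# Image of (2, 0, -1): should come out to (6, -4, 13).
print(T(np.array([2, 0, -1])))   # [ 6 -4 13]

# Is y = (2, -2, 4) in the image?  Solve Ax = y - b.
y = np.array([2, -2, 4])
x = np.linalg.solve(A, y - b)    # works because A is nonsingular
print(x)                          # [ 2.  1. -4.]
print(np.allclose(T(x), y))       # True: y is in the image
```

Note that `np.linalg.solve` succeeds here only because A is nonsingular; for a singular A, checking membership in the image would require row reduction of the augmented matrix instead.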
there. So we can use a system of linear equations to answer questions about whether something is inside the image. Kernels don't really make much sense for affine transformations, because the kernel is the set of all vectors that map to zero, and because of the translation there's typically just one vector that maps to zero; one could talk about it, but it isn't very interesting. In the situation above, everything came down to the matrix A: you're solving the equation Ax = y - b.

If you're asking questions like "is T one-to-one?" or "is T onto?", the answer again depends entirely on the matrix A, much like how we did it with linear transformations earlier in this course. To check whether T is onto, let y be a generic vector, subtract b from it, and solve the system [A | y - b]; the translation doesn't really matter, and the affine transformation will be onto if A has a pivot in every row. What about one-to-one? The system will have multiple solutions only if A has non-pivot columns, because non-pivot columns give us free variables; so the affine transformation is one-to-one if A has a pivot in every column. You can see that with our matrix A, which is nonsingular, the associated affine transformation is both one-to-one and onto, because A has a pivot in every row and every column. So in some respects affine transformations behave very much like linear transformations: you take x and map it to Ax + b. This will be linear if and only if the
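Since one-to-one corresponds to a pivot in every column and onto to a pivot in every row, both properties can be read off from the rank of A. A minimal sketch using the example's matrix (rank stands in for counting pivots, which is my rephrasing, not the lecture's):

```python
import numpy as np

A = np.array([[3, -1, 1],
              [-3, 2, 0],
              [6, -3, 2]])
m, n = A.shape
rank = np.linalg.matrix_rank(A)

one_to_one = (rank == n)   # pivot in every column: no free variables
onto = (rank == m)         # pivot in every row: every target is reachable
print(one_to_one, onto)    # True True, since A is nonsingular
```

For a square A, both conditions collapse to A being nonsingular, which matches the conclusion in the example.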
translation vector b is zero. Now, it turns out you can extend your space F^n into the larger space F^(n+1), and from this perspective every affine transformation can be realized as a linear transformation on a larger vector space. The idea is the following. Take the augmented matrix [A | b], where A is the coefficient matrix of the affine transformation and b is the translation vector, and add an extra row to it: on the coefficient side a row of zeros, and on the augmented side a 1. So if A is an m x n matrix, this bigger matrix is (m + 1) x (n + 1), since we've added an extra row and an extra column. If you take this matrix and multiply it by the vector (x, 1), that is, x with an extra 1 appended, then by matrix multiplication the top block gives you Ax + b(1) = Ax + b, and the bottom row gives you just 1. So there's this extra 1 that sticks on the bottom the whole time, but you can actually do affine transformations as matrix multiplication; you just need this extra row that acts as a placeholder. Because of that, this augmented matrix is what you refer to as the standard matrix of your affine transformation. It's a little bigger than the dimensions of the vector spaces because you need this extra space.

If we look at the example we did before, here is the exact same matrix A from the previous example, and the same translation vector b, with the extra row of zeros and a 1 appended. Don't panic and think "oh no, this system is inconsistent": the vertical line there is just for
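The block-matrix construction above can be sketched directly: append a row (0 ... 0 | 1) to [A | b], append a 1 to x, and matrix multiplication reproduces Ax + b. Same A and b as in the example:

```python
import numpy as np

A = np.array([[3, -1, 1],
              [-3, 2, 0],
              [6, -3, 2]])
b = np.array([1, 2, 3])

# Standard matrix of the affine map: [A | b] with an extra row (0 0 0 | 1).
M = np.block([[A, b.reshape(-1, 1)],
              [np.zeros((1, A.shape[1])), np.ones((1, 1))]])

x = np.array([2, 0, -1])
x_lifted = np.append(x, 1)   # append the placeholder 1

result = M @ x_lifted
print(result)                                   # [ 6. -4. 13.  1.]
print(np.array_equal(result[:-1], A @ x + b))   # top block is exactly Ax + b
```

This is the same trick used for homogeneous coordinates in computer graphics: the extra coordinate lets a single matrix carry both the linear part and the translation.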
organizational purposes; the matrix A is on the left and the translation b is on the right, and we're not trying to solve a system of equations. If you take this standard matrix and multiply it by the vector (2, 0, -1, 1), the same input as earlier with a 1 appended, you end up with the vector (6, -4, 13, 1). I'll let you double-check that; pause the video if you need to. If you look at just the first three coordinates, you get (6, -4, 13), exactly as before. So affine transformations can be turned into linear transformations if we enlarge the vector space, and affine transformations are linear transformations plus a translation. In some respects affine generalizes linear, and in others linear sits inside affine; it's neither here nor there, the two are very closely related to each other.

All right, so to summarize what we've been talking about today, we'll get to our last theorem for this section, the Mazur-Ulam theorem, which connects the notion of isometries with the orthogonal matrices we saw before. Remember, an isometry is a function on a vector space for which distance is preserved. The theorem says that if a map T is an isometry, then it can be written as an affine transformation T(x) = Ux + b, where b is a vector giving the translation and U is an orthogonal (or unitary) matrix. So every isometry is an affine transformation built from an orthogonal or unitary matrix. One can then use this idea to characterize the isometries of the plane, that is, of R^2, and it's actually pretty impressive: one can show that the plane has only four types of isometries. One is translation, for which U would
in that situation just be the identity. There's rotation around a point in the plane, and reflection across a line in the plane; we've talked about rotations and reflections in the past. If you want to rotate around the origin, you take U to be a rotation matrix, which is orthogonal, and set b to zero; if you want to reflect across, say, the x-axis, you take U to be a reflection matrix like we did in the past, and again set b to zero. If you want to rotate around a point other than the origin, or reflect across a line that doesn't go through the origin, you do have to factor a translation into it. The last possibility is something called a glide reflection. A glide reflection is like footprints in the sand: you get that alternating picture. It's a type of isometry of the plane formed by combining a translation with a reflection; if the line of reflection is parallel to the translation, you get a glide reflection. So it turns out there are only four types of rigid motions of the plane.

It gets a little more complicated the more dimensions you have. In three dimensions you have glide reflections, rotations, ordinary reflections, translations, and such, but you also get things like screw motions, where you rotate around a line while translating along it, kind of like driving a screw into wood. There are some other interesting things there too; I encourage you to look it up online if you want, it's a fun little topic.

That actually concludes Section 4.4. Thank you for listening. If you have any questions or comments, please post in the comments below, and if you haven't already done so, feel free to subscribe to this channel so you can get further updates about linear algebra and other mathematical lectures that I have here at Southern Utah. And if you actually want
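Each of the four plane isometries can be written in the Ux + b form. As one illustration, here is a glide reflection built from reflection across the x-axis plus a translation parallel to that axis (the particular step length is my own choice), with a numerical check that distances are preserved:

```python
import numpy as np

# Reflection across the x-axis: an orthogonal matrix...
U = np.array([[1.0, 0.0],
              [0.0, -1.0]])
# ...combined with a translation parallel to the line of reflection.
b = np.array([2.0, 0.0])

def glide(p):
    """Glide reflection: reflect across the x-axis, then slide along it."""
    return U @ p + b

# U is orthogonal, so this affine map is an isometry: check on two points.
p = np.array([0.0, 1.0])
q = np.array([3.0, -2.0])
print(np.isclose(np.linalg.norm(p - q),
                 np.linalg.norm(glide(p) - glide(q))))  # True

# Applying it twice gives the "footprints" picture: a pure translation by 2b.
print(glide(glide(p)))  # [4. 1.], i.e. p + 2b
```

Composing the glide with itself flips the point back to its original side while sliding it forward twice, which is exactly the alternating footprint pattern.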
to take a look at the book, Linear Algebra Done Openly, there's a link in the comments below. I'll see you next time. Bye.