In our previous video, we talked about the idea of symmetries, symmetries of a mathematical object. And symmetry always leads to the idea of groups: if we collect together all the symmetries of an object, this always forms a group. But I told you that the context matters, right? If we look at it as just a set, that's one symmetry type. If we look at it as a geometry, it's a different symmetry type. If we look at it as an algebra, that's a different symmetry type again. So in this example I want to play around with the mathematical object R^n, because Euclidean space is a beautifully complicated mathematical object with a deep interplay between algebra and topology. For example, as an algebra, you could call R^n a vector space. This is something we study extensively in linear algebra: R^n is a vector space with vector addition and scalar multiplication. When we give R^n the dot product, it becomes a so-called inner product space, which is an even stronger algebraic object. But this inner product leads toward the geometry of R^n. With the dot product, we can discuss lengths and distances and angles. This makes R^n into a metric space, for example; we can talk about distances and lengths. A metric space induces a topology on R^n, by which we can start discussing things like convergent sequences, Cauchy sequences, completeness, compactness, limits, derivatives, et cetera. And so the focus we take on R^n determines what type of symmetries we're talking about. Are we talking about algebraic symmetries? Are we talking about topological symmetries? Or maybe we're talking about both? Because, like I said, the geometry of R^n can be defined using the algebra, and you can go the other way around as well: you can use the geometry to define the algebra. These things interplay with each other, so maybe we want symmetries that measure both.
So the symmetry type of R^n depends exactly on what type of object we're studying at the moment. If we view R^n as a vector space, what does that mean? Well, a vector space is an algebraic object that has two operations. The first operation we call vector addition, which is a binary operation, meaning that we take two vectors from R^n, combine them together, and this produces a third vector from R^n. We also have the notion of scalar multiplication. Now, scalar multiplication is not properly a binary operation, because with scalar multiplication you take a scalar from R, you combine it with a vector from R^n, and this produces a vector from R^n. So this is again not technically a binary operation; this is what we might call an action. To be more precise, this is what we call a ring action: the set of scalars, which here happens to be a field, is acting on this module over here. You don't necessarily have to worry about exactly what those terms mean, but we have this ring action where we're combining one set with another set, not necessarily the same sets anymore, and the result lands back in one of the two sets we started off with, okay? So we have this action in play here. This is still algebraic structure, again not properly a binary operation, but these two operations are central to the algebraic structure of R^n, okay? So the symmetries of R^n, when we view it as a vector space, need to be permutations of R^n which preserve the vector space structure, namely vector addition and scalar multiplication. Maps that preserve vector addition and scalar multiplication are just what we call, in linear algebra, linear transformations. That is, these are exactly the functions on R^n that preserve these operations.
So if we're looking for a map T from R^n back to itself, we need a function such that, if you take two vectors u and v, T(u + v) = T(u) + T(v). That would preserve the vector addition. Let's pause there for a moment: it needs to be a group homomorphism with respect to vector addition. But we also need that T(r v) = r T(v). Remember, with scalar multiplication you take a scalar times a vector and you get back a vector. So what this transformation needs to do is: if you take a scalar times a vector and then act by the map, that's the same thing as acting by the map first, which produces a vector, and then scaling by that value. So the transformation has to be a homomorphism with respect to the group operation of addition, but it also needs to be homogeneous with respect to this ring action that's in play. So the symmetries are going to have to be linear transformations, but they also have to be bijective; each needs to be a permutation, right? So the automorphisms of R^n are going to be those linear transformations which map the vector space back onto itself and are one-to-one and onto. And this produces, in fact, a symmetry group we're already quite used to. We get the so-called general linear group, GL(n, R). This is going to be the set of all n-by-n matrices A that have an inverse, A A^(-1) = I; that is, they have to be non-singular matrices. Hence the general linear group, right? Every linear transformation, and you can show this in linear algebra, can be represented by matrix multiplication. And if you're going from R^n to R^n, that is, if you're looking for some transformation from R^n to R^n that's a permutation.
Well, if you have an m-by-n matrix, matrix multiplication will give you a map from R^n to R^m. So you need the number of columns and rows to be the same so that it becomes a map from R^n to R^n. And so every linear transformation can be represented as a matrix, and if it's going to be one-to-one and onto, the associated matrix has to be non-singular. And so we get the general linear group. So the general linear group is the symmetry group of R^n if we view it as a vector space. Well, what if we switch our focus and start focusing on the inner product space structure of R^n? Well, in addition to the vector addition that we had before, and the scalar multiplication from before, which have to stay the same because an inner product space is a vector space, there's also this bilinear form we call the dot product, which we also write with a dot. Scalar multiplication is usually written as juxtaposition, but the dot product is actually written with a dot. The dot product takes a vector times a vector and produces a scalar. So again, the result doesn't belong to the set the two operands came from. Like I said, this is what's commonly referred to as a bilinear form. We can give the details of what that definition means later, but that's what's going on here. If you just take the first two operations, we get a linear transformation like before, right? But when we put this all together, we're looking for bijective maps on R^n that preserve vector addition and scalar multiplication, which makes them linear transformations, but we also need them to preserve this dot product structure. And if we want something to preserve the dot product, it's got to look something like the following.
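As a quick sanity check on all of this, here is a minimal Python sketch (the helper names `mat_vec` and `det2` and the sample matrix are my own illustrative choices, not anything from the video): a 2-by-2 matrix acts on R^2 as a linear map, satisfying T(u + v) = T(u) + T(v), and it belongs to GL(2, R) exactly when its determinant is nonzero.

```python
def mat_vec(A, x):
    """Apply a 2x2 matrix A to a vector x in R^2."""
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def det2(A):
    """Determinant of a 2x2 matrix; nonzero means A is in GL(2, R)."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = ((2.0, 1.0),
     (1.0, 1.0))                 # det = 1, so A is non-singular: a member of GL(2, R)
u, v = (1.0, 2.0), (3.0, -1.0)

# Additivity: T(u + v) = T(u) + T(v)
lhs = mat_vec(A, (u[0] + v[0], u[1] + v[1]))
rhs = tuple(a + b for a, b in zip(mat_vec(A, u), mat_vec(A, v)))
print(lhs == rhs)     # True
print(det2(A) != 0)   # True: A is a bijection on R^2
```

Homogeneity, T(r v) = r T(v), can be checked the same way; together the two properties are exactly linearity.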
You can represent it by a matrix U, and we need that (Ux) . (Uy) = x . y. That is, we need to find those matrices that preserve the dot product in addition, because matrix multiplication already preserves vector addition and scalar multiplication; we've talked about these before. These are exactly the orthogonal matrices, right? So we need an orthogonal matrix in this setting, and the set of orthogonal matrices is exactly what we previously called O(n), the orthogonal group. So the orthogonal group is the group of symmetries of R^n when we view it as an inner product space. An orthogonal matrix preserves vector addition and scalar multiplication because it's a non-singular matrix, but it also preserves the dot product because it's orthogonal; that was a property of orthogonality we saw previously. Now, from the inner product we can define lengths and norms and distances and angles, and orthogonal matrices preserve all of that. We've mentioned that before. Therefore the orthogonal group is the symmetry group of R^n when we view R^n as both geometric and algebraic. So if R^n is both a geometric and an algebraic object, then O(n) is its symmetry group. If you view it just as an algebraic object, you've got the general linear group as its symmetry group. Well, what if we only want to view R^n as a geometric set? The geometric structure, coming from topology, would be a metric space. Now, a metric space is a set equipped with what's called a metric function, where d is a map from R^n x R^n to R: you take two vectors, combine them together, and produce a real number. This is what we call a metric. It has a similar shape to what we called a bilinear form earlier, but it has different axioms: we need the positive-definite condition, we need the triangle inequality, we need symmetry. And so if we are looking for distance-preserving maps, then we get what we call an isometry.
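To see the defining property (Ux) . (Uy) = x . y in action, here is a small Python sketch using a rotation matrix, which is one standard example of an orthogonal matrix (the helper names `dot` and `mat_vec` and the chosen angle are illustrative):

```python
import math

def mat_vec(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def dot(x, y):
    return x[0] * y[0] + x[1] * y[1]

t = 0.7  # any angle works
U = ((math.cos(t), -math.sin(t)),
     (math.sin(t),  math.cos(t)))   # a rotation: an orthogonal matrix in O(2)

x, y = (1.0, 2.0), (3.0, -1.0)
# The dot product is unchanged (up to floating-point rounding)
print(abs(dot(mat_vec(U, x), mat_vec(U, y)) - dot(x, y)) < 1e-12)   # True
```

Since lengths and angles are defined from the dot product, this one check is why orthogonal matrices preserve all of the geometry at once.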
So an isometry, as the name suggests, means same measure, same distance. Isometries are those maps which preserve distance. So we'll be looking for permutations, call them sigma, from R^n to itself such that d(sigma(x), sigma(y)) = d(x, y). We want the distance between two vectors to be preserved. Now, with regard to the orthogonal group we had earlier: the orthogonal group has essentially two types of matrices inside it. You have rotation matrices, which correspond to those orthogonal matrices with determinant positive one, and you have reflection matrices, which are those orthogonal matrices whose determinant is equal to negative one. Now, in this setting of the metric space, where we don't care about the algebra, we only care about the geometry, we of course get a lot more variety in what we can get. Just think about pictures in the plane, for example. If we rotate around the origin, that would be an example of an orthogonal transformation. We could also reflect across a line that goes through the origin; those types of reflections would also be orthogonal. But we have others that we could put into play here. For example, we could rotate around any point in the plane; it doesn't have to be the origin. We could reflect across any line in the plane, and that reflection would be acceptable as well. Now, I'm just drawing pictures in R^2, but imagine the higher-dimensional analogs of these kinds of things. Another possibility you get is the idea of translation. What if you have some point and you translate it by some fixed vector, call it v, and then you translate every point by that same fixed vector? So we get these translations.
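Here is a tiny sketch of the key example: a translation preserves every distance d(x, y), yet it moves the origin (the helper names `dist` and `translate` and the sample vectors are my own choices):

```python
import math

def dist(x, y):
    """Euclidean distance in R^2."""
    return math.hypot(x[0] - y[0], x[1] - y[1])

v = (2.0, -3.0)                     # a fixed translation vector

def translate(p):
    return (p[0] + v[0], p[1] + v[1])

x, y = (1.0, 1.0), (4.0, 5.0)
print(abs(dist(translate(x), translate(y)) - dist(x, y)) < 1e-12)  # True: an isometry
print(translate((0.0, 0.0)) == (0.0, 0.0))                         # False: the origin moves
```

That second line is exactly what will matter next: a map that moves the origin cannot be linear, so translations live outside the orthogonal group.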
Now, unlike rotations and reflections, this translation cannot be an orthogonal matrix. Orthogonal matrices, which give us linear transformations, are additive group homomorphisms, and the identity of a homomorphism has to go to the identity. But a translation, as long as you're not translating by the zero vector itself, will move the zero vector. So translations can never be represented by an orthogonal matrix. And reflections and rotations are only going to be orthogonal matrices if they're around the origin or across a line through the origin. But I want to mention here that, it turns out, orthogonal matrices and translations are all we need to talk about the isometries of Euclidean space. So think about the following idea. What if we have some point that we want to rotate around? What we could do is first translate that point to the origin, okay? Then, while we're at the origin, we rotate around the origin, and then we translate back, right? So we have a minus v, where v is the location of this point, and then we add v back afterwards. And this is our proverbial socks-and-sandals predicament from earlier: if you ever commit the fashion faux pas of discovering that you have socks and sandals on at the same time, you can take off your sandals, then take off your socks, and then put your sandals back on, and now you've fixed the problem, right? This is what we described earlier as conjugation. Conjugation is exactly what we're talking about here. So, for example, a rotation around the point v is just going to look like this: translate by negative v, then rotate around the origin, then translate back by v, where these two translations are inverses of each other.
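The translate, rotate, translate-back recipe can be sketched in a few lines of Python; a quick sanity check is that the chosen center is a fixed point of the resulting rotation (the function names and sample values are illustrative):

```python
import math

def rotate_origin(p, t):
    """Rotate p around the origin by angle t (an orthogonal transformation)."""
    c, s = math.cos(t), math.sin(t)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def rotate_about(p, center, t):
    """Conjugation: translate center to origin, rotate, translate back."""
    q = (p[0] - center[0], p[1] - center[1])   # translate by -center
    q = rotate_origin(q, t)                    # rotate around the origin
    return (q[0] + center[0], q[1] + center[1])  # translate back by +center

v = (2.0, 1.0)
moved = rotate_about(v, v, 0.9)
print(all(abs(a - b) < 1e-12 for a, b in zip(moved, v)))  # True: the center stays fixed
```

Any other point is carried around the center at a constant distance, which is exactly what "rotation about v" should mean.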
So this is: translate by v, composed with rotation around the origin, composed with the inverse of translation by v. Again, that's what conjugation is all about. And so what I claim is that using reflections and rotations, which, when we do them with respect to the origin, are orthogonal matrices, and then throwing in translations, we capture every type of isometry. To describe the symmetry group, that is, the isometries, the distance-preserving maps that don't necessarily preserve the algebra, it turns out the symmetry group of R^n is going to look like the following. We form a set, which we call E(n), which as a set is R^n x O(n). And the way we want to think about it is this: O(n) captures rotations around the origin and reflections across a line through the origin. So in some respect, the orthogonal group captures the important rotations and reflections one can perform. R^n we can identify with the translation subgroup, because, like we were doing earlier, we can identify each translation with a specific vector in space. So this R^n we want to think of as the translations. And as we were saying earlier, every rotation and reflection can be described as a conjugate: every rotation and reflection in the plane, for example, is just a conjugate of an orthogonal matrix by a translation, and in higher dimensions the analogs of this hold true as well. So you want to think of this R^n as the translations, and this O(n) as the reflections and rotations with respect to the origin. As a set, this is just the Cartesian product R^n x O(n), but for the multiplication we're going to come up with a different rule. So say we have two vectors u and v inside R^n, which we think of as translation vectors, and two matrices A and B inside the orthogonal group.
Then we define our operation in the following way. The circle operation will be a binary operation from E(n) x E(n) to E(n), with the rule (u, A) o (v, B) = (u + Av, AB). And this is going to be an element of E(n); let's convince ourselves of that. Well, since A and B are orthogonal matrices, their product is orthogonal, so the second factor works. How about the first one? We need to get a vector in R^n, and if you take a vector v and multiply it by a matrix, you get a vector in R^n, and if you add two vectors in R^n, you get a vector in R^n. So this does in fact check out: it's a binary operation. But does this form a group structure? If it's a symmetry group, it should. One could argue that it's associative by arguing that this is in fact composition of symmetries, but there's a little bit more detail one would have to go into. Now, for the easiest way to argue that this is in fact a group, associativity is the hard part, right? The identity is pretty easy: you can check that the zero vector paired with the identity matrix is an element of E(n) and is the identity, and from there you can derive what an inverse looks like. Associativity is kind of hard, but the main idea is that one introduces what's called a semi-direct product. That's actually what this thing is, a semi-direct product. So we might denote E(n) as the semi-direct product of R^n and O(n); the symbol looks like a cross, but with a line on one side. The difference here is that, unlike a direct product, which is actually just a special case of a semi-direct product, a semi-direct product is a direct product with a twist, right? You'll notice that in the second factor you just multiply the two entries together component-wise. If this were a direct product, in the first factor we would just take u + v, but instead we add this twist to it.
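Here is a sketch of this multiplication rule in Python, checking that the product (u + Av, AB) really does match composing the two maps x goes to u + Ax and x goes to v + Bx (all helper names and sample values are my own, and I only test E(2) with rotations):

```python
import math

def mat_vec(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def apply(g, x):
    """An element (u, A) of E(2) acts on x as u + A x."""
    u, A = g
    Ax = mat_vec(A, x)
    return (u[0] + Ax[0], u[1] + Ax[1])

def compose(g, h):
    """The semi-direct-product rule: (u, A) o (v, B) = (u + A v, A B)."""
    (u, A), (v, B) = g, h
    Av = mat_vec(A, v)
    return ((u[0] + Av[0], u[1] + Av[1]), mat_mul(A, B))

R = lambda t: ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))
g = ((1.0, 2.0), R(0.4))      # a translation paired with a rotation
h = ((-3.0, 0.5), R(1.1))

x = (0.3, -0.7)
lhs = apply(compose(g, h), x)     # act by the product
rhs = apply(g, apply(h, x))       # act by h first, then g
print(all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs)))  # True
```

The twist u + Av is forced on us: applying g after h gives u + A(v + Bx) = (u + Av) + (AB)x, which is exactly the rule above. That is also the cleanest way to see associativity, since function composition is always associative.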
So instead of taking u + v, you can think of it as u + Iv, but instead of the identity matrix we apply some other matrix, A, first. You still add the vectors together, but there's a twist applied first, and hence we have a semi-direct product. It's a product between groups that adds this sort of non-commutative flair: even though the sum of two vectors is abelian, we've added this non-abelian component, this twist, this action that comes into play through the matrix. And as you study more about semi-direct products, you'll see that this is a very standard group construction that one gets quite used to. So let me give you a special case of this. This right here is what's referred to as the Euclidean group, which is why we called it E(n): it's the symmetry group of the Euclidean geometry on R^n. In the special case of, say, R^2, the Euclidean plane, E(2) would of course be the semi-direct product of R^2 and O(2), and it turns out that every isometry in E(2) can be classified in the following way. You get rotations: every rotation in the plane can be factored as a conjugate of a rotation around the origin by the translation associated to the center of the rotation. Another type of isometry you get is reflections: you can reflect across any line in the plane, and if that line doesn't go through the origin, you can factor the reflection as a conjugate of a reflection across a line through the origin by some translation. Like I was trying to say earlier, if you have some reflection line, you can translate it to the origin, reflect across that line through the origin, and then translate back. So every reflection can be factored as a conjugate of an orthogonal reflection with a translation. And translation itself is also in play here.
And it turns out you can show that the only other type of isometry in the plane is what's called a glide reflection. A glide reflection is actually just a combination of a reflection and a translation. You want to think of it in the following way: think of walking along the beach and looking at your footprints. If you take an asymmetrical object, like the letter J, and translate it and then reflect it, you get a mirrored copy shifted along the line. And so that's what the glide reflection does. It's this combination where you translate and reflect, and the axis of translation lies along the axis of reflection: you slide along the mirror line while flipping across it. If you repeat this process, you get what looks like footprints along the sand. And it turns out that in E(2), every isometry is one of these four types: rotation, reflection, translation, or glide reflection, where from a group theory point of view a glide reflection is, of course, just a combination of the others. So basically, from a group point of view, we're saying that every isometry of the plane is generated by translations, rotations, and reflections, which becomes obvious from this classification. And that statement is also true as we go into higher dimensions. If you move to, for example, E(3), every isometry is generated by rotations, reflections, and translations, but you can get some more exotic things. In addition to these, you get things like screw translations: you translate along a line while rotating around that line. You get some fun stuff in higher dimensions; we're not going to delve too much into that right now. The Euclidean group is a pretty fun group when you start looking at it from this geometric perspective, and sometimes it's considered one of those classical groups we introduced previously.
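The footprints picture can be sketched directly: reflect across the x-axis while stepping along it, and iterating the map traces out an alternating track (the function name `glide` and the step size are illustrative choices):

```python
def glide(p, step=1.0):
    """Glide reflection along the x-axis: step forward, flip across the axis."""
    return (p[0] + step, -p[1])

p = (0.0, 0.25)          # a "footprint" just above the axis
track = [p]
for _ in range(4):
    track.append(glide(track[-1]))
print(track)
# x advances while y alternates +0.25 / -0.25: footprints along the sand.
# Note that doing the glide twice is a pure translation by two steps,
# which is one way to see it is built from the other isometry types.
```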
It's not a matrix group proper, because it's a semi-direct product of this translation group, which is abelian, with this matrix group. But some people do include it in the list of the classical linear groups. And so I want to present maybe one more example in this video. What if we think of R^n just as a topological space? So we don't even worry about preserving distance; we just want continuous permutations. That is, if things were close together beforehand, then after you apply the permutation of R^n, things are still close together. Distances could be distorted, shapes could be distorted; we just want things to stay close: if you start off delta-close, you're guaranteed to be epsilon-close after the fact. So what about just continuous permutations? A topological space is sort of a weaker geometry. It's sort of the weakest form of geometry for which calculus is still possible, because the notion of a limit still exists. If we want continuous permutations that also respect the linear shape of the space, carrying straight lines to straight lines, we get what's called the affine group. (Strictly speaking, the group of all continuous permutations, the homeomorphisms of R^n, is much larger; the affine maps are the continuous ones that also preserve lines and parallelism.) The affine group is formed in the following way, and it's very similar to the Euclidean group. We'll call it Aff(n), and it's going to be R^n semi-direct product with the general linear group, GL(n, R), in which case multiplication looks like it did a moment ago with the Euclidean group: (u, A) o (v, B) = (u + Av, AB), the twisted sum in the first factor and A times B in the second. And I want to point out that the Euclidean group can be viewed as a natural subgroup of the affine group, where we're just removing the assumption that the matrix is orthogonal. So distance preservation isn't necessary anymore. So if you think about these things, if you take one of these affine maps, what that means is something like the following.
If you apply (u, A) to a vector x, what this is going to give you is u + Ax. That's to say that x transforms into u + Ax: you do some type of linear transformation to the vector, which fixes the origin, and then you translate by u. And that describes what we call an affine transformation, an affine map, for which isometries are special types of affine maps: isometries are those affine maps that preserve distance, so they preserve the metric structure. The affine group, also a semi-direct product, just preserves the weaker structure: continuity, lines, parallelism. So let's see: in addition to the isometries we had before, reflections, translations, rotations, we also get things like stretches, compressions, dilations. Things can get distorted: you can stretch things out, you can compress things down. That's a possibility now, because that only changes distance. You also get something like a shearing map. That's kind of an interesting one: imagine your shape is like a stack of cards in a deck. If you push the stack sideways, you get some type of shearing, so things lean a little bit. A shearing map can take a square and change it into a parallelogram. And so that's an example where distance is not necessarily preserved, but the action is still continuous. The nice thing about shearing maps is that they actually still preserve area; sometimes they're called transvections, and a transvection preserves area. So the area of the square is the same as the area of the parallelogram. Of course, if you're stretching or compressing things, a dilation will not preserve area, so that property is lost. Parallelism, though, is still preserved when you do all of these things.
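A quick Python sketch of the shear example (the matrix entries and helper names are illustrative): since a linear map scales areas by the absolute value of its determinant, a shear with determinant 1 preserves area even while it distorts lengths.

```python
import math

def mat_vec(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

S = ((1.0, 1.5),
     (0.0, 1.0))      # a shear (transvection): the top of the square slides sideways

# det = 1, so areas are preserved under S ...
print(det2(S))                      # 1.0
# ... but distances are not: the unit vector (0, 1) gets stretched
e2 = mat_vec(S, (0.0, 1.0))
print(math.hypot(e2[0], e2[1]))     # about 1.803, no longer length 1
```

So S is continuous, area-preserving, and parallelism-preserving, but it is not an isometry: exactly the kind of map the affine group admits and the Euclidean group excludes.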
Things that were parallel before will still be parallel after the fact. And so the affine group preserves all of this weaker structure of R^n. So the last thing I want to ask here: is the special linear group a symmetry group? We went through the algebraic, the geometric, and the combination for R^n. How do you view SL(n, R) as a symmetry group? Can it be done? I mean, if it's one of the classical groups, it should be preserving some type of symmetry. What do we know about the special linear group? We're talking about all those matrices whose determinant is equal to one, right? So these are going to be linear maps with determinant equal to one. What's the geometric interpretation of that? Well, it can be shown that if your determinant is equal to one, that actually implies you have an area-preserving linear transformation (volume-preserving, in higher dimensions). So, like we talked about earlier with metric spaces, where we got the Euclidean group as the maps that preserve the distance function, what if we want to preserve the area function instead? We'd need the area of some shape, call it S, to equal the area of sigma(S), where sigma is the permutation in play. And so all of these classical linear groups are in fact symmetry groups of R^n when we take the right perspective. Are we trying to preserve algebra? Are we trying to preserve geometry? Are we trying to preserve both? And which parts of the algebra? Do we want to preserve just the vector addition and scalar multiplication, or do we want to preserve the dot product as well? Do we want to preserve the norm? Which parts of the geometry do you want to preserve? Do you want to preserve the distance between points, or just the area?
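Here is a minimal check of that geometric interpretation (all names are my own): a matrix with determinant 1 sends the unit square to a parallelogram of the same area, which we can measure with the 2D cross product.

```python
def mat_vec(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def area(u, v):
    """Area of the parallelogram spanned by u and v (2D cross product)."""
    return abs(u[0] * v[1] - u[1] * v[0])

A = ((2.0, 1.0),
     (3.0, 2.0))       # det = 2*2 - 1*3 = 1, so A is in SL(2, R)

u, v = (1.0, 0.0), (0.0, 1.0)              # the unit square, area 1
print(area(mat_vec(A, u), mat_vec(A, v)))  # 1.0: area is preserved
```

The image parallelogram looks nothing like the square, so distances and angles are gone, but its area is exactly 1: that is the symmetry SL(2, R) preserves.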
Or do you want to preserve maybe just something weak, like continuity, where we just want a weak notion that things that were close are still going to be close? And we can build all of these symmetry types if we want to. For example, translation preserves area, so we could take R^n semi-direct product with the special linear group, SL(n, R): that group would preserve area, with the SL(n, R) part being those linear permutations that preserve area and preserve the vector structure of addition and such. So yes, symmetry depends on perspective: what structure are we trying to preserve? And as we vary that, we get these different types of symmetry groups. So I hope that these examples on R^n can give you an idea of what symmetry is all about. We're looking at groups that preserve the structure of a set, and the types of groups we care about depend on the type of structure on the set.