Welcome back to our lecture series Math 4230, Abstract Algebra 2, for students at Southern Utah University. As usual, I'll be your professor today, Dr. Angel Misseldine. Now, if you've been following our lecture series prior to Lecture 22, you'll notice that we've spent the last several lectures discussing the theory of rings. In particular, our focus was on factorization of elements in a domain, and we were especially interested in factorizations in polynomial rings, a conversation we finished in the previous lectures, including Lecture 21. Now, moving forward to the study of fields, some important notions from linear algebra need to be reviewed. For students at Southern Utah University, linear algebra is covered in Math 2270, and you can take a look at my flagship channel for that, Linear Algebra Done Openly; there are a lot of good videos there. We don't need to go through all of that again. In fact, in many respects we need to go deeper into theory that often doesn't get covered in Math 2270, a standard first-semester class in linear algebra. So over the next few lectures we're going to review some important topics from linear algebra. But in some respects, it'll be just as easy to talk about linear algebra in the broader category of modules. In this lecture series we follow Tom Judson's textbook, Abstract Algebra: Theory and Applications, and we are now in Chapter 20, on linear algebra. Now, Judson's textbook does not delve into the topic of modules, sometimes called R-modules, where R represents a ring in that setting. But we are willing to go a little bit deeper than what the textbook covers, and that's perfectly fine. The thing is, vector spaces are special types of R-modules: vector spaces are exactly those R-modules for which your scalar ring R is a field.
And there are some important characteristics vector spaces inherit because R is a field. So vector spaces form a much more restrictive category than general R-modules. While we aren't going to study the general theory of R-modules, there are a few things I do want to point out that make sense in that general setting and that'll do us some favors in the long run. So if we're going to start talking about modules, we first have to make sure we understand what it means to have a ring action. In this lecture series, we previously introduced the idea of a group action, and I alluded to the fact that any algebraic structure could potentially act on another algebraic structure, so long as there are appropriate compatibility conditions between the two. We did the case where a group acts upon a set, where a set essentially doesn't have any algebraic structure of its own; although the fact that a group can act on the set does mean that a G-set inherits some algebraic structure from the group that acts upon it. The setting we're going to talk about with a so-called ring action is, in fact, a ring acting upon an abelian group. So that's the framework we're going to build upon. Let V be an abelian group, and if you're wondering why V, it's because I'm thinking of a vector space from linear algebra. And let R be a ring. A ring action, sometimes just called scalar multiplication, where the elements of the ring are called the scalars and the elements of the abelian group are called the vectors, is an action of R on V. That means we have a function whose domain is R × V and whose codomain is V. This would be an example of a left action, a left ring action; we could also have a right ring action. Now, if you have a commutative ring, it doesn't make any difference whether you have a left action or a right action.
In that situation you can take the same action on both sides. But when your rings are non-commutative, you sometimes do have to distinguish between a left action and a right action. That's not a distinction we're going to have to worry about in this lecture series, so we'll just keep with our left convention and take this to be a left ring action. That means you take a pair: an element of the ring, which we call a scalar, and an element of the abelian group, which we call a vector. Now, a notational convention we're going to use here is that vectors, the elements of the abelian group being acted upon by the ring, are typically written in boldface font. That can sometimes be difficult to write with a pen, so you might just see me draw something like a v with an arrow on top of it; that represents a vector belonging to the abelian group. On the other hand, the scalars, which belong to the ring, we'll just write in regular font. So given a pair, a scalar and a vector, we then get a vector, which we'll denote r · v. That is to say, r is being multiplied onto the vector v there. Sometimes we'll even drop the dot entirely and, by juxtaposition, write this as rv, much like one does in linear algebra, where your scalars might come from a field like the rational numbers, the real numbers, or the complex numbers, and your vectors are typically lists or arrays of numbers. We write it as multiplication because we think of it as a multiplication, scalar multiplication. Now, it's not a multiplication of vectors. We can add vectors together because V is an abelian group, and because it's an abelian group, we're going to write its operation as addition.
So we get (V, +) in that situation. Scalar multiplication gives us something different: we can multiply together a scalar and a vector, but we don't necessarily have multiplication of vector times vector in this setting. We do, of course, have multiplication of scalars together, because our scalars come from a ring. So that's the framework for a ring action: a ring acting on an abelian group. But like I said, there has to be some compatibility. The algebra of the scalars has to interact appropriately with the algebra of the abelian group; the actor has to be compatible with what's being acted upon. So how do the ring axioms and the abelian group axioms interplay? Well, in some regard, the first two axioms look just like they did with group actions. We have an identity axiom: if you take any vector v inside the abelian group V, then 1 · v = v. I should remind the viewer here that in our conversation we are assuming R is a commutative ring with unity. So whenever I say R is a ring, I'm assuming commutativity and unity. That's not how everyone defines a ring, and in fact, earlier in this lecture series we didn't always assume commutativity or unity. But in this context, for the rest of our lecture series, a ring always means a commutative ring with unity. So we have a unity, and the unity acts on a vector to give you back the vector; the action of the unity of the ring is, in fact, the identity action. Group actions did that same thing. They also had this compatibility axiom, sometimes called a homogeneity axiom, and some people even call it associativity because it looks like an associativity axiom. What I mean is the following: take any vector v and any scalars r and s. Then take the double action: s acts on v, and then r acts on sv.
This is the same thing as the product rs acting on v. Like I said, this does resemble an associativity axiom: if you drop the dots, you have r(sv) = (rs)v, so basically we're saying scalar multiplication is an associative operation. But one has to be careful here. In r(sv), both operations are scalar multiplication; but inside (rs), the operation is ring multiplication, the product in the ring. The compatibility axiom is basically saying that scalar multiplication is compatible with the ring multiplication, so there's no reason to distinguish between the two notions. Now, if we just stopped there, we'd essentially have a group acting upon a set; honestly, the group action axioms didn't require anything to do with inverses, so they make sense even for a monoid, which is a semigroup with an identity, that is, an associative operation with identity. A monoid can act on any set, no big deal. So why is it a ring action? What's so important about being a ring? Well, a ring has addition, and the abelian group also has addition. How do those additions interact with each other? That's where the distributive laws come into play. There are really two distributive laws in this situation: a distributive law over scalars and a distributive law over vectors. They look very similar, but it's important to keep track of the differences. In the distributive law with respect to scalars, you take any vector v and any scalars r and s; then (r + s) acting on the vector v is the same thing as r acting on v plus s acting on v. First of all, if you drop the dots, this looks like (r + s)v = rv + sv. So when you write scalar multiplication as actual multiplication, this looks like a distributive law, which is why we call it that.
But the important thing to note here is that these additions mean different things. On the left-hand side, r + s means you're adding together scalars in the ring. On the right-hand side, you're adding together vectors in the abelian group. And these additions could be very alien from each other; they might not represent the same operation whatsoever. But the distributive law ties them together: adding scalars corresponds, under the action, to adding vectors. We also have the distributive law for vectors. This time, if we have two vectors u and v belonging to the abelian group, and a scalar r belonging to the ring, then scaling the sum u + v by r is the same thing as scaling u by r and scaling v by r individually and then adding them together. Now, in this situation both addition signs do mean the same thing, vector addition, but the scalar multiplication distributes over vector addition. So these are the axioms required for a ring action. The ring R is said to act upon the abelian group V, and an abelian group V equipped with an action from the ring R is called an R-module. If the ring in play is understood, these are sometimes just called modules for short, but the R in R-module does matter, because the same abelian group can have different ring actions from different rings: it can be an R-module, an S-module here, a T-module there. So the action depends on the ring itself. And this is analogous to the idea of a G-set we talked about when we discussed group actions: we have a set, but different groups can act upon the same set in slightly different ways, so it could be a G-set and an H-set.
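To summarize the definition we just walked through, the four ring-action axioms can be written compactly as follows (the labels are my own shorthand; texts vary in how they name these):

```latex
\begin{align*}
&\text{(Identity)} && 1 \cdot \mathbf{v} = \mathbf{v} \\
&\text{(Compatibility)} && r \cdot (s \cdot \mathbf{v}) = (rs) \cdot \mathbf{v} \\
&\text{(Distributivity over scalars)} && (r + s) \cdot \mathbf{v} = r \cdot \mathbf{v} + s \cdot \mathbf{v} \\
&\text{(Distributivity over vectors)} && r \cdot (\mathbf{u} + \mathbf{v}) = r \cdot \mathbf{u} + r \cdot \mathbf{v}
\end{align*}
```

Here $r, s \in R$ and $\mathbf{u}, \mathbf{v} \in V$; the $+$ on the left of the third axiom is ring addition, while every other $+$ is vector addition.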
It's sometimes important to emphasize the actor in that situation, so we talk about an R-module. Now, in the very special case that R is a field, we say that V is a vector space. So a vector space is a module over a field. And like I said, the elements of the vector space are called vectors, and the elements of the ring acting on V are called scalars. Now, in general module theory, some people don't use these terms; they reserve the words vectors and scalars for linear algebra, when we do vector spaces. In our lecture series, I will be much broader: I will always call the elements of the abelian group vectors and the elements of the ring scalars, and I won't bother to call them something different when we're in an R-module. So the study of R-modules is a generalization of the study of vector spaces, a.k.a. linear algebra. Many concepts from linear algebra transfer to R-module theory, but in some aspects the theory broadens and diversifies. We're not going to study R-modules in their broadest setting, because our goal really is vector spaces, linear algebra. But when the theory is similar, I do want to bring up the similarities and talk about R-modules, and when things diverge from each other, I do want to point some attention to that just so you're aware. Because V is an abelian group, it contains a zero element. That zero element is typically called the zero vector, written as a boldface zero, or you might see zero with an arrow over it; it's the additive identity of the group V. Likewise, R is a ring and has a zero element, so there's also a zero scalar, which we'll typically just write as zero with no change of font whatsoever. Let's talk about a few types of R-modules that we've actually seen before, believe it or not.
In this example, let R be any ring; we're assuming R is a commutative ring with unity, but admittedly this example would work for any ring whatsoever. And let n be your favorite natural number, which includes the possibility that n equals zero. Then we can form the set Rⁿ, which is going to be an R-module. Rⁿ is just the Cartesian product R × R × ⋯ × R, taken n times, and therefore a typical element of Rⁿ is an ordered n-tuple, which some people draw as a vertical array: v₁, v₂, up to vₙ. So you can think of it as a column vector versus a row vector; it doesn't really make much of a difference in that situation, but we can talk about that very thing. Now, R itself is an abelian group under addition, and if you take the Cartesian product of an abelian group n times, you still get an abelian group. So that's the additive structure V we're considering right now. What's scalar multiplication going to be in this setting? Well, to multiply a vector by a scalar r, you multiply each of the coordinates by r. After all, since v₁, v₂, all the way to vₙ belong to the ring, we can multiply them by r. And I will leave it up to the viewer here to verify that this choice of scalar multiplication satisfies the four axioms of an R-module. Of course, if R is a field F in this situation, this gives us a vector space, and this is exactly how we define scalar multiplication in the vector space Fⁿ. In that setting, you probably think of ℝⁿ, with real-number coordinates, because in Math 2270 you typically focus on real vector spaces.
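As a quick sanity check, here is a sketch of the verification left to the viewer, done in Python for the concrete case R = ℤ acting on ℤ³ (the function names `vec_add` and `scale` are my own, chosen for illustration):

```python
# Model the Z-module Z^n: vectors are tuples of ints, scalars are ints.
def vec_add(u, v):
    # Coordinatewise addition in the abelian group Z^n.
    return tuple(a + b for a, b in zip(u, v))

def scale(r, v):
    # Coordinatewise scalar multiplication, as described in the lecture.
    return tuple(r * a for a in v)

u, v = (1, -2, 3), (4, 0, -1)
r, s = 5, -3

# The four R-module axioms, checked on sample data:
assert scale(1, v) == v                                              # identity
assert scale(r, scale(s, v)) == scale(r * s, v)                      # compatibility
assert scale(r + s, v) == vec_add(scale(r, v), scale(s, v))          # dist. over scalars
assert scale(r, vec_add(u, v)) == vec_add(scale(r, u), scale(r, v))  # dist. over vectors
```

Of course, passing on sample data is not a proof; the actual proof just unwinds each axiom coordinatewise using the ring axioms of R.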
Before we go any farther, I should mention that when we defined an R-module, and therefore a vector space, I only listed four axioms. But in Math 2270, people typically define vector spaces using eight axioms, or some people use ten. What happened to the other four? Well, the first four axioms of a vector space have to do only with addition: addition is associative, addition is commutative, addition has an identity (a.k.a. the zero vector), and addition has inverses (a.k.a. negative vectors). Those four axioms, taken by themselves, are exactly the axioms of an abelian group. So that was built into the recipe; it was baked into the cake. The fact that V was an abelian group is where we get the axioms of vector addition. Then the other four axioms about the ring action, identity, compatibility, and the two distributive laws, join them and give you the eight axioms of a vector space. But like I said, some texts take ten axioms, where two of the axioms are closure axioms: they say that addition is a well-defined binary operation and that scalar multiplication is a well-defined function. That is, the sum of two vectors is a vector, and a scalar times a vector is a vector. The definition of binary operations and actions already has that built into it. When students first learn about vector spaces in a class like Math 2270, they might not have formal training on sets and functions yet, so those two axioms are included to be explicit that, yes, we have well-defined operations. But once a student understands function theory, at SUU that's Math 3120, Transitions to Advanced Mathematics, which elsewhere might be called Introduction to Proofs or something similar, learning about sets and functions removes the need for those two axioms.
So we'll consider vector spaces as having eight axioms: four axioms from the abelian group and four axioms from the ring action. All right, let's bring ourselves back to where we are on the page. I claimed that n could be any natural number, which includes the number zero itself. What does R⁰ mean? It just means you have a single element: in particular, the zero vector all by itself. This is the trivial abelian group, but it does have a module structure, where multiplying anything by r just gives you back the zero vector. So yes, we do allow R⁰ to be such a thing, and when it comes to linear algebra, if R were a field, this would be your zero-dimensional vector space. Fⁿ would then be the n-dimensional vector space, up to isomorphism. Another important example to consider is R¹, which is just the ring itself. Every ring is actually an R-module where it acts upon itself, much in the same way that groups act upon themselves by the regular action, left multiplication. A ring can do the same thing: a ring acts upon itself by multiplication, and this forms a ring action. And when R is a field, this forms a one-dimensional vector space. Another example I want to consider is the set of m-by-n matrices whose coefficients come from R. So you might have something like a, b, c, d, e, f: an example of a two-by-three matrix. Well, we can add matrices together, we can subtract matrices, and we can also scale a matrix by some number r, which multiplies each entry by r. Now, if you only look at scalar multiplication and matrix addition, then as an R-module this is just the same thing as R^(mn): you could just stack the matrix into a column vector a, b, c, d, e, f.
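To illustrate the claim that the m-by-n matrices, viewed only as an R-module, are just R^(mn), here is a small sketch over ℤ with the 2-by-3 case from the lecture (the helper names `mat_add`, `mat_scale`, and `flatten` are mine, not standard notation):

```python
# A 2x3 matrix over Z as nested tuples; as a Z-module it "is" just Z^6.
def mat_add(A, B):
    # Entrywise matrix addition.
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def mat_scale(r, A):
    # Scale every entry by r.
    return tuple(tuple(r * a for a in row) for row in A)

def flatten(A):
    # Stack the entries into one 6-tuple, i.e. a column vector in Z^6.
    return tuple(a for row in A for a in row)

A = ((1, 2, 3), (4, 5, 6))
B = ((0, 1, 0), (2, 0, 2))

# Flattening respects both operations, so the two module structures agree.
assert flatten(mat_add(A, B)) == tuple(a + b for a, b in zip(flatten(A), flatten(B)))
assert flatten(mat_scale(3, A)) == tuple(3 * a for a in flatten(A))
```

In module-theoretic language, `flatten` is an isomorphism of ℤ-modules; it only forgets the row/column layout, which is exactly the structure that matrix multiplication needs and module structure ignores.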
The reason we sometimes like to think of these as matrices is because matrices can be multiplied together: there's matrix multiplication, particularly for square matrices, since any n-by-n matrix can be multiplied by another n-by-n matrix. So in addition to the R-module structure, you can make this module of square matrices into a ring itself, but that takes us beyond the scope of our conversation here. Polynomial rings can also naturally be given an R-module structure; this is actually what I was alluding to with the matrix ring a moment ago. With matrices, you can't always multiply two random matrices together, but you can with polynomials: if you take two random polynomials and multiply them together, the result is still a polynomial belonging to R[x]. In particular, if you forget the multiplication structure, R[x] is an abelian group: we can add polynomials together, and we can scale polynomials by multiplying by a constant. The scalars in this case are the constant polynomials. You can distribute, and so you end up with the following: r times a polynomial gives you r times the first coefficient, r times the second coefficient, and so on through r times every coefficient. So every polynomial ring is naturally an R-module. In particular, if your coefficients come from a field, that makes F[x] a vector space, and therefore linear-algebraic tools apply. Now, admittedly, this would be an infinite-dimensional vector space, but nonetheless linear algebra still applies in that situation. And this is a vector space we have studied previously in this lecture series: this is the setting where we have a Euclidean domain, where we have a division algorithm for the ring F[x].
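The coefficientwise description of scaling a polynomial can be sketched concretely; here polynomials over ℤ are stored as coefficient lists, constant term first (a common convention, though the lecture itself doesn't fix one):

```python
# A polynomial in R[x] stored as a coefficient list (constant term first).
def poly_scale(r, p):
    # r * (a0 + a1*x + ... + an*x^n) = (r*a0) + (r*a1)*x + ... + (r*an)*x^n
    return [r * a for a in p]

def poly_add(p, q):
    # Pad to a common length, then add coefficientwise.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [1, 0, 2]  # the polynomial 1 + 2x^2
assert poly_scale(3, p) == [3, 0, 6]                                # 3 + 6x^2
assert poly_add(poly_scale(2, p), poly_scale(1, p)) == poly_scale(3, p)  # (2+1)p = 2p + p
```

Note that scaling by the constant r is just a special case of polynomial multiplication, which is the "forgetting structure" point: the module action on R[x] is ring multiplication restricted to constant polynomials.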
We'll be very interested in this setting, which is why I'm bringing it up right here. But really, as we discussed with polynomials before, you can think of R[x] as a ring extension of R itself, because the constant polynomials together form an isomorphic copy of the ring R. So what we're really doing here is taking a subring of a ring, and the bigger ring forms an R-module. You can do this in greater generality. Suppose you have a ring S such that R, also a ring, is a subring of S. Then S can be made into an R-module, because S, if you forget its multiplication, is an abelian group under addition, and since S is a ring, any two elements of S multiply together to give you something in S. In particular, if you take something from R and something from S, their product is inside S. So if you take any ring and a subring of it, the extension ring forms an R-module, where scalar multiplication is just the ring multiplication with one of the factors required to come from R. You're forgetting some of the structure: every ring can become a module over some subring by restricting the possibilities in this way. So, for example, the field of rational numbers is actually a ℤ-module. In fact, every abelian group can be viewed as a ℤ-module; honestly, the two categories are one and the same: the category of abelian groups is the same as the category of ℤ-modules. But in particular, since ℤ is a subring of ℚ, we can very naturally view ℚ as a ℤ-module. Same thing with the complex numbers: the complex numbers contain the real numbers, and so the complex numbers can be viewed as a real vector space. And this is something you sometimes do in linear algebra: you view the complex numbers as a two-dimensional real vector space. There are algebraic benefits of doing that, and there are also sometimes geometric benefits.
We talked about the complex plane, that is, thinking in terms of the real axis and the imaginary axis. In that situation, we're really viewing ℂ as a two-dimensional ℝ-vector space, because ℝ is a field, so its modules are vector spaces. I also want to point out, and this right here is where the money's at, the field ℚ(√2), ℚ adjoined the square root of 2. We've looked at this example before in the lecture notes and in the homework. ℚ(√2) is a vector space over ℚ because it contains the field ℚ: ℚ is a field, and any ring that contains a subfield will be a vector space over it. So ℚ adjoined the square root of (positive) 2 is a rational vector space, and in fact it's a two-dimensional rational vector space, and that says something about it. We want to start integrating linear-algebraic theory into our study of fields, and this is exactly how we're going to do it: if you have a field which is then extended into a larger field, the new field is going to be a vector space over the smaller field. There's a dimension in play here, and we get some other properties as well. But let's continue with our examples. Take any set X; it doesn't have to have any algebraic structure whatsoever. If we take the set of functions from X into R, where R is a ring, we commonly denote this R^X, where X is the domain. This R^X is an R-module. Notice the notation: we had Rⁿ before, and we're mimicking that, but instead of a natural number n we have a general set X, so you can think of this as a potentially infinite-dimensional analogue. So how do we add things together? Well, f + g should be a function, and if I know what it does to a random input, then I know what it does for everything.
So f + g evaluated at x is defined to be f(x) + g(x). Because R is a ring, we can add ring elements, and f(x) and g(x) are ring elements, so we can add them together. So we have an addition of functions, and this rule can be used to show we have an abelian group structure, because the addition on R itself makes R an abelian group. We also have ring multiplication in play, because R is a ring. So if we take the function r · f, what does that do? Take an arbitrary input x: the function f scaled by the number r, evaluated at x, is just r times f(x). Since f(x) belongs to the ring, and so does r, we can multiply these things together. (Sorry about the typo I had earlier; I just had to fix that on the fly. I hope that's okay.) So we just use the usual function addition and the usual function scaling to form an R-module structure on R^X. We could even multiply functions together pointwise and potentially get a ring structure, but all I care about right now is the R-module structure. In particular, when R is a field, R^X is a vector space. So we could talk about, for example, the set of real-valued functions from a set X into ℝ; that makes an infinite-dimensional vector space. One other important example I should mention as we enumerate these examples: let R be a ring; in fact, R could just be an abelian group here, it doesn't make much of a difference. Take the set of endomorphisms of R. An endomorphism is a homomorphism from R back into itself. If you're viewing R as an abelian group, you take all the abelian group endomorphisms; if you're viewing R as a ring, you won't have as many ring endomorphisms, but whichever category, it doesn't matter.
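The pointwise rules for R^X can be sketched with a small finite set X and R = ℚ; functions are stored as dicts, and the names `f_add` and `f_scale` are mine (assumed for illustration, not from the lecture):

```python
from fractions import Fraction

# The Q-module Q^X for a small finite set X, with functions stored as dicts.
X = {"a", "b", "c"}

def f_add(f, g):
    # (f + g)(x) = f(x) + g(x), pointwise addition in the ring Q.
    return {x: f[x] + g[x] for x in X}

def f_scale(r, f):
    # (r . f)(x) = r * f(x), pointwise scaling by the ring element r.
    return {x: r * f[x] for x in X}

f = {"a": Fraction(1, 2), "b": Fraction(3), "c": Fraction(-1)}
g = {"a": Fraction(1), "b": Fraction(0), "c": Fraction(2, 3)}

assert f_add(f, g)["a"] == Fraction(3, 2)       # (f+g)(a) = 1/2 + 1
assert f_scale(Fraction(2), f)["c"] == Fraction(-2)  # (2.f)(c) = 2 * (-1)
```

Nothing here uses any structure on X itself, which is the point: X is a bare set, and all the algebra happens in the codomain R.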
Take the set of all endomorphisms, that is, homomorphisms from R back into itself, and call it S. Then R, the ring, is actually an S-module. First, we can argue that S is a ring. Why is it a ring? Well, you can add functions together using the same rule we had up above, but instead of pointwise multiplication, we multiply functions using composition. So if you have φ and ψ inside your set of endomorphisms of R, you can define their product to be the composite, which is again an endomorphism. So S itself is a ring; I want you to be aware of that. We can then make R into an S-module, where the additive structure of R is just the abelian group already present, and the action of the ring is evaluation: if you have some element v belonging to R and some s belonging to S, then s acts on v by evaluation, s(v). All right. And to connect this to linear algebra: like I said, R doesn't even have to be a ring; it could just be a vector space, which has addition and scalar multiplication, but we only need the abelian group structure. Then the endomorphism ring is just going to look like square matrices; they could be singular or non-singular, it doesn't matter. In that case, when you multiply a vector by a matrix, you just get the evaluation of the linear transformation. So while we're talking in slightly more general settings, all of these concepts are readily available and ever-present in the topic of linear algebra, even though we've broadened our scope to R-modules. All of these concepts are directly related to linear algebra we may or may not have seen before. So I want to conclude this video, as we introduce R-modules and vector spaces, by providing some properties of R-modules.
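Here is a sketch of the endomorphism example in the linear algebra case: V = ℤ², endomorphisms represented as 2-by-2 integer matrices, with multiplication given by composition and the module action given by evaluation (again, the helper names are mine):

```python
# V = Z^2; its endomorphisms are represented by 2x2 integer matrices.
# Multiplication in the endomorphism ring is composition; the action is evaluation.
def compose(A, B):
    # Matrix product A @ B, i.e. the composite "apply B, then A".
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def act(A, v):
    # The endomorphism A acts on the vector v by evaluation: A(v).
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

A = ((1, 1), (0, 1))
B = ((2, 0), (0, 3))
v = (1, 2)

# Compatibility axiom: (A composed with B) acting on v equals A acting on (B acting on v).
assert act(compose(A, B), v) == act(A, act(B, v))
```

This is exactly why composition, rather than pointwise multiplication, is the right product on the endomorphism ring: the compatibility axiom for the evaluation action is the definition of composition.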
These are properties identical to what you might see for a vector space, and in fact I'm leaving their proofs to the viewer. Suppose you have an R-module V, where R is a ring acting upon it; let r and s be any scalars in the ring, and let u and v be any vectors in the R-module. If you take a vector and scale it by the zero element of the ring, you always get the zero vector: 0 · v = 0. If you take the zero vector and scale it by any element of the ring, you get back the zero vector: r · 0 = 0. Property (d) tells us that if you scale a vector by negative one, the additive inverse of unity, you get −v. And likewise, if you throw in some negative signs: the additive inverse of a scalar multiple rv is the same thing as scaling v by the additive inverse of r, which is also the same thing as scaling the additive inverse of v by r, that is, −(rv) = (−r)v = r(−v). Now, you'll notice I skipped the middle property, which says that rv = 0 implies r = 0 or v = 0. This one is actually only true, or at least we will only assume it to be true, for vector spaces. There are other settings where it's applicable, but this property is not true for general R-modules, so I would be misspeaking if I claimed it in general. In the case of a vector space, here's why it holds: suppose rv equals the zero vector. If r equals zero, we're done, quite obviously. If r is not zero, then since we're in a field, we can divide by it: multiply both sides by 1/r, and you end up with v = (1/r) · 0, which by the second property equals the zero vector. So in either case, r = 0 or v = 0. That works for vector spaces because you can divide by nonzero scalars, but in a general R-module, this statement is actually not true.
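A standard counterexample for general R-modules (a concrete check, not something proved in the lecture itself) is ℤ/6ℤ viewed as a ℤ-module: a nonzero integer scalar times a nonzero residue can still be the zero vector.

```python
# Z/6Z as a Z-module: scalars are integers, vectors are residues mod 6.
def act(r, v):
    # The integer r acts on the residue v by repeated addition, i.e. r*v mod 6.
    return (r * v) % 6

r, v = 2, 3            # neither the scalar nor the vector is zero
assert act(r, v) == 0  # yet r.v is the zero vector: the middle property fails
```

The failure traces back to the fact that ℤ is not a field: the scalar 2 has no inverse to divide by, so the vector-space argument above breaks down.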
So I have to be cautious about that before we end this video.