This lecture is part of an online course on commutative algebra, and it will just be a review of rings, ideals and modules, so we're not introducing anything particularly difficult or exciting in it. Let's recall the definition of a ring. The reason I'm going to recall the definition, in spite of the fact that everybody should know what it is, is that there are in fact four inequivalent definitions of a ring floating around in the literature, and I just want to be clear about which definition we're using. So if you've got a ring R, it's got two operations, addition and multiplication. It's an abelian group under addition, with identity element 0. Multiplication is associative, (ab)c = a(bc), and distributive over addition on the left and the right: a(b + c) = ab + ac and (a + b)c = ac + bc. Everybody agrees that a ring satisfies these conditions, but people disagree on two further properties: whether the ring is commutative, which means that ab = ba, and whether or not it has an identity, which is an element 1 with 1·a = a = a·1. So I'll say a little bit about these two properties. First of all, most of the rings in this course are going to be commutative rings; it's a course on commutative algebra, after all. But before dismissing non-commutative rings completely, I just wanted to give a few examples of them. First of all, an obvious example is matrix rings, denoted by M_n(R), the n-by-n matrices over a ring R; for example, you might have 2-by-2 matrices with coefficients in the integers. Another famous example is the ring of quaternions, with elements a + bi + cj + dk, where a, b, c and d are real and i^2 = j^2 = k^2 = -1, ij = -ji = k, and three other relations that I'm too lazy to write out. Then we have group rings.
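As a quick sanity check, the quaternion relations above can be encoded directly. This is a minimal sketch (the 4-tuple representation and function names are my own, not from the lecture):

```python
# A minimal sketch of the quaternions as 4-tuples (a, b, c, d) = a + bi + cj + dk.
# The formula below encodes i^2 = j^2 = k^2 = -1, ij = -ji = k, and friends.

def quat_mul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i part
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j part
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k part
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
```

The three relations the lecturer was too lazy to write out (jk = -kj = i, ki = -ik = j, and the cyclic variants) fall out of the same formula.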
So here, if G is a group, we can form a group ring Z[G], which has a basis of elements [g] for g in the group G, and the ring multiplication is defined by [g][h] = [gh]. Okay, that looks kind of stupid, but what's going on here is that gh on the right is the group product of g and h, while [g][h] on the left is the ring product. The reason it looks stupid is that we're using essentially the same symbol for the ring product and the group product, because on basis elements they coincide. So this means you can turn any group into a ring, and in some sense you can turn a lot of group theory into ring theory. Next, an example that is actually not too far from commutative rings: rings of differential operators. A differential operator might be something like a sum of terms a_{ij} x^i (d/dx)^j. You can think of this as an operator acting on, say, real functions on the line; let's take the a_{ij} to be real. You can see that these operators actually form a ring, because you can add them and multiply them. And it's not quite commutative, because (d/dx)·x - x·(d/dx) = 1. You've got to be a bit careful here, because (d/dx)·x is not the derivative of x; it is the operator "multiply by x" composed with the operator "differentiate with respect to x". So what this means is that if you apply both sides to a function f, you get (d/dx)(xf) - x(df/dx) = f, which is just Leibniz's rule. Now, if you look at this equation, what it says is that AB - BA is simpler than A and B; here I'm taking A to be d/dx and B to be x. In a commutative ring you would say AB - BA is zero, which is certainly simpler than A and B.
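The relation (d/dx)·x - x·(d/dx) = 1 can be checked mechanically on polynomials. A minimal sketch, with polynomials stored as coefficient lists (a representation chosen here for illustration):

```python
# Check the Weyl-algebra relation (d/dx)∘x − x∘(d/dx) = identity, on
# polynomials stored as coefficient lists [c0, c1, ...] meaning c0 + c1*x + ...

def deriv(p):
    # d/dx: c_n x^n -> n c_n x^(n-1)
    return [n * c for n, c in enumerate(p)][1:] or [0]

def times_x(p):
    # multiply by x: shift every coefficient up one degree
    return [0] + p

def commutator(p):
    # (d/dx)(x·p) − x·(dp/dx); should return p itself
    left = deriv(times_x(p))
    right = times_x(deriv(p))
    n = max(len(left), len(right))
    left += [0] * (n - len(left))
    right += [0] * (n - len(right))
    return [a - b for a, b in zip(left, right)]
```

Applying `commutator` to any polynomial returns that polynomial, which is exactly Leibniz's rule in disguise.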
In a ring like the ring of differential operators, where AB - BA isn't zero but is in some sense simpler than A and B, the ring is actually not too far from commutative, and quite often techniques of commutative ring theory will also apply. In fact, we will probably have an example of this later in the course, if I get round to covering Bernstein–Sato polynomials. Well-known examples of commutative rings are polynomial rings k[x, y], which have a basis of monomials x^i y^j. You can also form non-commutative polynomial rings, where we adjoin x and y to k using some sort of brackets; people differ on what sort of bracket to use, and I'll use angle brackets, k⟨x, y⟩. Here you think of the elements as polynomials where xy is not equal to yx, and a basis consists of things like 1, x, y, x^2, xy, yx, y^2, and so on, where now xy and yx are different: the basis consists of all words in x and y. Other examples I'll just mention: if you've done Lie algebras, then the universal enveloping algebra of a Lie algebra is another important non-commutative ring, and Clifford algebras are more examples, and so on. So now let's move on to the other question about rings, which is: do rings have an identity? In this course, the answer will be yes; rings will always have an identity. But before doing that, I just want to say a little bit about rings without an identity and why on earth you might wish to consider them. First of all, if we've got any ring R without an identity, then we can form a new ring by taking the direct sum Z ⊕ R and forcing the element 1 of Z to be an identity. You can make this into a ring in the obvious way, and it has an identity. So you don't really lose much by assuming all rings have an identity. So why don't people assume all rings have identities?
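The construction Z ⊕ R that forces in an identity can be sketched concretely. A minimal sketch, assuming the usual multiplication rule (m, r)(n, s) = (mn, ms + nr + rs), with R taken to be the even integers 2Z for illustration:

```python
# Adjoining an identity: Z ⊕ R becomes a unital ring under the rule
# (m, r)·(n, s) = (mn, m·s + n·r + r·s). Here the second coordinate lives in
# R = 2Z (the even integers), a ring without an identity of its own.

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    m, r = p
    n, s = q
    return (m * n, m * s + n * r + r * s)

one = (1, 0)   # the forced identity element
```

The point of the rule is that if R had an identity e, the pair (m, r) would behave like m·e + r; the formula is just what expanding (m·e + r)(n·e + s) forces.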
Well, the answer usually comes from analysis. Suppose we form the group ring of, say, a finite abelian group G. Its elements can be considered as functions from the group to, say, the reals. And we multiply two elements of the group ring, considered as functions from G to the reals, by taking a sort of convolution: (f∗g)(b) is the sum over a in G of f(a)g(b - a). Here I'm taking the group to be abelian for simplicity, although this isn't really necessary. Now let's try to do this for G the reals. How are we going to make sense of this? Well, we could try just saying that f is a function which is zero for all but a finite number of reals, which would be kind of stupid; it's not really taking the topology of the reals into account. So we can try making f and g continuous functions. But if we make them continuous, this sum will usually not make sense, so let's replace it by an integral: we define (f∗g)(b) to be the integral over a in the reals of f(a)g(b - a). This is the famous convolution of f and g, which is really just the product in a group algebra, except it's not quite a group algebra because we're taking topology into account. Well, there's a bit of a problem here, because this integral doesn't converge in general. When does it converge? It converges if f and g are continuous and, let's say, have compact support. Then the integral converges and we've defined a nice ring. There's only one slight problem with it: this ring doesn't have an identity element, because the identity would have to be a function sort of concentrated at the origin (a delta function), and that just isn't a continuous function. So analysts sometimes like to allow rings not to have identities, because sometimes they naturally don't. There's another typical example.
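For a finite cyclic group the convolution formula above is easy to compute directly. A minimal sketch for G = Z/4, with group-ring elements stored as lists of real values (a representation chosen here for illustration):

```python
# The group-ring product as convolution, for the finite cyclic group Z/n.
# An element of the group ring is a function Z/n -> R, stored as a list, and
# (f*g)(b) = sum over a of f(a) g(b - a), with b - a taken mod n.

def convolve(f, g):
    n = len(f)
    return [sum(f[a] * g[(b - a) % n] for a in range(n)) for b in range(n)]

# The delta function at 0 is the identity of this ring — exactly the element
# that has no continuous analogue when the group is the reals.
delta = [1, 0, 0, 0]
```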
So suppose X is a topological space, and let's make it locally compact. Let's look at the algebra C_0(X) of all continuous functions on X that vanish at infinity. This is another space analysts are quite interested in, and again it doesn't have an identity, because the constant function 1 doesn't vanish at infinity; it has no identity unless X is compact. Vanishing at infinity means that for any epsilon, the function is less than epsilon outside some compact set; so if X is compact, then everything vanishes at infinity. Roughly speaking, rings without 1 kind of correspond to locally compact spaces, and rings with an identity kind of correspond to compact spaces. So from the point of view of an analyst, demanding that all rings have an identity element 1 is kind of like demanding that all locally compact spaces should be compact, which is really stupid. And there's an operation going from rings without 1 to rings with 1. Well, earlier I said you add a copy of the integers to the ring, but since we're doing analysis, let's add the real numbers R instead. (I'm sorry, I'm using R both for the reals and for a ring, but you'll have to put up with that.) And this corresponds to taking a one-point compactification: if you've got a locally compact space X, then the ring of continuous functions on X together with a point at infinity is just R ⊕ C_0(X). This ring has an identity, and C_0(X) doesn't. So you see that the operation of adding an identity to a ring corresponds geometrically to taking the one-point compactification of a locally compact space. And there's a kind of mildly entertaining example illustrating this. Suppose we take X to be the half-open interval [0, 1), and look at the function space C_0(X).
So C_0(X) is the continuous functions on X vanishing at infinity, which here means they tend to zero as you approach 1. Now I can take the one-point compactification: I add a point at infinity, which in this case is really just the point 1. And we see that adjoining a unit to this ring turns it into the ring of all continuous functions on the closed interval [0, 1]. Well, I can also do the same thing with two copies of X. It's kind of obvious what the product of two copies of a ring is, and the product C_0(X) × C_0(X) corresponds to the disjoint union of two copies of the half-open interval (sorry, on the space side it's the union of X and X, not the product). Now if I compactify this union, joining the two open ends at a single point at infinity, you might think of the result as the closed interval [0, 2]. But the closed interval [0, 2] is homeomorphic to the closed interval [0, 1]. So if I take the ring C_0(X) × C_0(X), the product of two copies of C_0(X), and add R to it, the result is actually isomorphic to R plus just one copy of C_0(X). However, if I take three copies, this is not the same, because that corresponds to taking three copies of the half-open interval and compactifying, in other words joining them all at one point, and now we've got a quite different topological space. So you see that although we can turn any ring into a ring with identity just by forcing in an identity, this does actually lose information: we've got two different rings, but if you add an identity to them, you get the same ring. (Here I'm assuming that all these rings carry some sort of topology from which you can reconstruct the original space.) There are some other things you've got to be a little bit careful about with rings with identities. So recall we have homomorphisms of rings.
A homomorphism is just a map f from a ring R to a ring S that preserves addition and multiplication. And there's one other really important condition: it also has to preserve the identity, f(1) = 1. It's easy to forget about this, because if you're using rings without identities, then you obviously don't have this condition. If you're working with rings with identities, then a homomorphism isn't the same thing as it would be for rings without identities. For example, the map from Z to Z taking every n to 0 is a homomorphism if you're the sort of person who thinks rings don't have identities, but isn't a homomorphism if you think rings do have identities. So you have to be a little careful to remember this extra condition. Now I'll just review ideals. So, an ideal of a ring R; here I'm going to take R to be commutative and with an identity from now on. An ideal is just the kernel of a homomorphism from R to some ring S. And remember, there's an equivalent definition: calling the ideal I, it means that if a and b are in I then a ± b is in I, and furthermore a·r is in I for a in I and r in R. You've got to be careful to note that we're allowing r to be an arbitrary element of the ring: we're not just insisting that I should be closed under multiplication, we're insisting that it should be closed under multiplication by all elements of the ring R. For example, the integers Z are a subset of Q; Z is a subring, closed under multiplication, but not an ideal, because if you take an integer and multiply it by an arbitrary rational number, that's not necessarily an integer. So you've got to distinguish between subrings and ideals: ideals satisfy a rather stronger condition. Incidentally, ideals are further examples of rings without an identity element: they satisfy all the axioms of a ring except that they don't have an identity.
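The difference between the subring condition and the ideal condition can be illustrated concretely: 2Z inside Z passes the ideal test, while Z inside Q fails it. A minimal sketch (the helper names are mine):

```python
# Contrasting the subring and ideal conditions. 2Z is an ideal of Z: it's
# closed under multiplication by ANY integer. Z is a subring of Q but not an
# ideal: multiplying an integer by an arbitrary rational can leave Z.
from fractions import Fraction

def in_2Z(x):
    # membership in the ideal 2Z
    return x % 2 == 0

def in_Z(x):
    # membership in the subring Z, for x a rational number
    return Fraction(x).denominator == 1
```

Checking `in_2Z(a * r)` for a in 2Z and arbitrary integers r always succeeds, while `in_Z(3 * Fraction(1, 2))` fails, which is exactly the distinction in the text.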
And for this reason, some people even in algebra used to allow rings not to have an identity. Some typical examples of ideals and their quotients are as follows. First, Z/nZ, the integers modulo n; here nZ means all integer multiples of n, and that's obviously an ideal. Another example: take the ring R[x] of polynomials over the reals and quotient out by the ideal (x^2 + 1), meaning all elements of the form p·(x^2 + 1) with p in the ring, which is always an ideal. The quotient R[x]/(x^2 + 1) is the well-known construction of the complex numbers. A third typical example: take a ring of polynomials in two variables, k[x, y], and quotient out by all multiples of y^2 - x^3 + x. As we pointed out in the first lecture, this ideal is just the set of all polynomial functions vanishing on the curve y^2 = x^3 - x, which looks something like this. So we can think of the quotient ring as something like the polynomial functions on this curve. That's an important application of quotient rings: to find the functions on some curve or other more complicated algebraic subset. Finally, we'll just review modules. A module over a ring is like a vector space over a field. In other words, M is a module over R means we just copy the definition of a vector space: you've got a map from R × M to M, usually written (r, m) ↦ rm, satisfying the obvious rules (r_1 r_2)m = r_1(r_2 m), together with left and right distributivity, r(m_1 + m_2) = rm_1 + rm_2 and (r_1 + r_2)m = r_1 m + r_2 m, which are not very exciting. And the one rule you've got to remember is that 1·m = m. Again, people who work with rings without identity have a bad habit of leaving this axiom out altogether; fair enough if you don't have an identity, I guess, since you can't state it. But remember that we're working with rings with identity, and we do have this property for modules.
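The construction R[x]/(x^2 + 1) can be made concrete: represent a coset by the pair (a, b) standing for a + bx, and reduce x^2 to -1 whenever it appears. The resulting multiplication rule is exactly that of the complex numbers (a minimal sketch; the pair representation is mine):

```python
# Multiplication in R[x]/(x^2 + 1): a class is a pair (a, b) meaning a + b·x.
# (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, and x^2 ≡ −1 in the quotient,
# which is precisely how complex numbers multiply.

def qmul(p, q):
    a, b = p
    c, d = q
    return (a*c - b*d, a*d + b*c)
```

For instance the class of x squares to -1, playing the role of the imaginary unit.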
People used to do commutative algebra using mostly ideals; a century ago, people didn't use modules very much. But modules are actually in some ways much more flexible than ideals. So let me just give you some examples of modules. There are lots of special cases of modules, all of which were discovered before the general concept of a module was discovered. First of all, Z-modules, which are just modules over the ring of integers Z, are of course just the same as abelian groups, because you can multiply any element of an abelian group by an integer, obviously. And one of the things you try to do in commutative algebra is extend theorems about abelian groups to modules over rings. For example, there's a theorem that says finite abelian groups can be written as a sum of cyclic groups; that gives you a structure theorem for abelian groups, and we would like a similar theorem for modules. Well, there isn't really one, but there's a very weak generalization of it called the Lasker–Noether theorem, which for abelian groups is quite similar to the structure theorem. So one thing we want to do in commutative algebra is generalize theorems about abelian groups to theorems about modules. Another obvious example: a K-module, for K a field, is just a vector space, which is not very surprising because we copied the definition of a vector space to define modules. What about a module over the ring of polynomials K[x]? Well, if you think about it, this is just the same as a vector space with a linear transformation. A module over K[x] is first of all a module over the field K, so it's a vector space; and then it has to be acted on by all polynomials. But it's enough to say what x does, and x just has to act by a linear transformation. So the whole theory of linear transformations is really the same as the theory of modules over the ring of polynomials over a field. Next, we have submodules of R; first of all, R is obviously a module over itself.
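The dictionary "K[x]-module = vector space plus a linear transformation T" can be sketched directly: a polynomial p acts on a vector v as p(T)v. A minimal illustration with matrices as nested lists (the representation and names are mine):

```python
# A K[x]-module structure on a vector space: fix a linear map T, then a
# polynomial p = [c0, c1, ...] acts on a vector v as p(T)·v = Σ c_i T^i v.

def mat_vec(T, v):
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

def act(p, T, v):
    result = [0] * len(v)
    power = v[:]                    # T^0 · v
    for c in p:
        result = [r + c * w for r, w in zip(result, power)]
        power = mat_vec(T, power)   # advance to the next power of T applied to v
    return result
```

The module axioms ((p·q)v = p(q·v) and so on) hold because matrix powers commute with each other, which is the content of "it's enough to say what x does".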
And secondly, it's pretty obvious what a submodule should be: a subset that's a module under the induced operations. And a submodule of R is just an ideal. So we have four different names for things that are really just special sorts of modules. Incidentally, the name "module" comes from the fact that an old name for an ideal used to be a "modular system"; if you remember, Macaulay's book was called The Algebraic Theory of Modular Systems, and that was kind of how the term turned into the word "module". The other thing you can do with an ideal is that you can not only consider the ideal I as a module, you can also consider R modulo the ideal as another module. This is a quotient ring, and you notice that the quotient ring R/I is still a module over R. So there are five different special constructions, all of which turn out to be just modules. Notice that you can reconstruct I from the module M = R/I: I is the annihilator of M, which is just the set of elements r of R such that rm = 0 for all m in M. So in some sense ideals correspond to a special collection of modules: they correspond to the modules of the form R/I, which you can think of as the modules generated by a single element. So modules are a sort of generalization of ideals, because you can turn every ideal into a module by taking the quotient. You can also turn it into a module by considering the ideal itself as a module, but that usually turns out to be less interesting. For example, the module nZ inside the integers is just isomorphic to the integers as a module, but the module Z/nZ is a more interesting thing you can do with it, and it isn't isomorphic to the integers. And these days we generally prefer working with modules rather than ideals, because modules are much more flexible. For example, suppose you've got a submodule M of another module N; then you can form a quotient module N/M.
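Reconstructing the ideal as an annihilator is easy to check for M = Z/nZ, whose annihilator in Z is exactly nZ. A minimal sketch (the helper name is mine):

```python
# Reconstructing an ideal from a module: for M = Z/nZ, the annihilator
# {r in Z : r·m = 0 for all m in M} is exactly the ideal nZ.

def annihilates(r, n):
    # does r kill every element of Z/nZ?
    return all((r * m) % n == 0 for m in range(n))
```

Scanning a range of integers r recovers precisely the multiples of n, i.e. the original ideal.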
Whereas if you've got an ideal I contained in another ideal J, the quotient J/I is not an ideal. So working with modules rather than ideals just makes things rather easier. Finally, I'll finish by mentioning a very close analogy between groups and rings. A group G can act on a set S; a ring R, on the other hand, can act on a module M. (By the way, if you're doing non-commutative rings, then the theory of modules and ideals becomes a little bit more complicated, because you can have left ideals and left modules, with the ring acting on the left, or right ideals and right modules, with the ring acting on the right, or two-sided ideals and modules. This is like saying groups can act on things on the left or on the right. But we don't need to worry about that, because all our rings are going to be commutative.) If you've got a group G, you can take the quotient G/N where N is a normal subgroup; similarly, we can take the quotient R/I where I is an ideal. So ideals are sort of analogous to normal subgroups. And if you've got a group G, you can turn it into a ring by forming the group ring Z[G]. And if you've got a set S acted on by G, then we can form the module which has the elements of S as a basis over Z, and this is fairly obviously a module over the group ring Z[G]. So roughly speaking, an awful lot of constructions for groups have analogous constructions for rings, and you can think of rings as being in some sense a sort of generalization of groups. Okay, that's all for the quick review of rings, ideals and modules. Next lecture I want to start with something a little bit more interesting: we're going to give a sort of historical introduction to commutative algebra by looking at some examples of rings of invariants, which is in some sense where serious commutative algebra first started.
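The last construction, turning a G-set S into a module over the group ring, can be sketched for G = Z/3 acting on a 3-element set by rotation (the dict/list representations are mine, not from the lecture):

```python
# The permutation module: a finite group G acting on a set S gives a
# Z[G]-module with basis S. Here G = Z/3 acts on S = {0, 1, 2} by rotation;
# a group-ring element is a dict {g: coefficient}, and a module element is a
# list of coefficients indexed by S.

def act(ring_elt, v):
    # (Σ c_g [g]) · v, where g sends basis element s to (s + g) mod 3
    result = [0, 0, 0]
    for g, c in ring_elt.items():
        for s, coeff in enumerate(v):
            result[(s + g) % 3] += c * coeff
    return result
```

Acting twice by the rotation [1] agrees with acting once by [2], which is the compatibility making this a module over Z[G] rather than just an abelian group.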