system and reproduce the effects of rotations at the classical level. One of the things we found is that such operators, in their parameterized axis-angle form, have the form of an exponential, in which the axis and angle appear in the exponent. The axis is dotted into a vector of operators J, a vector of Hermitian operators that have to satisfy the standard angular momentum commutation relations, which I've listed here. In fact, this is our definition of the angular momentum of the system: it is, in effect, the generator of rotations. This is the general definition of angular momentum that applies to any system. Now, the strategy for finding the unitary rotation operators that we're going to follow is to first look for the angular momentum, that is to say, a vector of Hermitian operators acting in our system that satisfies the angular momentum commutation relations. If we find such operators, then we can exponentiate them like this to get unitary rotation operators, and then we can explore their physical consequences to make sure they make sense as rotations. We did this explicitly last time in the case of the spin-one-half system. We made a guess, basically, that the angular momentum is ħ/2 times σ. It's a good guess because it causes the desired commutation relations to be satisfied. From that, we can construct rotation operators for spin-one-half systems, which are written out here in the form of matrices that make manifest some of their properties. Now, the strategy for today is to address this problem in more generality. That is to say, we're going to imagine that we have some Hilbert space for some quantum mechanical system. We don't have to say very much about what it is right now, but we'll assume that there exists on this space a vector of Hermitian operators, with components Jx, Jy, and Jz, that satisfy the standard angular momentum commutation relations.
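(An editorial aside, not part of the lecture: the claim that exponentiating the axis dotted into J gives a unitary rotation operator can be checked numerically in the spin-one-half case, where J = σ/2 and the exponential has the closed form cos(θ/2)·1 − i·sin(θ/2)·(n·σ). This is a sketch assuming Python with NumPy; the function name `rotation` is mine.)

```python
import numpy as np

# Spin-1/2 rotation operator exp(-i theta n.J) with J = sigma/2 (hbar = 1).
# For Pauli matrices this exponential has the closed form
#   cos(theta/2) I - i sin(theta/2) (n . sigma).
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def rotation(theta, n):
    """Rotation by angle theta about unit axis n = (nx, ny, nz)."""
    ns = sum(c * s for c, s in zip(n, sigma))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

R = rotation(0.7, (0, 0, 1))                  # a rotation about the z axis
assert np.allclose(R @ R.conj().T, np.eye(2))  # it is unitary
# a 2*pi rotation is -1 on a spin-1/2 system, a famous property
assert np.allclose(rotation(2 * np.pi, (0, 0, 1)), -np.eye(2))
```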
Then what we're going to do is see what can be said about those operators. What can we say? I mean things like: what is the spectrum of the various complete sets of commuting observables that can be constructed out of the angular momentum operators? What are the matrix elements of the rotation operators, and issues of that sort? This is an approach toward treating the general problem of angular momentum for any system, not just spin one-half, the special case that we discussed last time. All right, so to recapitulate very briefly, we're going to focus on the angular momentum commutation relations for a vector of Hermitian operators, which we assume exists on some Hilbert space, and we're going to explore the consequences. At this point we don't say very much about what that Hilbert space is. By the way, one more point to make, implicit in the material presented in the last couple of lectures, is that the angular momentum commutation relations really reflect the geometry of rotations in physical space, and that's why they're so general. That's why they apply across the board to all kinds of systems, simple as well as much more complicated ones. This comes about because they're a reflection of the commutation relations of infinitesimal rotations at the classical level. All right. So let's begin. Just to repeat on this board, we have these basic commutation relations: the commutator [J_i, J_j] = i ε_ijk J_k. For the purposes of today's lecture, I'll set ħ = 1 because it saves some writing. Angular momentum has dimensions of ħ, so it's easy to restore the factors of ħ if you want to. Now, the first thing we're going to do is define a new operator, J², defined in the usual way as the sum of the squares of the components of the vector J, like this.
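(Another editorial aside, assuming Python with NumPy: for spin one-half with J = σ/2 and ħ = 1, the basic commutation relations and the value of J² can be verified directly. The variable names are mine.)

```python
import numpy as np

# Spin-1/2 components J_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]], complex) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

def comm(a, b):
    """Commutator [a, b]."""
    return a @ b - b @ a

# [J_i, J_j] = i epsilon_ijk J_k, checked for the three cyclic pairs
assert np.allclose(comm(sx, sy), 1j * sz)
assert np.allclose(comm(sy, sz), 1j * sx)
assert np.allclose(comm(sz, sx), 1j * sy)

# J^2 is the sum of squares of the components; here it is (3/4) I,
# which is j(j+1) with j = 1/2, and manifestly non-negative
J2 = sx @ sx + sy @ sy + sz @ sz
assert np.allclose(J2, 0.75 * np.eye(2))
```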
And we'll notice that J² is a non-negative definite operator, because it's a sum of squares of Hermitian operators, so its expectation value with respect to any state is non-negative. All right. Next, let's notice the commutation relations between J² and the components of J: the commutator of J² with any of the components of J is zero. This follows very easily from the J commutation relations, so I won't prove it; you've probably seen it before anyway. The point is that J² commutes with all three components of J. Now, the vector operator J is essentially the only operator we're working with, so one can say that the J² operator commutes with everything in sight. An operator that does this is usually called a Casimir operator, and sometimes I'll use that term to refer to the operator J². It commutes with the vector J as well as with any function of J, such as the rotation operators, or other functions as well. All right, so that's the definition of J² and one of its commutators. Next, let me introduce some further operators. These are the raising and lowering operators J₊ and J₋, defined as J₁ ± i J₂. Sometimes I'll write J₁, J₂, J₃ and sometimes Jx, Jy, Jz, depending on how I'm feeling, but these are the components of the angular momentum. One can then work out various commutation relations and algebraic relations involving these raising and lowering operators. The most important commutation relation is this one: the commutator of J₃ with J₊ or J₋ is +J₊ or −J₋ respectively. This easily follows from the commutator up here. Then there are various algebraic relations of importance, of which I'll mention a couple. J₋ times J₊ turns out to be J² − J₃(J₃ + 1), and J₊ times J₋ is equal to J² − J₃(J₃ − 1).
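(An editorial check, not from the lecture: the commutator [J₃, J±] = ±J± and the two algebraic identities just stated can be verified numerically in the spin-one-half representation, assuming NumPy.)

```python
import numpy as np

# Spin-1/2 representation (hbar = 1): J = sigma / 2.
Jx = np.array([[0, 1], [1, 0]], complex) / 2
Jy = np.array([[0, -1j], [1j, 0]], complex) / 2
J3 = np.array([[1, 0], [0, -1]], complex) / 2
Jp, Jm = Jx + 1j * Jy, Jx - 1j * Jy          # raising and lowering operators
J2 = Jx @ Jx + Jy @ Jy + J3 @ J3
I = np.eye(2)

assert np.allclose(J3 @ Jp - Jp @ J3, +Jp)       # [J3, J+] = +J+
assert np.allclose(J3 @ Jm - Jm @ J3, -Jm)       # [J3, J-] = -J-
assert np.allclose(Jm @ Jp, J2 - J3 @ (J3 + I))  # J- J+ = J^2 - J3(J3 + 1)
assert np.allclose(Jp @ Jm, J2 - J3 @ (J3 - I))  # J+ J- = J^2 - J3(J3 - 1)
```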
These identities follow easily from the definitions here, just working through the commutators. These are interesting operators because, as you see, each product is a non-negative definite operator: it's an operator multiplied by its Hermitian conjugate, in the two different orders. We'll use this in just a moment. All right. Now, it turns out we can use these properties to derive a great deal of information about the spectrum of certain operators. In particular, when we talk about the spectrum of operators that commute, we really want to think about complete sets of commuting observables. From the basic commutation relation here, we see that J², the Casimir operator, commutes with any of the components of J. However, the components of J don't commute with each other. So it is possible to have simultaneous eigenstates of J² and one of the components of J, but not all three of them, and not even two of them, just one. Now, on Earth the component of J that we use for this purpose is J₃. Probably on Mars they use Jx or something, but Jz is what we use here. So we're going to be thinking about a complete set of commuting observables that consists of J² and Jz, or J₃ as I'll call it here. Now, whether this is really a complete set depends on whether other observables are needed to resolve degeneracies, and I'll come back to that in a moment. But right now, let's just notice that since these operators commute, they possess simultaneous eigenstates. Let's write these eigenstates as |a, m⟩, and let me explain what I mean. J² acting on this ket brings out the eigenvalue a; so a here just stands for the eigenvalue of the operator J². And J₃ acts on |a, m⟩ and brings out the value m; so m is the notation for the eigenvalue of J₃.
In writing these equations down, at this point we want to play dumb and pretend that we didn't learn anything in undergraduate courses. So we don't know anything about the values of a or m, except of course for the fact that they have to be real, because J² and J₃ are Hermitian operators. But as far as we know at this point, they could be positive, negative, fractions, irrational numbers, and so on. Let's adopt that point of view. There is, however, something else we can say right away, which is that a ≥ 0, and that's because J² is a non-negative definite operator, so its eigenvalues have to be non-negative as well. Now, a couple of comments here. I'm assuming, for the sake of simplicity, that these states are non-degenerate. If this were true, it would mean that J² and J₃ would by themselves form a complete set. I'll come back later to the case in which they do not by themselves form a complete set, in which case you need further operators to resolve the degeneracies. So let's make this assumption first. Let's also assume these states are normalized, so the scalar product of |a, m⟩ with itself is equal to one. All right. Now we can obtain quite a bit of information about the allowed values of a and m, and the relations among them, by working with the various identities over here. In particular, let's start with the two identities involving J₋J₊ and J₊J₋. Let's sandwich the state |a, m⟩ around the first one. If I do this, I put ⟨a, m| on the left and |a, m⟩ on the right of J₋J₊, and by the identity this is the expectation of J² − J₃(J₃ + 1). But J² brings out the eigenvalue a, and then J₃ brings out the eigenvalue m. So this is the same thing as a − m(m + 1). Now notice that this is also the square of a vector.
Namely, it's the vector J₊ acting on |a, m⟩, that whole vector squared, because J₋ is the Hermitian conjugate of J₊: the quantity ⟨a, m| J₋J₊ |a, m⟩ = a − m(m + 1) is the squared norm of the state J₊|a, m⟩. That J₋ is the Hermitian conjugate of J₊ follows from the definitions, since J₁ and J₂ are Hermitian; if you take the Hermitian conjugate of either J₊ or J₋, it just changes the sign of the i, so they're Hermitian conjugates of one another. And since this is the square of a vector, it has to be greater than or equal to zero. The result is that a ≥ m(m + 1). Now similarly, if we take the second equation, which involves J₊J₋, and form its expectation value, this is another case of a non-negative definite operator. It's the same thing as a − m(m − 1), and that's also greater than or equal to zero for the same kind of reason. And so we obtain the condition a ≥ m(m − 1). So those are two conditions on the eigenvalues of the two operators J² and J₃. [In answer to a student question: yes, in that last squared expression it is the J₊ operator acting on the state, and I should have written |a, m⟩ rather than |j, m⟩; I was jumping ahead.] All right. So there are some relations among the eigenvalues of these operators. We can combine these two inequalities into a single inequality by saying that a has to be greater than or equal to the maximum of the quantities m(m + 1) and m(m − 1). To understand this final inequality, let me make a graphical representation. So let's make a plot in which m is the independent variable, running between positive values here and negative values here.
If I plot the function m(m + 1), it's a parabola that goes through m = −1 and m = 0, like this, and goes on up. And if I plot the function m(m − 1), that's a parabola that goes through m = 0 and m = 1 and goes on up; it's the same parabola, just shifted, like this. So this is the function m(m − 1). But now, if I want the maximum of those two functions, I just take the two curves and, at each point, keep whichever one is higher. To do that I use my eraser in each case, like this, and then fix up the graph again. If I do this, I get a function made of two segments of parabolas (I didn't draw it too well, but it's symmetrical about the vertical axis here). So this is the function which is the maximum of those two quantities up there. Okay. Now a, which is the eigenvalue of J², is, as we've seen, a non-negative number, so I can draw a horizontal line across here at height a, like this. And you can see that the region in which a is greater than the maximum of these two functions lies between these two points here. Let me drop a vertical line down from each of them, like this. So let's take the m value which occurs here, on the right, and define it to be j. So this is the definition of j: it's the maximum value of m for a given value of a. And by symmetry, the line on the other side comes down at m = −j, like this. All right. Now, the maximum value of m is a function of a, so j is a function of a. And what exactly is that function? Well, it's pretty easy to see, because when m is equal to j, the height of the curve is j(j + 1), and that's the value of a. So what we get is that a = j(j + 1). Also, let's notice that since a is positive, you can see graphically that the value of j is going to be positive too.
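(An editorial aside: the graphical argument can be checked numerically. A small sketch, assuming Python, that for a = j(j+1) the inequality a ≥ max(m(m+1), m(m−1)) picks out exactly the range −j ≤ m ≤ j; the value j = 2 is just an arbitrary example of mine.)

```python
# For a fixed eigenvalue a, a value m is consistent with the two
# inequalities derived above iff a >= max(m(m+1), m(m-1)).
def allowed(a, m):
    return a >= max(m * (m + 1), m * (m - 1))

j = 2                      # hypothetical example value
a = j * (j + 1)            # the relation read off the graph
ms = [m / 2 for m in range(-12, 13)]   # scan m from -6 to 6 in half steps
# the allowed m values are exactly those with -j <= m <= j
assert [m for m in ms if allowed(a, m)] == [m for m in ms if -j <= m <= j]
```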
When a is 0, j is 0, and j moves out as a grows. So j is a non-negative function of a. Now, this motivates a change of notation. So here's what we'll do. Instead of using a for the eigenvalue of J², we'll switch over to using lowercase j. So let's replace a by j(j + 1), which is what it is, and let's replace the ket |a, m⟩ by the ket |j, m⟩, which is the same thing; it's just a different notation for the same thing. Effectively, we've labeled the eigenvalue of J² by the quantum number j, and the eigenvalue itself is j(j + 1). Note that the eigenvalue is not the same thing as the quantum number: the quantum number labels the state, and the eigenvalue is a function of the quantum number. All right. Anyway, if we do this, the relations we've worked out so far look like this: J² acting on |j, m⟩ gives j(j + 1) times |j, m⟩, and J₃ acting on |j, m⟩ brings out the value m, giving m times |j, m⟩. We still don't really know much about the allowed values of these quantities j and m; we'll look into that as we go, but this is what our basic eigenvalue-eigenstate equations look like. Now, one thing that does follow from this graph is that the allowed values of m lie between −j and +j, so we have an inequality now that says −j ≤ m ≤ +j. It doesn't say that either j or m has to have any particular value; we haven't said yet what values are allowed for j and m, but at least these inequalities are satisfied. So we have obtained inequalities on the quantum numbers by playing around with the relations here. Now, the next thing to do is to look at the raising and lowering operators. Let's take, for example, J₊, and let's let it act on one of those eigenstates |j, m⟩, like this.
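(An editorial aside, illustrating the action of J₊ on the |j, m⟩ eigenstates with the standard spin-1 matrices, basis ordered m = +1, 0, −1; these hard-coded matrices and variable names are mine, assuming NumPy.)

```python
import numpy as np

# Spin-1 matrices in the |1, m> basis ordered m = +1, 0, -1 (hbar = 1).
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.sqrt(2) * np.array([[0, 1, 0],
                            [0, 0, 1],
                            [0, 0, 0]], dtype=complex)

m0 = np.array([0, 1, 0], complex)   # the state |1, 0>
raised = Jp @ m0                    # J+ acting on it
# the result is an eigenstate of Jz with eigenvalue m = +1,
# i.e. J+ has bumped m up by one
assert np.allclose(Jz @ raised, 1 * raised)

top = np.array([1, 0, 0], complex)  # the stretched state |1, +1>
assert np.allclose(Jp @ top, 0)     # raising the top state gives zero
```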
Now, if we take J₊|j, m⟩ as a state in its own right and let J₃ act on it, we can use the commutator between J₃ and J₊, which is up there on the board at the top. This turns J₃J₊ into J₊J₃ + J₊ acting on |j, m⟩, and the J₃ is now adjacent to the |j, m⟩, so it brings out a value of m, and the result is (m + 1) times J₊ acting on |j, m⟩, like this. And so we have a little theorem here, if you want to call it that; it's really a simple theorem. It says that if the state J₊|j, m⟩ is not zero, then that same state J₊|j, m⟩ is an eigenstate of our two operators J² and J₃, with eigenvalues j(j + 1) and m + 1 respectively. In other words, we have to exclude the case J₊|j, m⟩ = 0, because then you just get 0 = 0 here. So if J₊|j, m⟩ is nonzero, then it's an eigenstate of J² and Jz; the j quantum number hasn't changed, it's the same as it was before, but the m quantum number has been raised by 1, which is why J₊ is called a raising operator. All right, now this little theorem raises the question of when J₊|j, m⟩ equals zero, so let's worry about that. A vector in quantum mechanics equals zero if and only if its square is zero, and the square of this state is ⟨j, m| J₋J₊ |j, m⟩. We worked that out from the line above: it's a − m(m + 1), now written as j(j + 1) − m(m + 1), which, by the way, can be factored into (j − m)(j + m + 1); check the factoring. So this whole thing equals zero if and only if one of the two factors is zero, that is, either m = j, or m = −j − 1 from the other factor. However, m cannot equal −j − 1, because we just concluded right here that m must lie between −j and +j, so this possibility is out.
And so the only way that J₊|j, m⟩ can vanish is m = j. So let me summarize the result: J₊ acting on |j, m⟩ is equal to zero if and only if m = j. Similarly, if we do it the other way, we find that J₋ acting on |j, m⟩ equals zero if and only if m = −j. Now, from these properties it follows that the difference between m and j, and also the difference between m and −j, must be an integer. Let's see, I'll need some space for that. You'll recognize that the basic logic here is very similar to what we did with Dirac's algebraic treatment of the harmonic oscillator, which is a little more complicated, but many of the basic strategies are the same. Actually, as it turns out, the angular momentum algebra can be mapped into a harmonic oscillator algebra, so if you just knew the harmonic oscillator you would be able to derive all of this. This is actually discussed in Sakurai's book; it goes under the name of the Schwinger oscillator formalism for angular momentum. I'm not going to lecture on it or expect you to know anything about it, but you should have read about it in Sakurai's book. Well, in any case, to continue with the theme today, the next step is this. Let me draw a line, and let me draw two points on the line: one where we have minus j, and one where we have plus j. And take some value of m somewhere in between; again, we don't ask why. The distance between m and +j must be an integer, which I'll call n₁. And the reason for that is this: suppose the difference between a given value of m and j were not an integer. Then by using raising operators, which are right here, and the theorem up there, which basically says that if m is not equal to j then you can raise it, with the eigenvalue raised by one, you can go up in integer steps,
marching on up. Now, if the difference between m and j is not an integer (assume for the sake of argument that it's not), then you hop right over j, which is the only value where the raising terminates, and you keep generating states beyond it, which is not allowed by the inequality here. So the only way to escape this is for the difference to be an integer: then when you reach m = j and you apply another factor of J₊, you get zero, so the series terminates at that point. Therefore, the difference on this side must be an integer, n₁, as I say. Now, by a very similar argument with the lowering operators, you show that the difference between m and −j is another integer, n₂, like this. So the total distance between −j and +j, which is equal to 2j, must be equal to the sum of the two integers, n₁ + n₂. These are two non-negative integers, so 2j has to be greater than or equal to zero, and the result, if I divide by two, is that j can only be an integer or a half-integer. So j can take on the values 0, 1/2, 1, 3/2, 2, and so on, like this. So by playing with the angular momentum commutation relations, we've effectively found the possible spectrum of the operator J² in terms of its quantum numbers: non-negative integers or half-integers. The argument I just made doesn't say, in any particular application, which of these integers or half-integers j actually takes on; it merely says that these are the only possibilities. Actually, a better way to write this would be to take the set of non-negative integers and half-integers and state that j must belong to this set; that's the clean way of saying it. So, just to give you an example: if you take the simple system of a spin-one-half particle, like an electron, which we talked about before, then there is only one value of j which occurs, which is j = 1/2; that's the spin of the electron. On the other hand, take the case of a central force wave function, in which J is identified with the orbital angular
momentum. The central force wave functions, I hope you know, look like this: they have quantum numbers n, l, m on them, and they describe a particle in three-dimensional space. The l here plays the role of the j, and l can only take on integer values. So in the case of orbital angular momentum you get the integers, they all occur, and you don't get any half-integers. So, like I say, the j values which occur depend on the problem; those are two illustrations of this, but these are the only values that are allowed. Now, if some j value occurs, then as far as the m values are concerned, they have to go from minus j to plus j, and they must do so in integer steps. So the m values that occur range over −j, −j + 1, up to +j, which is a total of 2j + 1 values. And if any one of these m values occurs, then all of them must occur, because you can create all the rest of the states from any one of them by applying raising and lowering operators. All right. So basically at this point we've determined quite a bit, as a matter of fact everything that can be determined from the angular momentum commutation relations, about the spectrum of the operators J² and Jz. On Mars they would do this with J² and Jx, and they would get the same answers, the same values. Okay, now just a bit about the physical or geometrical interpretation of this. The j is of course the quantum number of the operator J². J² is the square of a vector, so it's a rotational invariant; J² is telling you about the magnitude of the angular momentum, obviously. Whereas m is the quantum number of the operator J₃, or Jz; it's the component of the angular momentum along the z axis. So m is what is historically called the magnetic quantum number. It's the quantum number of a component of the angular momentum, and therefore it depends on the axis in question, or, to say it another way, it depends on the orientation of your system relative to the axes. But this is an important
geometrical or physical distinction to make between these two quantum numbers: one is independent of rotations or orientation, and the other one gives information about the orientation of the system. All right. For things that are invariant under rotations, for example the energy of an isolated system, which does not depend on how the system is oriented, the energy must be independent of the magnetic quantum number m; it can only depend on j. Now, next, let me say some things about the phase conventions for the raising and lowering operators. Let's start with the lowering operator. Let me make a table with j and m values like this, and let's list the states |j, m⟩ over here. So there's a maximum value of m, namely j, and the next one down is m = j − 1, and this comes down to m = −j; that's 2j + 1 states. I'll just draw the states as levels, like this. Let's take the state in which the m value has its maximum value; let's call this the stretched state. And this one at the bottom we're going to call the anti-stretched state; it's stretched in the opposite direction. Now let's look at J₋ here. J₋ acting on the state |j, m⟩, as we've seen, is proportional to the state |j, m − 1⟩: it is an eigenstate of J² and Jz with those quantum numbers, and since we're assuming that there's no degeneracy here, it has to be proportional to it. So there's some proportionality factor; let's call it k. If we now square both sides of this equation, we get ⟨j, m| J₊J₋ |j, m⟩ equals the absolute value of k squared times the square of that state, which we assume is normalized. On the other hand, we worked out what this expectation value is a moment ago: it's the same thing as (j + m)(j − m + 1). So k's magnitude can be determined by taking the square root of this quantity. Now what about the phase of this constant k? Well, the phase of the constant k is related to the phase conventions for these states. If k
has a nontrivial phase, you can absorb it into the phase convention for the state |j, m − 1⟩. So let's do that, in order to make the constant k real. If we do this, then we obtain an equation like this: J₋ acting on |j, m⟩ is the square root of (j + m)(j − m + 1) times |j, m − 1⟩, which I hope is familiar to you from your undergraduate courses. This is the basic lowering equation. But what I want to point out is that it's not just that; it's also a couple of other things. It has actually now established a phase convention for the state |j, m − 1⟩ in terms of some given phase convention for the state |j, m⟩. Another thing it does is make the matrix elements of the operator J₋ real. So if we use this rule for lowering states, and we take the stretched state and just choose an arbitrary phase convention for that, then this rule for lowering links the phase conventions of all the rest of the states to the one at the top. In effect, there is now just a single phase convention for this whole collection of 2j + 1 states, and this is the standard thing which is done in quantum mechanics, so that we have, as I said, real matrix elements of the operator J₋. Perhaps you recall that in the case of the harmonic oscillator, we established an arbitrary phase convention for the ground state, and then, by using raising operators, there were definite phase conventions for all the rest of the states; something similar is happening here. Now, once this phase convention has been established, it turns out you don't have any option for the phase convention of the J₊ operator. It turns out that J₊ acting on |j, m⟩ has to be the square root of (j − m)(j + m + 1) times the state |j, m + 1⟩. This actually follows from the fact that J₊ is the Hermitian conjugate of J₋. But in any case, this means that both J₊ and J₋ have real matrix elements, as you see here. All right, so that's the story of the raising and lowering
operators and their matrix elements. Now, when I started this analysis of the eigenvalues and eigenvectors of the operators J² and Jz, I said that for simplicity we would assume that the states |j, m⟩ were non-degenerate. Let's now turn to the question of what to do when there is a degeneracy. So if there is a degeneracy, it means that if we take the operators J² and Jz, which commute, and we try to find their eigenstates, we find that there is more than one linearly independent eigenstate. That means there is an eigenspace, a simultaneous eigenspace of these operators. Let's say the corresponding quantum numbers are j and m, and let's call the eigenspace E_jm for these particular quantum numbers. If there is a degeneracy, it means that the dimension of this space, a number which in general depends on j and m and which I'll call N_jm, is greater than one. All right, now, does this ever happen?
The answer is yes; it happens all over the place. I mentioned earlier the case of central force motion, where you've got wave functions that look like this, with n, l, and m on them. The l here stands for the j (l is the usual notation for orbital angular momentum in this case), so l and m stand for j and m. And so if you took a central force problem where you just diagonalized L² and Lz, you won't find a unique state; you'll find, in fact, an infinite collection of them, labeled by the radial quantum number. So in this case the N_lm would be infinity for a problem like this. Well, if you've got a degeneracy like this, then, in the general philosophy of constructing complete sets of commuting observables, we need to find another observable, or more than one as a matter of fact, to resolve the degeneracy. This means that for the states in E_jm we need to introduce an additional index; I'll call this additional index gamma, writing |γ, j, m⟩ like this, to create a set of unique eigenvectors which are also eigenvectors of J² and Jz. In this notation, gamma is an index that runs from 1 up to N_jm. In this discussion I'm not going to care very much about how gamma is chosen, because it just represents some way an orthonormal basis of states is chosen inside this eigenspace E_jm. Well, let's look at these spaces E_jm again. Let me draw a diagram. Let's make a table here with j and m, and then I'll draw the E_jm for the eigenvectors, like this. So we'll start with the stretched space E_jj, go to the next one, E_{j,j−1}, and come down to the anti-stretched one. I draw lines here which represent the eigenspaces, which have some dimensionality; the top one has dimensionality N_jj, which is some number. Now here's a convenient way of setting up these basis vectors. Let's suppose, for example, N_jj were equal to 5, just to have an example; that means there are 5 vectors in this space. Let's just arbitrarily choose an orthonormal set of 5 vectors inside this
stretched space E_jj. Now let's take those 5 vectors and apply the lowering operator J₋ to each of them. If we do this, this is going to create 5 more vectors in the next space down, which is E_{j,j−1}. So the next space down has to be big enough to hold 5 vectors; you can show the lowered vectors remain linearly independent, so its dimensionality N_{j,j−1} is greater than or equal to 5. That's the dimensionality facing the next one down. Now suppose the number were actually 6; that would mean there are 6 vectors in this space. If we take those 6 vectors and raise them back up with J₊, we get 6 linearly independent vectors in the space we just came from, and we just said there were only 5. So what you can see is that the dimensionality of this space has to be the same as the dimensionality of the space at the top: it's not just greater than or equal to 5, it's equal to 5, and so on all the way down; N_{j,−j} is also equal to 5. So the dimensionalities of these spaces are all the same, and in fact this number is independent of m, so it's better to write it as N_j: it depends only on the j value. This number N_j we'll call the multiplicity of the j value. So again, going back to the case of the spin-one-half system, which I have on the board here, there's only one j value which occurs, namely one-half, and it occurs only once, so we've got N_{1/2} = 1 and all the other N_j equal to 0. In the case of central force motion, since the orbital angular momentum has to be an integer, what we find is that the multiplicity of each integer angular momentum is infinity (we have a radial quantum number), whereas the multiplicity of the half-integer angular momenta is 0; they don't occur at all. In general you get different multiplicities. [Student: I think I got a little confused over the last one; we're talking about the dimensions of E_jm, and what is that space exactly?] It's the eigenspace of J² and Jz with eigenvalues j(j + 1) and m. And why would there be a degeneracy? Because J
squared and Jz might not form a complete set by themselves: if you diagonalize them and look for eigenvectors, you may find more than one linearly independent solution. If they do form a complete set by themselves, then the multiplicity is just 1; that's what happens with the spin one-half system, where you just have one j value. Question: but then you also said that for one-half it's 1 and for the rest it's 0? The first label is the j value and the second is the m value; one is the stretched case, the other the anti-stretched case. The argument I went through here shows that the order of the degeneracy, the number of linearly independent eigenvectors in an E_jm space, is actually independent of the m value. That's a way of saying it doesn't depend on the orientation of the system: the multiplicity is something independent of orientation.

All right, so anyway, that's the multiplicity of the j values. Now, for these states |gamma, j, m>, here's an algorithm for constructing them, in principle anyway. What we do is find the simultaneous eigenstates of J squared and Jz for the stretched case, in which the eigenvalue of m is equal to +j; that's the space E_{j,j}, and it has some dimension, namely N_j (just a single subscript j, because the dimension doesn't depend on the m value). Then, inside this space and in a perhaps arbitrary way, we choose an orthonormal basis of vectors, and we label those vectors by the third index gamma: we get vectors |gamma, j, j>, where gamma runs from 1 up to N_j. Then we apply lowering operators to them to lower the m value, giving |gamma, j, j-1>, and we keep lowering all the way down to |gamma, j, -j>; raising takes us back up through the same states.
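The lowering construction just described can be sketched numerically. Here is a minimal numpy sketch (my own illustration, not from the lecture), with hbar = 1 as in the lecture: it builds the Jz and J-plus/J-minus matrices for a single j from the square-root formula and walks the stretched state down through the m values. The function name `jm_matrices` is my own.

```python
import numpy as np

def jm_matrices(j):
    """Jz, J+, J- in the basis |j, m>, ordered m = j, j-1, ..., -j (hbar = 1)."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                       # m values, top to bottom
    Jz = np.diag(m)
    # "famous square root": <j, m+1| J+ |j, m> = sqrt((j - m)(j + m + 1))
    Jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), k=1)
    Jm = Jp.conj().T                             # J- is the Hermitian conjugate of J+
    return Jz, Jp, Jm

Jz, Jp, Jm = jm_matrices(1)
state = np.zeros(3)
state[0] = 1.0                                   # stretched state |j=1, m=+1>
for _ in range(2):
    state = Jm @ state                           # lowering: m -> m - 1
    state /= np.linalg.norm(state)
    print("m =", state @ Jz @ state)             # m = 0.0, then m = -1.0
```

A gamma label would simply ride along as a second array axis here, untouched by J+ and J-, which is the content of the construction above.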
In other words, by this construction the gamma index doesn't change when we apply raising and lowering operators, and neither does the j index; only the m index changes. This is a standard way of doing this, and if we do it we end up with what I'll call the standard angular momentum basis. It's a basis that looks like this: |gamma, j, m>, in which gamma runs from 1 up to N_j, the multiplicity of the j value; j itself belongs to the set 0, 1/2, 1, 3/2, and so on; and the m value runs from -j to +j in integer steps.

This is one of the main points of the presentation I just went through: the standard angular momentum basis exists for any system upon which rotation operators act. That certainly means any physical system which is isolated, because in an isolated system the energy doesn't depend on orientation and rotation operators act on it. Actually, the system doesn't even have to be isolated; the basis exists even if it's interacting. The point is that the standard angular momentum basis exists whenever you've got a system upon which rotation operators act, because then you have an angular momentum, a vector of operators, and the rest of it just follows from the commutation relations. So this applies, for example, to spin systems and to central force problems, but also to much more complicated problems like an aluminum atom with 13 electrons or a uranium nucleus with 238 protons and neutrons: there is a standard angular momentum basis for all of them. Question: is gamma just a label of 1 through 5? Yes, it's the label of an arbitrarily chosen basis in the stretched eigenspace. Question: why do you know that you only need one more quantum number? You might well need more than one; in that case gamma would represent a collection of indices for other observables. But before I introduce other observables, I can certainly say you can't
argue with this much: it's certainly possible just to choose an orthonormal basis in this subspace and label the basis vectors by gamma. Now, it may be convenient to make those basis vectors eigenvectors of further sets of operators. For example, if you include the orbital and spin degrees of freedom in the hydrogen atom, then you have a set of quantum numbers n, l, j and m_j, where j is the total angular momentum, orbital plus spin, and m_j runs from -j to +j. The pair of indices n, l is like the gamma I'm talking about here: it's all the rest of the indices that are necessary to specify a unique vector once j and m_j have been given. Does that help? Okay.

So these are our standard eigenvectors. Now, when studying the angular momentum operators (and other operators too, as we'll see), it's particularly convenient to use the standard angular momentum basis. For example, take the operator J squared and let it act on |gamma, j, m>. By definition |gamma, j, m> is an eigenstate of J squared with eigenvalue j(j+1). By the way, I didn't mention this but I should: the states of the standard angular momentum basis are orthonormal, so the inner product <gamma', j', m' | gamma, j, m> is just a product of Kronecker deltas, delta_{gamma' gamma} delta_{j' j} delta_{m' m}. So if I now multiply through by the bra <gamma', j', m'|, I get a matrix element with <gamma', j', m'| on the left, J squared in the center, and |gamma, j, m> on the right, and that's equal to the eigenvalue of J squared, which is j
(j+1), times that same product of Kronecker deltas, delta_{gamma' gamma} delta_{j' j} delta_{m' m}. Similarly, because these states are by construction eigenstates of Jz, we have the matrix element <gamma', j', m'| Jz |gamma, j, m> equal to m times the same product of deltas. Now what about, let's say, a raising operator? J+ acts on |gamma, j, m> on the right and brings out that famous square root. It doesn't change gamma, because that's how we constructed these states: by starting with the stretched states and lowering them with J+ and J-. So this is equal to our famous square root, which in this case is sqrt((j - m)(j + m + 1)). I call it the famous square root because I think in most undergraduate courses you were obliged to memorize these things, so I'm assuming they're familiar. Anyway, J+ changes the m value to m + 1, so the Kronecker deltas now look like this: still diagonal in the gammas, still diagonal in the j's, but now it's delta_{m', m+1}, because the result of acting on the ket is a state with magnetic quantum number m + 1. As for J-, its matrix elements follow easily: you get the other square root, sqrt((j + m)(j - m + 1)), or you can use the fact that J- is the Hermitian conjugate of J+, so as matrices you just take the transpose complex conjugate; and since the matrix elements are real, it's just the transpose. In any case, once you've got these matrix elements you can find the matrix elements of Jx and Jy, because Jx is the same thing as (J+ + J-)/2 and Jy is (J+ - J-)/2i (let me call them x and y instead of 1 and 2). There's an important lesson contained in this: by our conventions the matrix elements of J+ and J- are real, so you see the matrix elements of Jx are real.
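As a numerical check on these matrix elements, here is a small numpy sketch (my own, not from the lecture) for j = 1 with hbar = 1: it assembles Jx and Jy from J+ and J- and verifies the commutation relation, the J squared eigenvalue j(j+1), and the real/imaginary pattern just described.

```python
import numpy as np

j = 1
dim = int(round(2 * j)) + 1
m = j - np.arange(dim)                             # m = 1, 0, -1
Jz = np.diag(m).astype(complex)
# matrix elements of J+ from the famous square roots, diagonal in gamma and j
Jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), k=1).astype(complex)
Jminus = Jp.conj().T                               # Hermitian conjugate (here just the transpose)
Jx = (Jp + Jminus) / 2
Jy = (Jp - Jminus) / (2 * 1j)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(J2, j * (j + 1) * np.eye(dim)))        # True: J^2 = j(j+1) times the identity
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))           # True: [Jx, Jy] = i Jz
print(np.allclose(Jx.imag, 0), np.allclose(Jy.real, 0))  # True True: Jx real, Jy purely imaginary
```

The same script with any other j value reproduces the general pattern: the angular momentum commutation relations are built into the square-root matrix elements.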
The matrix elements of Jy, by contrast, are purely imaginary, because of the i in the denominator; this is a result of the phase conventions of this development. So these are some examples of matrix elements of angular momentum operators, and the general structure is the same throughout: we've got the matrix elements of the three components Jx, Jy and Jz (J3 is the same as Jz), and also the matrix elements of J+, J- and J squared.

Now, there are other functions of J that are also interesting, namely the rotation operators U(n-hat, theta). The rotation operator is an exponential: e to the minus i theta times n-hat dot J (I'm still setting h-bar equal to 1). This is an interesting function of J, and the matrix elements of the rotation operators with respect to the standard angular momentum basis have a similar structure: they're diagonal in gamma and diagonal in j, and they do not depend on gamma; they depend only on j and on the two magnetic quantum numbers m and m'. So let me define the standard notation for these matrix elements in the standard angular momentum basis. You write it like this: <gamma, j, m| U(n-hat, theta) |gamma', j', m'> (for no good reason I've switched the primes and unprimes here). This is a matrix, and what we know is that it's diagonal in the gammas and in the j's. What's left over is the essence of the matrix, which is given a standard notation, namely D: it's written as D with a j superscript and m, m' subscripts, in the same order as they appear in the matrix element, and it's parameterized by the same axis and angle. The point of this is that the D matrix is a
specific matrix of numbers which represents the rotation operator in the standard angular momentum basis. When we were doing spin one-half systems I was being sloppy and confusing the operator with the matrix; the more careful notation draws this distinction. The D here comes from the German word Drehung, which means rotation, so the whole terminology goes way back. All right. So next time we'll look explicitly at what these D matrices are. For spin one-half we know what they are already, because we worked them out in the last lecture; next time we'll look at them for the other j values. That's all for today; there's some homework to hand out.
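As a preview of those D matrices, here is a small numpy sketch (my own, with hbar = 1) that computes the spin one-half D matrix by brute-force exponentiation of minus i theta n-hat dot J and checks it against the closed-form rotation matrix from the last lecture. The helper name `rotation_matrix` is my own.

```python
import numpy as np

# Pauli matrices; for spin one-half, J = sigma / 2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation_matrix(n, theta):
    """D(n, theta) = exp(-i theta n.J), via eigendecomposition of the Hermitian n.J."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    J = (n[0] * sx + n[1] * sy + n[2] * sz) / 2
    w, V = np.linalg.eigh(J)                       # n.J = V diag(w) V^dagger
    return (V * np.exp(-1j * theta * w)) @ V.conj().T

theta = 0.7
D = rotation_matrix([0, 1, 0], theta)              # rotation about the y axis
# closed form from last lecture: cos(theta/2) I - i sin(theta/2) sigma_y
closed = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
print(np.allclose(D, closed))                      # True
print(np.allclose(D.conj().T @ D, np.eye(2)))      # True: D is unitary
```

The same eigendecomposition trick gives the D matrix for any j once the (2j+1)-dimensional J matrices are in hand.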