I began today just by reviewing for you the content of the Wigner-Eckart theorem there; I see I have just one blackboard of material so far, so let me just go over what it means. The Wigner-Eckart theorem concerns the matrix elements of irreducible tensor operators, T^k_q, between two basis vectors of the standard angular momentum basis. It says that the dependence of the matrix element on the magnetic quantum numbers, of which there are three, m', q, and m, is captured by a Clebsch-Gordan coefficient that you see over here, with m', q, and m, the same quantum numbers appearing over here. When I say it's captured by the Clebsch-Gordan coefficient, I mean the original matrix element is proportional to the Clebsch-Gordan coefficient, and the proportionality factor is something that does not depend on those magnetic quantum numbers. In general, it depends on everything else, however: it depends on the gammas and the j's, on which tensor operator it is, and on the k value. It's conventional to denote this proportionality coefficient as I've indicated up here. It's called the reduced matrix element. It's got these double bars that indicate that it's not an ordinary matrix element, but basically it's just a way of listing everything else that the matrix element could depend on. And the definition of the reduced matrix element is that it's just the coefficient, the proportionality factor, that makes this equation work. Now, last time I explained the two main applications of the Wigner-Eckart theorem. One is for selection rules, and the other is to actually calculate these matrix elements as the magnetic quantum numbers over here are varied. I won't go through that again, but that's what it's used for. As far as selection rules are concerned, it's actually very easy to use the Wigner-Eckart theorem. All right. Now, I'm not going to prove the Wigner-Eckart theorem, mainly because it takes too long to write it all out. The proof is in the notes.
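In symbols, the statement on the board can be summarized as follows (the ordering of the angular momenta inside the Clebsch-Gordan coefficient is a matter of convention; this matches the j-then-k ordering the lecture uses):

```latex
\langle \gamma'\, j'\, m' \,|\, T^k_q \,|\, \gamma\, j\, m \rangle
  \;=\; \langle \gamma'\, j' \,\|\, T^k \,\|\, \gamma\, j \rangle\;
        \langle j'\, m' \,|\, j\, k\; m\, q \rangle
```

The reduced matrix element on the right carries all the dependence on the gammas, the j's, and k, while the Clebsch-Gordan coefficient carries all the dependence on m', q, and m.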
I'll try to give you at least a general idea of what's involved in the proof. To get the general idea, let's first suppress this primed bra which is over here, and just look at this: an irreducible tensor operator acting on a ket. So we've got T^k_q times the ket |gamma j m>. We can describe this product as being an operator that has certain angular momentum quantum numbers, multiplied times a ket that has certain angular momentum quantum numbers, namely k, q on this side and j, m on the other side. If we let the magnetic quantum numbers be variable, the q would range from minus k up to plus k, and the m would range from minus j up to plus j, and so you'd end up with a total of (2k+1)(2j+1) kets if you did that. Now this is actually very similar to what we do when we're studying Clebsch-Gordan coefficients, when we combine angular momenta, for example, orbital and spin. So imagine, just to be specific, suppose this j is like a spin and the k is like an orbital, or vice versa; it doesn't matter. Then in that case, you might consider a product that would look like this. You'd have a |k q> ket that would represent, let's say, an orbital, and a |j m> ket that would represent spin. This is now not a product of an operator times a ket, but a product of a ket times a ket, with the same quantum numbers. And again, if you vary the magnetic quantum numbers, you'd have the same total number of states. And those total states that you get this way constitute what is called the uncoupled basis in the process of coupling angular momenta. As we know, you can transform this into a coupled basis, which would be eigenstates of total J squared and J_z, and the expansion coefficients of the coupled basis in terms of the uncoupled basis are the Clebsch-Gordan coefficients. In fact, they're exactly the same as the Clebsch-Gordan coefficients in the theorem over here.
And so the idea is that, in a sense, this is a set of vectors here, if I vary the magnetic quantum numbers; it's a set of vectors in our Hilbert space, and one can think about expanding it as a linear combination of angular momentum eigenstates, a standard angular momentum basis. If you wanted to pick out the coefficient of one of those basis vectors, you'd take the scalar product with exactly this bra here, so that j' and m' would play the role of the total angular momentum. That's just the role the bra plays over here. And so the idea is that this set of states, the set of kets produced in this way when the magnetic quantum numbers run over their allowed values, has the same transformation properties under rotations and so on as the set of kets you get if I just multiply kets. Multiplying an operator times a ket is not so different from multiplying a ket times a ket. By the way, multiplying two operators, two irreducible tensor operators, is a very similar story. Say I have a T^{k1}_{q1} times a T^{k2}_{q2}. If you want an example of this, think of a product of position operators. Remember that the position operator is a k equals 1 irreducible tensor operator if you put it into the spherical basis; it's what we call R_q. Well, maybe I have an R_q times an R_q'. This is a tensor, like an x_i x_j, but in the spherical basis. Exactly this thing occurs in the quadrupole moment tensor, which I'll say something about later in the hour. Anyway, the point is that this product of operators, again, has the same angular momentum decomposition as a product of kets or a ket times an operator. The rules work in the same way as far as combining angular momenta is concerned. And that's the basic idea that lies behind the proof of the theorem: it's just to exploit these facts. All right.
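Since the proof leans entirely on Clebsch-Gordan technology, it may help to see the coefficients computed concretely. Here is a minimal sketch using sympy; the particular values j = 1 and k = 1 (a vector operator, like the position operator in the spherical basis) are just an illustration:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# sympy's CG(j1, m1, j2, m2, j3, m3) is the coefficient <j1 m1; j2 m2 | j3 m3>.
# Couple j = 1 with k = 1, as in <j m; k q | j' m'>:
nonzero = CG(1, 1, 1, 0, 2, 1).doit()   # m' = m + q and |j-k| <= j' <= j+k hold

# Selection rules: the coefficient vanishes if m' != m + q ...
zero_m = CG(1, 1, 1, 0, 2, 2).doit()
# ... or if j' lies outside the triangle rule range |j-k| .. j+k:
zero_j = CG(1, 1, 1, 0, 3, 1).doit()
```

This is exactly how the theorem delivers selection rules: the matrix element inherits every zero of the Clebsch-Gordan coefficient.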
There is one tricky thing, however, which is that looking at these kets here, you'd think that what you'd be coupling is k times j, in that order. But actually, if you look at the Clebsch-Gordan coefficient, it's written the other way around: it's j times k which is being coupled here. The fact that these are swapped is purely a matter of convention, and it has nothing to do with the essential ideas, but it just makes it slightly harder to remember to do this in the right order. It could have been defined the other way around, and the theorem would still be valid, but this is the conventional way of doing it. All right. So that's the review of the Wigner-Eckart theorem, and at least some of the ideas involved in the proof. Now, the next thing I'd like to turn to is the special case of scalar operators. If K is a scalar operator, we know that it commutes with all rotations, but it's also an irreducible tensor operator with the values k equals 0 and q equals 0; there's only one value of q for that k. And if we consider the matrix elements of the scalar operator in a standard angular momentum basis, with the primes on the left side, the scalar operator in the middle, and the unprimed basis vector on the right side, like this, the Wigner-Eckart theorem tells us that there's, first of all, a reduced matrix element, which is gamma prime j prime double bar K double bar gamma j, times a Clebsch-Gordan coefficient, which I'll fill in in a moment. Since k and q are equal to 0 here, it turns into the coefficient <j' m' | j 0; m 0>, like this. In other words, what it's doing here is coupling j with 0: j for the space the operator is acting on, and 0 for the operator K itself. Well, if you combine an angular momentum, some value of spin, for example, with another angular momentum which is 0, adding 0 to an angular momentum doesn't change it at all.
In other words, a coupling like this is trivial: the new j is the same as the old j, and there's only one value that comes out of it. And so this Clebsch-Gordan coefficient is actually delta_{j j'} delta_{m m'}, like this. That's what the Clebsch-Gordan coefficient is. So here is the content of the Wigner-Eckart theorem as applied to scalars. The thing I'm trying to say concerns the matrix elements of a scalar operator, and I'll remind you that the most important scalar operators are Hamiltonians for isolated systems, of which we'll see quite a few in the course. The matrix elements of a scalar operator between basis vectors of the standard angular momentum basis are, in the first place, diagonal in the j quantum number. Secondly, they're diagonal in the magnetic quantum number. And thirdly, the matrix elements that are left over, the ones on the diagonal, are actually independent of the magnetic quantum number. They do depend on j, as you see; j must equal j prime, or else this will be zero, but they depend on j. They depend on gamma and gamma prime. They're not diagonal in gamma, and there's no reason why they should be diagonal in gamma. Gamma is just an extra index that is introduced to resolve the degeneracies of J squared and J_z, and in general it has no relation to the operator K, for example, a Hamiltonian, so there's no reason why K should be diagonal in gamma. But it is diagonal in the j's and m's, and, as I say, it's independent of the magnetic quantum number m. If you like, you can view this as a matrix in the magnetic quantum numbers. Being proportional to a Kronecker delta means the matrix is diagonal. However, a diagonal matrix doesn't automatically have all its diagonal elements equal. But that, in fact, is what happens here: all the diagonal elements are equal.
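For a scalar operator K, then, the theorem collapses to the statement (here C is just my shorthand for the m-independent constant; it is proportional to the reduced matrix element, with a normalization factor that depends on convention):

```latex
\langle \gamma'\, j'\, m' \,|\, K \,|\, \gamma\, j\, m \rangle
  \;=\; \delta_{j'j}\,\delta_{m'm}\; C_{\gamma'\gamma}(j)
```

The crucial point is that C depends on the gammas and on j, but not on m.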
So, in fact, it's a multiple of the identity matrix, as far as the magnetic quantum numbers are concerned. Now, physically, for Hamiltonians, that means the energy doesn't depend on the orientation of the system. It's an easy fact to understand, but this is how it comes out. All right. Now, these results can be understood more simply from another argument, which I'd like to give; it doesn't go through the whole Wigner-Eckart theorem. Suppose we've got a scalar operator. We know that it commutes with J_z and also with J squared. So suppose we're given an operator K which has these commutation relations. Let's work on the J_z one first. Let's sandwich both sides of this J_z commutator between two basis vectors of the standard angular momentum basis. That gives us the primed bra, gamma prime j prime m prime, on one side, then the commutator written out, J_z K minus K J_z, and then the unprimed ket, gamma j m, on the other side; and this is equal to 0, because we've got 0 on the right-hand side. Now, here we can let one J_z act to the left and the other J_z act to the right, and the result is that we get (m' minus m) times the matrix element, with the primed bra on one side, K in the middle, and the unprimed ket on the other side, and this is equal to 0. So what you see, just by playing with the commutator, is that either m equals m prime or else the matrix element is equal to 0. That's equivalent to saying that the matrix element is proportional to a Kronecker delta. Likewise, if you play the similar game with J squared, you find the matrix element is diagonal in the j quantum number by the same argument. We'll see this argument again later on. However, these arguments by themselves don't tell you that the matrix element is independent of the magnetic quantum number. That takes an extra argument, and in fact it doesn't even follow from just these two commutation relations.
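Here is a small numerical illustration of the J_z argument, a sketch using a hypothetical scalar operator of my own choosing: K = S1 . S2 for two spin-1/2's, which commutes with every component of the total angular momentum:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Two spin-1/2's; K = S1 . S2 is a scalar operator
K = sum(np.kron(s, s) for s in (sx, sy, sz))
Jz = np.kron(sz, I2) + np.kron(I2, sz)

# [Jz, K] = 0, so (m' - m) <m'|K|m> = 0: K must be block-diagonal in m
assert np.allclose(Jz @ K - K @ Jz, 0)
m = np.diag(Jz).real  # the product basis states are Jz eigenstates
for a in range(4):
    for b in range(4):
        if not np.isclose(m[a], m[b]):
            assert np.isclose(K[a, b], 0)
```

Exactly as the commutator argument says, every matrix element of K between states of different m vanishes; the argument says nothing yet about the elements within a block.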
It really requires the full rotational invariance of K. That is, K must commute not only with these two operators, but also with J_x and J_y, which don't appear here; that means it commutes with all rotations. Well, in particular, if it commutes with J_x and J_y, it commutes with the raising and lowering operators. So let me show you the effect of that. Let's see here. Maybe what I'll do is this: let's leave this discussion of general scalar operators for now, and just say that the Wigner-Eckart theorem makes a stronger statement than these two commutator arguments, because it says that the matrix elements are independent of the magnetic quantum number. Let me make one further remark: if the K were a Hamiltonian and you wanted to find energy eigenvalues, then what you'd need is to diagonalize this matrix, and since it's not diagonal in the gammas, you'd need to diagonalize it in the gammas, with respect to some arbitrary basis. And if you do, you'll find the energy eigenvalues. But the result will be that the energy eigenvalues won't depend on the magnetic quantum numbers. Again, this is physically the statement that the energy is independent of the orientation. Now, we've seen a special case of this already in central force motion, where the energy eigenvalues have the form E_nl, like this, and we derived this by working through the radial Schrodinger equation. What I propose to show you now is that a very similar conclusion applies to any isolated system, not just the central force problem. You could make it much more complicated. You could add spin-orbit effects and other relativistic effects. You could include the magnetic interaction with the nucleus, which is the hyperfine effect. You could make it a multi-particle system, so you've got more than one electron.
You could include the dynamics of the nucleus, so it's the protons and neutrons that interact. You could make a very complicated problem. The conclusion still holds: you can write the energy levels in terms of an angular momentum quantum number and another one that labels the energies, but they don't depend on the magnetic quantum number. And that's a general conclusion. So now I'll show you how that general conclusion follows from rotational invariance. It works like this. I'll speak of a Hamiltonian, because that's the most important application. We will assume it's a scalar, so it commutes with all rotations; it's an isolated system, but possibly a very complicated one. That means in particular it commutes with all three components of J, and as a result it commutes also with J squared and J_z. And so we get a collection of three operators, on any isolated system of any complexity, which commute with each other. And one of the ways to find the eigenstates, at least in principle, is to start by diagonalizing J squared and J_z, with what we call the quantum numbers j and m; these are the total angular momentum of the system and the total J_z. Let's call the eigenspace of J squared and J_z with eigenvalues j and m; let's call it E_jm, like this. This is in general multi-dimensional; in fact, usually it's infinite-dimensional. This means the total Hilbert space is decomposed into orthogonal subspaces, each of which is characterized by the jm values. We write it this way as a sum over j and m, where it's really a direct sum, not an ordinary sum; the direct sum just means orthogonal subspaces. Then, to find the energy eigenvalues, what we do is just diagonalize the Hamiltonian in each of these subspaces. And if you do that, you're going to get some energy spectrum, a different energy spectrum in each subspace, in general. Now, for simplicity, let's suppose that spectrum is discrete. Moreover, let's assume that it's nondegenerate. That's usually true, but let's make it an assumption.
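As a numerical preview of where this argument is heading, here is a sketch with a hypothetical rotationally invariant Hamiltonian of my own choosing, H = L . S on l = 1 coupled to s = 1/2: the eigenvalues come out in (2j+1)-fold multiplets, independent of m:

```python
import numpy as np

# Spin-1 (l = 1) and spin-1/2 operators, hbar = 1
s2 = np.sqrt(2)
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / s2
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / s2
Lz = np.diag([1.0, 0.0, -1.0])
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Rotationally invariant "spin-orbit" Hamiltonian H = L . S on l=1 x s=1/2
H = sum(np.kron(L, s) for L, s in ((Lx, sx), (Ly, sy), (Lz, sz)))
E = np.round(np.linalg.eigvalsh(H), 10)

# j = 1/2 gives L.S = -1 (2-fold), j = 3/2 gives L.S = +1/2 (4-fold)
vals, counts = np.unique(E, return_counts=True)
```

The multiplicities 2 and 4 are exactly the 2j+1 degeneracies we are about to derive; nothing about this depends on the particular choice of H beyond its rotational invariance.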
So then we can label the energy eigenvalues in a jm subspace by another quantum number, call it n, which goes 1, 2, 3; it just counts them in ascending order. That's all it does. This means the energy eigenstates can be labeled by the three quantum numbers, so that H acting on the ket n j m brings out an energy, and the energy depends on n, j, and m, multiplying the ket n j m. The reason the energy is allowed to depend on n, j, and m is that we're diagonalizing the Hamiltonian in each of these subspaces separately, and n is just the sequence number in that subspace. So based on everything I've said so far, there's no reason why the energy shouldn't depend on m. However, I'm now going to show you that if you have full rotational invariance, it cannot depend on m. It works this way. Let's apply a lowering operator to this equation: apply a J_minus to H acting on n j m on the left. If H commutes with all three components of J, then I can commute the J_minus through and put the H on the left and the J_minus on the right. J_minus will act on the ket n j m, and so it will bring out a square root, the standard square root, which is the square root of (j+m)(j-m+1), and then we've got H acting on the lowered ket, like this. Excuse me, let me go back a little bit; we'll run through it again.
J commutes with H, so apply J_minus to the left-hand side of this equation. I'll commute the J_minus through the H and let it act on the ket n j m. That will lower the value of m, so I've got n j, m minus 1, and this is being acted on by H, so that brings out an energy E_{n j, m-1}. Here we're keeping an open mind and allowing for the energy to depend on the magnetic quantum number. So that's J_minus acting on the left-hand side. Now let J_minus act on the right-hand side. The energy is just a number, so the J_minus just goes onto the ket, and what we get is E_{n j m} times the same square root times the same ket, the one lowered by one in the magnetic quantum number. And so if you look at these two equations, the square roots cancel and the kets are the same, and the only things that are different are the energy values. So what you see is that E_{n j, m-1} is equal to E_{n j m}: the energies are the same for m and m minus 1. In fact, by induction, it shows that the energies don't depend on m at all. So go back to the original statement here: we can drop the m, and you see we have a generalization of the fact that we learned for central force problems. For an arbitrary isolated system, the energy levels are characterized by an angular momentum quantum number, and possibly, in general, other quantum numbers, but they don't depend on the magnetic quantum number. Now, what that means is that even if we have a very complicated system, the energy levels, and what we call the energy levels are actually the degenerate energy eigenspaces, are characterized by an angular momentum, and they have a dimension of 2j plus 1. In other words, there's a degeneracy, a (2j+1)-fold degeneracy, that arises on the basis of rotational invariance alone. All right, let me give you an example of this. Actually, you've already seen an example, the hydrogen atom and the other central force problems, but I'll give you a much more complicated example, just so you can see how the idea carries
through to more complicated systems. Let's talk about the iron-57 nucleus. For iron, Z is 26, and so that means the number of neutrons in iron-57 is equal to 31. So this is a 57-body problem in nuclear physics, with protons and neutrons, and people don't even understand the forces very well, so the Hamiltonian is complicated too. What can we say about it? Well, what we can say is that the energy levels are going to be characterized by an angular momentum quantum number, what we call the spin of the energy level, but it's really the sum of the orbital and spin angular momenta of all the constituent particles in the nucleus. For an energy level of spin I, where I is the usual notation in nuclear physics, the level will be (2I+1)-fold degenerate. And in particular, this has got to be true for the ground state of the nucleus. So here's the ground state of the iron-57 nucleus; we just call it iron-57, like this, and this actually has I equals 1/2. Iron-57 is used in the Mossbauer effect, and I'm not going to explain the Mossbauer effect in detail, that's not my purpose here, but let me give you at least some background on it. There are some other levels here. There's one level up here which is an excited state of the iron-57; let's call this iron-57 with a single star, like this. The energy difference between here and the ground state is about 14 kiloelectron volts. There's another excited state, let's call it iron-57 double star, at a still higher energy. It makes transitions both down to the ground state and also down to this intermediate state. Of these nuclei, only the ground state is stable. The way one normally produces the iron-57 double star is from a further decay up here, coming from cobalt-57. So if you have a sample of cobalt-57, this decay produces, in each period of time, some of these excited states, and as it rattles down to the ground state, it's this 14 keV transition that's the
most important one in the case of the Mossbauer effect. This is a doubly excited state, by the way. Let's see here, I'll fill this in: the singly excited state has I equals 3/2, and the doubly excited state has I equals 5/2. So these are the spins, as we would say, of these different states. Now, if you think of the iron-57 nucleus as a many-body problem, what's indicated here is just three bound states of the system, but there are other bound states up there, a huge, enormous number of bound states. The spectrum of nuclei, and I'm just talking about the bound states, can be extremely complicated for heavier nuclei. And above the bound states there's a continuum, in which the protons and neutrons are separated. So that's the Hilbert space for the iron-57 nucleus, and it's an enormous space; it has wave functions of many variables. Nevertheless, the ground state is a level that actually just consists of two independent states. There's the energy of the ground state, and the basis vectors that span the ground state eigenspace can be labeled by this, if it's understood that I'm only talking about the ground state and I don't need any other labels in it. If I want to have another label in it, I can put it in here; we'd name the ground state n equals 1 and call that one n equals 2, but if we're only talking about the ground state, we're going to suppress the n and just talk about the kets I, M_I. Now, when we talk about magnetic resonance, with spins in magnetic fields, we consider the spin Hilbert space, which is just the span of the basis vectors that look like this, I M_I, where M_I runs from minus I to plus I. I think we called this S instead of I before, but it's conventional in nuclear physics to use I for the nuclear spin; it just means the spin of the particle. And this is a finite-dimensional space, and on this space we wrote the magnetic moment as proportional to the spin operator, with some proportionality
factor, and then we took it from there: the interaction is minus mu dot B. Notice that what we're doing, as far as the iron nucleus is concerned, is really only considering a very small subspace of the entire Hilbert space. What allows us to neglect all the rest of it? The answer is that the energy differences are rather large. The mu dot B interaction of an iron-57 nucleus with any reasonable magnetic field is a very small energy compared to this 14 keV, so you might as well just ignore everything else; this is the only subspace of the nuclear Hilbert space that matters. That is, in effect, what we did. And this, by the way, is the explanation of why, at least for composite particles like nuclei, in dealing with magnetic resonance kinds of problems we can just assume the Hilbert space consists of a single irreducible subspace under rotations, characterized by a single angular momentum quantum number. It's because it's the ground state of the system, and because all the energy levels, including the ground state, are characterized by an angular momentum quantum number, and they've got a 2J plus 1, or 2I plus 1 in this notation, fold degeneracy. Okay, so those are some of the lessons that come out of this. All right. One more remark: this transition, say the one going between the 5/2 and the 3/2 levels, is an electromagnetic transition. It means the nucleus emits a photon, and the photon carries off, as you see, one unit of angular momentum. It's very similar to the dipole transitions in atomic physics. All right, so these are some of the lessons that come from the fact that the Hamiltonian is a scalar operator. Okay, now the next thing I'd like to discuss today is a somewhat different topic. This is going to come up in this week's homework, so I'm going to give you the preparation for it. I want to talk about quadrupole moment operators. So let me begin by reviewing some things from classical electromagnetism. Let's suppose we have a charge distribution which is
localized. Let me call it rho(r); it's a charge density. Let's put a coordinate system x, y, z here, somewhere in or near the charge distribution. Let's denote positions inside the charge distribution by a primed position vector, which I'll call r prime; so in fact I'll write the density as rho(r prime), like this. Similarly, out here we have a field point where we want to measure the potential; let's call it r. We want the potential of the charge distribution at the field point; let's call it capital Phi, it's a potential, Phi(r). Well, by electrostatics, this is the integral of the source distribution rho(r prime) times the Coulomb denominator, which is one over the distance between the source point and the field point, like this. And if you like SI units, you can put in the ugly 4 pi epsilon naughts, but I'll leave them out. Now, the way I've set this up, we imagine the charge distribution is localized and the field point is far away from it, so the magnitude of r is much greater than the magnitude of r prime. r prime is the variable of integration, but it's bounded by the dimensions of the charge distribution. So that means we can expand this Coulomb denominator in powers of r prime over r, and if you do this, here's the expansion you get. The first term is just 1 over r. The next term is this: it's the vector r over r cubed, dotted into the vector r prime. The next term has a factor of 1 over 2 r to the fifth, and it's a sum on i and j; writing the vector r as x1, x2, x3 and the vector r prime as the primed version of that, x1 prime, x2 prime, x3 prime, it's the sum on i, j of x_i x_j times, and this is the whole point, 3 x_i prime x_j prime minus delta_ij times r prime squared. Through second order, this is the expansion of the Coulomb denominator. You get it just by a straightforward Taylor series; I won't do the algebra. Now, this is under an integral, so we integrate over r prime. We need to
do the r prime integrals. Here are the integrals you get. For the first term, you just get the integral d cubed r prime of rho(r prime); that's the total charge, which I'll call lowercase q. Then you get d cubed r prime of rho(r prime) multiplied times this vector r prime which appears here; that's the charge-weighted position vector, which is also called the dipole moment vector, and I'll call it d. That's just a characteristic of the charge distribution. At second order, you need to do the primed integral over this quantity in parentheses: you get the integral d cubed r prime of rho(r prime) multiplied times 3 x_i prime x_j prime minus delta_ij times r prime squared, and this is conventionally called the quadrupole moment tensor; it's called Q_ij. Right? Now, having done the integrals, what we get is this: Phi(r), the potential at the field point, is equal to the total charge q divided by the radius r, and that's just the obvious Coulomb term; then we get the dipole moment dotted into the vector r over r cubed; then we get one half times 1 over r to the fifth times the sum on i and j of x_i x_j times the quadrupole moment tensor Q_ij. That's the second order; that's what it is. Now, about this tensor Q_ij: it's obviously a symmetric tensor, and it's also traceless. The tracelessness just comes from doing the Taylor series expansion of the Coulomb denominator; it's not an accident, but it just comes out if you do it. To say that it's traceless means that if I sum on i and j, I get 0. So if I sum on i and j: from this term here we get 3 times r prime squared, because you set i equal to j and sum; if you sum the second term on i and j, well, that means you've taken the trace of the identity matrix, which is 3, so it's minus 3 r prime squared, and the two add up to 0. So Q_ij is a symmetric and traceless tensor. We also explained earlier that it's actually a k equals 2 object; well, in classical E&M this is not an operator, so I can't really speak of an irreducible tensor operator, but it corresponds to k equals 2, that is, an angular momentum value of
2. There are 5 independent components in a symmetric and traceless tensor, and by taking linear combinations they can be put into the form of components with a q value running from minus 2 up to plus 2. What we have here is the Cartesian form of the tensor, rather than the spherical form. All right, anyway, this is the Q_ij. Now, there's an interesting thing you can do with this expression for the potential. When you write the quadrupole term, it's convenient to move a factor around: I put a 3 here, and this becomes a 6 down here. If I do that, the interesting thing is that you can take this 3 x_i x_j and subtract off its trace, which is delta_ij times r squared, right there in the term, and it doesn't change the answer. It doesn't change the answer because this delta_ij term, contracted against Q, gives the trace of Q, which is 0. So this term doesn't do anything to the answer; it doesn't change it. However, what it does do is show that the second-order contribution to the potential involves a complete contraction of one k equals 2 tensor with another k equals 2 tensor, both of the same form, both symmetric and traceless; one involves the source point, the other involves the field point. Anyway, this is the way that the quadrupole moment tensor arises in classical electrostatics. Now, here's where it gets applied, for example, in nuclear physics. Again, let's suppose we've got a localized charge distribution here; I'm really thinking about the nucleus. There's a coordinate system here, x, y, and z, and let's take a field point here, r, and a charge distribution, which I won't draw this time. Let's suppose we also have some potential coming from external charges somewhere else: a potential phi external. Now, on this blackboard I'm just going to call this phi(r), because I'll get tired of writing the "external," but this is not the same phi as on the blackboard above. On the blackboard above, phi is the potential produced by the
charge distribution itself; here it's an external potential which is interacting with the charge distribution. In nuclear physics, the idea is that phi external comes from the electrons, which are elsewhere, and the rho(r) is the nucleus. All right, so what we would like to find is the energy of interaction of the nucleus with this external electric field. To do this: the energy, in classical theory, of a charge distribution in an external potential, let's call it W, is the integral over space of the charge density rho(r) times the potential. That's all it is. So what we do, to put this in a more convenient form, is to expand the potential about the origin. If you think of the nucleus, the range of the variable r is actually very small, and so it's reasonable to expand the external potential, which is presumably slowly varying over the extent of the nucleus, about the center of the nucleus, some origin in the nucleus. And if we do this, just making a Taylor series, this becomes phi(0), plus the vector r dotted into the gradient of phi, plus one half of the sum over i and j of x_i x_j times the second derivative of the external potential with respect to x_i and x_j, evaluated at zero. And I'll stop there, because obviously you could keep on going. And if we do this, we see the integrals over the charge density can be done. First of all, we have just the total charge q itself, so W is equal to q times phi evaluated at zero; that's the first term. As far as the second term goes, we get the charge-weighted position vector, and that's the dipole moment. So it becomes plus the dipole moment d dotted into the gradient of phi evaluated at the origin. There's a quadrupole term we'll come to in a moment. Before I do that, allow me to observe that the gradient of phi is of course minus the electric field evaluated at the origin, and if I write this term in a slightly different form, it's minus d dotted into the electric field, the external electric field evaluated at the origin, and
Now you see the energy of a dipole in the external field: the electric dipole energy is minus d dotted into E. As far as the quadrupole term goes, let me take this expression, before I do the integrals, and massage it a little. Let's also use the fact that the charges are far away — the sources of the external potential are far from the nucleus. If that's the case, then the Laplacian of phi is zero at the nucleus, and that means the tensor of second derivatives is actually traceless, because the Laplacian is the trace of this tensor. So in this factor with the second derivatives I can subtract off the trace — a one-third factor of delta_ij times the Laplacian of phi — because that term is zero anyway, and the result is that I have a symmetric and traceless tensor. Having done that, I can use the same trick on the other factor: since the thing multiplying it is symmetric and traceless, I can subtract off the trace of x_i x_j as well. So, to make a long story short, this whole term can be written as one sixth times the sum over i and j of (3 x_i x_j minus delta_ij r squared), times the second derivatives of phi. The point of this is that when I now do the integral over the density, there appears the same quadrupole tensor that appeared in the other problem. So the net effect is that the energy of interaction of the nucleus with the external field is q times phi evaluated at the position of the nucleus, minus the dipole moment vector dotted into the electric field evaluated at the center of the nucleus, plus one sixth of the quadrupole moment tensor of the nucleus, Q_ij, multiplied by the second derivative of the potential with respect to x_i and x_j, evaluated at zero. This becomes the expression for the energy of the nucleus in the presence of the external field.
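As a sanity check on this final expression, here is a small numerical experiment. A few point charges near the origin stand in for the nuclear charge density, and the external potential comes from a distant point source (so its Laplacian vanishes near the origin, as assumed). All the numbers are made up for illustration.

```python
import numpy as np

# Made-up point charges near the origin stand in for the nuclear charge density.
charges = [(1.0, np.array([0.01, 0.00, 0.02])),
           (1.0, np.array([-0.02, 0.01, 0.00])),
           (1.0, np.array([0.00, -0.01, -0.015]))]

# External potential from a distant unit point charge at R0 (Gaussian units),
# so Laplacian(phi) = 0 near the origin, as assumed in the lecture.
R0 = np.array([10.0, 5.0, -7.0])
def phi(r):
    return 1.0 / np.linalg.norm(r - R0)

# Exact interaction energy: W = sum_a q_a * phi(r_a).
W_exact = sum(q * phi(r) for q, r in charges)

# Multipole moments of the distribution.
q_tot = sum(q for q, r in charges)
d = sum(q * r for q, r in charges)                                           # dipole
Q = sum(q * (3 * np.outer(r, r) - (r @ r) * np.eye(3)) for q, r in charges)  # quadrupole

# Derivatives of phi at the origin, computed analytically for the point source.
s = -R0                                   # this is r - R0 evaluated at r = 0
grad_phi0 = -s / np.linalg.norm(s)**3
hess_phi0 = (3 * np.outer(s, s) / np.linalg.norm(s)**5
             - np.eye(3) / np.linalg.norm(s)**3)

# W ~ q phi(0) + d . grad phi(0) + (1/6) sum_ij Q_ij d^2 phi/dx_i dx_j at 0
W_multi = (q_tot * phi(np.zeros(3)) + d @ grad_phi0
           + np.einsum('ij,ij', Q, hess_phi0) / 6.0)

print(W_exact, W_multi)  # agree far better than the monopole term alone would
```

The leftover error is the octupole and higher terms, which scale like (nuclear size / source distance) cubed relative to the monopole term.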
This is almost all I'm going to say about this for now; it's just a setup for a homework problem. But I might add one more thing, which is that it can be shown that the dipole moment of the nucleus vanishes. This comes from the conservation of parity and time reversal, which we'll talk about later on. As a result, that term disappears, and this is why the quadrupole moment term is so important: it's the first correction to the energy after the Coulomb interaction, which is the obvious part, and it gives rise to additional contributions to what's called the hyperfine structure of the atom. I won't say more about that now; maybe I'll write up some more in the notes. All right, so that's all I want to say about that. I'll turn to a new subject now, which is parity. This puts an end to our study of rotations. In all this work we've been doing on rotations, we've been talking exclusively about proper rotations. Proper rotations are rotations that belong to the group SO(3), and that just means that their determinant is plus 1. It means that, as operations on space, they preserve the handedness of the coordinate system; they don't flip a right-handed frame into a left-handed one. But I'm going to introduce now a certain improper operation. The geometrical setup I want to invoke is the same one that we used earlier in talking about classical rotations; that's to say, we'll posit some inertial coordinate system x, y, z, and we'll refer operations to that coordinate system. Before, when we used rotations, they were rotations about the origin of the coordinate system. Now what I'd like to do is introduce an operation I'll call P, which in words we'll call the spatial inversion operation. What it does is take a vector and map it into minus itself — spatial inversion because you start with the vector r and reflect it through the origin to the opposite side. Written as a matrix, P is the matrix with minus 1's on the diagonal and 0's off the diagonal; in
other words, it's just minus the identity. You can see that its determinant is equal to minus 1; it's also an orthogonal matrix, so this is an example of an improper rotation. So we're talking about improper rotations now. Previously, when we were talking about proper rotations and we switched to quantum mechanics, we were interested in the unitary operators on our quantum mechanical system which implement the classical rotations. We can write it this way: there's some mapping that takes a classical rotation R and converts it into a unitary operator U(R) acting on our quantum system, and what the operator is depends on the system. So let's assume we have some quantum mechanical system at hand. What I'd like to do now is something similar for the spatial inversion operation, which exists at the level of classical spatial geometry: convert it into an operator, which we'll call pi, that acts on our Hilbert space. The problem is to find the operator pi. This is a task similar to what we did earlier when we were looking for what the operators U(R) were, although it's a much simpler task, because there's only one P while there are lots of R's. What we did before to find the U(R) was to say that the U(R) should reproduce the multiplication law of the classical rotations, and we're going to use a similar criterion for finding the operator pi. So let's review the properties of the spatial inversion operator at the level of geometry in three-dimensional space. Here are the principal ones that we want. First of all, if you take the operator P and square it, you get the identity, as is obvious from reflecting through the origin twice. The next thing is that P commutes with any rotation — any proper rotation — and this is also obvious, because P is just minus the identity matrix, and the identity matrix commutes with everything; it actually commutes with all operations, not just rotations. These are the principal properties. Let's carry those over into quantum mechanics.
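The geometric properties just listed are easy to verify numerically; this sketch checks the determinant, orthogonality, P squared, and commutation with a sample proper rotation (the rotation angle is arbitrary).

```python
import numpy as np

# Spatial inversion P = -identity: an improper rotation.
P = -np.eye(3)

# Improper: det P = -1, yet P is orthogonal (P^T P = 1).
print(np.linalg.det(P))                 # -1
print(np.allclose(P.T @ P, np.eye(3)))  # True

# Reflecting through the origin twice does nothing: P^2 = 1.
print(np.allclose(P @ P, np.eye(3)))    # True

# P commutes with any proper rotation; here, rotation about z by an arbitrary angle.
a = 0.7
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
print(np.allclose(P @ R, R @ P))        # True: -1 commutes with everything
```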
We're going to load the operator pi up with requirements, or postulates. The first requirement is that it should be unitary; we make that requirement because we expect symmetry operations to preserve probabilities, and we did something similar with the rotation operators. Secondly, let's require that the square of pi be the identity, because that's what happens for the spatial inversion operation. And thirdly, let's require that pi commute with the rotation operators U(R), for all proper rotations R in SO(3), because P does so at the classical level. These are our three requirements on the parity operator pi. Now, these three requirements don't uniquely determine pi, but they narrow the possibilities down considerably, so that, depending on the system, it becomes possible to find an explicit formula for pi. Let's begin with the simplest case, which is a spinless particle in 3D, described by a wave function psi(r). We know that if we apply a rotation to this, it gets mapped into psi of R inverse times r. What do you suppose should be the definition of the action of parity on the wave function? Well, it's logical that we should use P inverse acting on the position vector; but P inverse is the same as P, which is minus the identity, so this becomes psi of minus r. That's a guess for the action of parity — just as it was a guess, as a matter of fact, for the rotations. There's another way of saying this: instead of looking at wave functions, let's look at kets. The rotation operator U(R) acting on a position eigenket just rotates the position of the ket, and so likewise it's a good guess that pi acting on a position eigenket should take it into the eigenket of the spatially inverted position. These two statements about the action on position eigenkets are completely equivalent to the two above the line, which refer to wave functions. So that's our guess for a definition of pi. Let's see if it
satisfies our requirements. First of all, is it unitary? The answer is yes. An operator is unitary if and only if it preserves all scalar products, so let's just look at the norm of a parity-inverted wave function psi(-r): it's the integral over all space of the absolute value of psi(-r) squared, and just by doing a change of variable, taking r to minus r, this is equal to the integral d^3r of the absolute value of psi(r) squared. So the norm of the wave function is preserved, and it is indeed a unitary operator. Secondly, is its square equal to 1? The answer is yes, obviously: if we did this operation twice, we would flip the sign of the argument twice. Thirdly, does it commute with all rotations? The answer is again yes. Let's just apply the operations in the two different orders. Take psi(r) as the initial wave function. Apply first a rotation operator U(R); this turns it into psi of R inverse r. Then apply parity; by our provisional definition, it just takes r and replaces it by minus r, so we get psi of minus R inverse r. If you do it in the reverse order, parity first, psi(r) goes into psi(-r), and if you apply the rotation next, this goes into psi of minus R inverse r. So the two orders agree, and we see that this parity operator commutes with all rotations. Of course, an operator that commutes with all rotations is, by our definition, a scalar operator, so we could say that one of the requirements, or postulates, for the parity operator is that it should be a scalar. In particular, since it commutes with all rotations, it commutes with infinitesimal rotations, and therefore it has to commute with the angular momentum of the system, J, whatever that is. That's just another way of writing property 3 — call it 3a if you want.
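The commutation check just performed can also be done numerically on a sample wave function. Everything here is made up for illustration: psi is an arbitrary smooth function, the rotation is about z by an arbitrary angle, and the two orders of applying parity and the rotation are compared at a sample point.

```python
import numpy as np

# Parity and rotations acting on wave functions:
#   (pi psi)(r) = psi(-r),   (U(R) psi)(r) = psi(R^{-1} r).
def psi(r):                        # arbitrary made-up test function
    x, y, z = r
    return np.exp(-(x**2 + 2*y**2 + 3*z**2)) * (1.0 + x*y - z)

def rotated(f, R):                 # U(R) acting on a wave function
    Rinv = np.linalg.inv(R)
    return lambda r: f(Rinv @ np.asarray(r))

def inverted(f):                   # pi acting on a wave function
    return lambda r: f(-np.asarray(r))

a = 0.9                            # arbitrary rotation angle about z
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])

r = np.array([0.3, -1.2, 0.5])     # arbitrary sample point

# pi U(R) and U(R) pi give the same wave function: psi(-R^{-1} r) either way.
print(inverted(rotated(psi, R))(r), rotated(inverted(psi), R)(r))

# pi squared is the identity: inverting twice returns the original function.
print(inverted(inverted(psi))(r), psi(r))
```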
In the particular case we're talking about, a spinless particle in 3D, the angular momentum is of course the orbital angular momentum, and so it follows, for this particular case, that pi commutes with the orbital angular momentum L. You can check this commutation relation directly from the definition of how pi acts on the wave function if you want to, but it's a necessary consequence of the commutativity with rotations. All right, so we'll take this as our definition of parity for spinless particles. The general strategy here is to study the properties of the parity operator, and then we'll turn to the question of what happens with Hamiltonians that commute with parity — when they commute and when they don't, and what happens when they do. Right now we'll just be working on the properties of parity. The next question is one about spin. Let's take it one step at a time and talk about a case in which it's only the spin of the particle we're interested in, and we ignore the spatial part. In this case our Hilbert space E is just the span of the spin basis kets |s m>. So the question is, what does parity do to one of these kets |s m>? Well, we use the fact that parity has to commute with the angular momentum of the system, which is S; that's a consequence of property 3a up here. So let's look first of all at the case of S_z. If we apply S_z to pi acting on |s m>, this is the same thing as pi times S_z acting on |s m>, which is m h-bar times pi acting on |s m>. So you see, pi acting on |s m> is actually an eigenstate of S_z with eigenvalue m h-bar. But there is only one state in this Hilbert space that has such an eigenvalue of S_z, so this must be proportional to |s m>: pi |s m> = C |s m>, where C is some proportionality factor. In other words, all of the basis vectors of the standard angular momentum basis in the spin space are eigenstates of parity, with some eigenvalue, which I'm calling C here. Well, the fact that the parity operator squared is equal to
1 means that the eigenvalue must be plus or minus 1, so this C value here becomes plus or minus 1. Well, I'm out of time, so I have to stop. Let me just say that it's too bad to stop right here, in the middle of this argument. The next step of the argument is to show that these eigenvalues of the parity operator for the basis states are actually independent of m, and thus there is a single eigenvalue, plus or minus 1, that applies to the entire spin space. That's the next conclusion. In fact, it's exactly the same argument that we made earlier when talking about Hamiltonians — that the energy eigenvalues are independent of the magnetic quantum number. Here we say pi is a scalar operator, so its eigenvalue can't depend on m either.
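The step the lecture stops at — that the parity eigenvalue is independent of m — can be illustrated with a small linear-algebra sketch in the spin-1 space. The candidate operators below are made up for illustration: an operator commuting with S_z alone could still have m-dependent eigenvalues, but requiring it to commute with all of S (here it suffices to check S_x as well) forces a single eigenvalue for the whole space.

```python
import numpy as np

# Spin-1 matrices (hbar = 1) in the basis |1,1>, |1,0>, |1,-1>.
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]]) / np.sqrt(2.0)

# Any candidate commuting with Sz must be diagonal in this basis, with
# entries +-1 since pi^2 = 1.  But the entries could still depend on m:
Pi_bad = np.diag([1.0, -1.0, 1.0])    # hypothetical m-dependent eigenvalues
Pi_good = -np.eye(3)                  # one eigenvalue for the whole space

print(np.allclose(Pi_bad @ Sz, Sz @ Pi_bad))      # True: both commute with Sz
print(np.allclose(Pi_bad @ Sx, Sx @ Pi_bad))      # False: m-dependence fails
print(np.allclose(Pi_good @ Sx, Sx @ Pi_good))    # True: scalar operator survives
```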