to where we were last time. We worked out a general formula for a rotation operator in axis-angle form acting on an arbitrary vector u. There are three terms, involving trigonometric functions, vector cross products, dot products, and so on. In the special case in which the angle is very small, this simplifies and becomes just u, which is the identity term, plus a small correction, which is the angle θ times the axis crossed into the vector u.

Now there's another approach to small-angle rotations, which starts directly from the rotation matrix. For a small-angle rotation, the matrix R has to be close to the identity, so we write R = 1 + εA, with the correction term ε times A. The ε here is just a reminder that this term is small; if we wanted to, we could absorb the ε into the definition of A and then just think of A as a small matrix. But I split off the ε really for psychological reasons, to remind you that that term is small. And at the end of the hour last time, I showed that if you impose the requirement that R be orthogonal, R transpose R equals the identity, it follows that this matrix A is antisymmetric. So the rule is that the small correction for a small-angle rotation is an antisymmetric matrix, superimposed on the identity, which of course is what holds at zero angle. Excuse me? Shouldn't there be a minus sign? Minus sign, excuse me, yes; as I wrote it, that would be symmetric. It should be A equals minus A transpose, indeed: the antisymmetric matrices.

Now let's talk about antisymmetric matrices. These are three-by-three matrices, so let's write them out. The matrix S has zeros on the diagonal, because it's antisymmetric. In the one-two position let's put a coefficient I'll call minus a3, and then on the opposite side we have to have plus a3. In the upper right let's put what we'll call a2, and then minus a2 opposite that. In the remaining slots, minus a1, and plus a1 on the other side. So this, as you can see, is an arbitrary antisymmetric matrix expressed in terms of three real parameters, a1, a2, and a3. Let's write it this way: a1 times the matrix with components (0, 0, 0; 0, 0, -1; 0, 1, 0), plus a2 times the matrix (0, 0, 1; 0, 0, 0; -1, 0, 0), plus a3 times the matrix (0, -1, 0; 1, 0, 0; 0, 0, 0). If I do this, then you see that these three matrices form a basis of three-by-three matrices that spans the space of all possible antisymmetric matrices, and that the lowercase a's are the expansion coefficients.

Now let's give these three matrices names. I'll call the first one J1, written with a script J, the second one J2, and the third one J3. When I typeset notes on a computer, I use a sans-serif font for these matrices, because that's just a standard font for matrices; on the blackboard I'll write them as script J's. But in any case, remember that these are matrices. Now, it's conventional to write this linear combination in a particular way: as the vector a dotted into the vector J. In other words, here a is a vector of numbers, three numbers, and J is, quote unquote, a vector of three-by-three matrices.
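Here is a minimal numerical sketch of this decomposition (my addition, not from the lecture; it assumes NumPy is available), building the three basis matrices and checking that a1 J1 + a2 J2 + a3 J3 reproduces the arbitrary antisymmetric matrix written on the board.

```python
import numpy as np

# The three basis matrices written on the board.
J1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
J2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
J3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def a_dot_J(a):
    """The antisymmetric matrix a . J built from a three-vector a."""
    return a[0] * J1 + a[1] * J2 + a[2] * J3

a = np.array([0.3, -1.2, 0.7])       # arbitrary expansion coefficients
S = a_dot_J(a)
assert np.allclose(S, -S.T)          # antisymmetric: zeros on the diagonal
print(S)
# [[ 0.  -0.7 -1.2]
#  [ 0.7  0.  -0.3]
#  [ 1.2  0.3  0. ]]
```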
This is similar to the notation a dot σ, which you're used to from quantum mechanics with the Pauli matrices. There, a is a vector of numbers and σ is a quote-unquote vector of two-by-two matrices, so it's the same type of notation. When you use it, you have to remember that the J is really a vector of matrices. Anyway, this is just notation. It shows that an arbitrary antisymmetric matrix can be associated with a three-vector a by, quote unquote, dotting it into this vector of basis matrices, and it's a nice notation for antisymmetric matrices. It also introduces these matrices script J, which I'll be telling you about. Actually, I intended to do this on another board, since now I'm going to be covering it up; if you can, hold your attention on it.

Before I cover it up, let me mention some facts about these matrices. You can see the J matrices are constant; they just have ones, zeros, and minus ones in them. Let me list the properties of the J matrices here. There are really two properties, and the first one comes in two versions. Property 1a is that if you look at the components of one of these J matrices, say J_i, and take its jk component, the answer is the Levi-Civita symbol ε_ijk with, as it turns out, a minus sign in front: (J_i)_jk = -ε_ijk. To check that, you just need to look at the ones and zeros; there are 27 numbers, and they turn out to be this. It's just another way of writing the epsilon symbol, actually.

There's a close variation on this property, which is the following; maybe I'll derive it before I write it down. The derivation is this. Take a dotted into J, which is of course an antisymmetric matrix, and multiply it into an arbitrary vector u, matrix multiplication times a vector. Take the i component of this and work it out. It's the ij component of the matrix a·J times u_j, where I'm using the summation convention. That's the same thing as a_k times the ij component of J_k times u_j, just by using what we mean by a·J. But the J's are related to the epsilon symbol as above, so this becomes minus a_k times ε_kij times u_j. Now let me swap the k and the i, two indices of the epsilon, which gives a change of sign, so this becomes plus ε_ikj times a_k times u_j. That in turn is the same thing as the i component of the cross product a × u. So the i component at the top is equal to the i component at the bottom, and the two vectors are equal. This gives me property 1b, which I'll write out: the antisymmetric matrix a·J, acting by matrix multiplication on a vector u, is the same thing as a × u. This is useful because it gives a matrix notation for the cross product, which turns out to be quite useful in applications. So those are properties 1a and 1b.

Now, there's a second set of properties, 2a and 2b; these are commutation relations. If we compute the commutator of two of these J matrices, J_i and J_j, which you do just by doing the matrix multiplication, you'll find it is ε_ijk times J_k. This, of course, reminds you of the commutation relations of the Pauli matrices.
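All three of these properties are easy to confirm numerically; here is a brute-force sketch (my addition, not from the lecture) that checks 1a, 1b, and 2a.

```python
import numpy as np

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

# Property 1a: (J_i)_jk = -epsilon_ijk, over all 27 components.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
assert np.allclose(J, -eps)

# Property 1b: (a . J) u = a x u for arbitrary a and u.
rng = np.random.default_rng(0)
a, u = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(np.einsum('i,ijk->jk', a, J) @ u, np.cross(a, u))

# Property 2a: [J_i, J_j] = epsilon_ijk J_k.
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        assert np.allclose(comm, np.einsum('k,kab->ab', eps[i, j], J))
```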
You see, we're getting similar commutation relations here and there, although these J matrices come out of classical rotations and by themselves don't have anything to do with quantum mechanics. In fact, they really only have to do with Euclidean geometry in three-dimensional space; in that sense, they don't have anything to do with classical mechanics either. It's just geometry. Anyway, those are the commutation relations, which I'll let you verify by multiplying the matrices.

Now, there's a variation on this, which I'll call 2b, and it goes like this. Take a·J, which is one particular antisymmetric matrix, parameterized by the three-vector a, and take its commutator with another antisymmetric matrix parameterized by a different three-vector b. Well, it's easy to show that the commutator of two antisymmetric matrices is antisymmetric, so the result has to be expressible as some three-vector dotted into the J matrices. And the question is, what is that three-vector? The answer is that it is the cross product of a with b: the commutator of a·J with b·J is (a × b)·J. So these are the main properties of the J matrices, for future use.

By the way, if we go back to this earlier expression, we have what I'll call an infinitesimal rotation. The angle is a small angle; that's what I mean by an infinitesimal rotation. The rotation itself is the identity plus a small correction. Let's take this, and now let's use property 1b here, which allows us to re-express the cross product in terms of these J matrices. It says that this boxed line is equal to, first of all, the u, which I'll copy, and then for the cross product we write it this way: θ times n̂ dotted into J, which is a matrix, multiplying u. This, of course, is the same thing as the identity plus θ times n̂·J, the whole thing multiplying u. And the result is that the near-identity rotation matrix can be written this way: R(n̂, θ) equals the identity plus θ times n̂ dotted into script J, plus higher-order terms. This is only the leading term in θ; it applies when θ is small, and if you neglect the higher-order terms, it's an approximation for small θ. So this is another way of writing a near-identity rotation matrix, now in terms of these three matrices.

All right. Next (and of course I'm going to cover this up) let me make some comments about the commutativity of rotation matrices. Let's say we've got two rotation matrices, R1 and R2, defined in terms of different axes and different angles: the first one has axis n̂1 and angle θ1, and the second has axis n̂2 and angle θ2. These are just two arbitrary rotations. I just want to make a very simple point here, which is that R1 R2 is not equal to R2 R1 in general. And to believe this, all you have to do is try an example. For example, multiply a rotation about the x-axis and one about the y-axis, by π/2 say, something simple, and then do it in the other order. I'll let you actually do this for yourselves, in your copious spare time. If you do, you'll see the answers are not the same.
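If you'd like the machine to do the spare-time exercise, here is a quick check (my addition) with the two π/2 rotations written out explicitly as matrices.

```python
import numpy as np

# Rotations by pi/2 about the x- and y-axes, written out explicitly.
Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], dtype=float)

print(Rx @ Ry)   # one order
print(Ry @ Rx)   # the other order: a different matrix
assert not np.allclose(Rx @ Ry, Ry @ Rx)
```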
So this is a way of saying that rotations do not commute in general. In the language of group theory, we say that SO(3) is not an abelian group, and this terminology is used pretty much universally these days, so I'll just tell you what it means: non-abelian just means non-commutative; that's all it means. So the statement is that the rotation group, which in this case is SO(3), the group of proper rotations, is a non-abelian group. It just means that general rotations don't commute.

Now, if I may go back to this expression down here at the bottom of the board: for a small-angle rotation, the rotation is expressed in terms of the axis of the rotation dotted into these J matrices, multiplied by the angle of the rotation. You can see this is the first term of a Taylor series expansion of the rotation matrix in powers of the angle, and there's a question about what the higher-order terms are. As it turns out, it's actually pretty easy to get the higher-order terms, and I'll now show you how we do this; this result for the first-order term turns out to be enough to get all the other terms without too much trouble. Here's how this works. We take a rotation in axis-angle form and we look for a differential equation in terms of the angle θ.

Actually, before I proceed, let me go back to the earlier remark about the commutativity of rotations, because there is an exception to this: rotations do commute when the axes are the same. This is geometrically clear. If I have an axis like this, and you do rotations about it by the right-hand rule, it's pretty clear that rotations about a fixed axis just add; the angles add. In other words, R(n̂, θ1) times R(n̂, θ2), where the point is that these are the same axis, is the same thing as R(n̂, θ1 + θ2). The angles just add, and this is the same thing if you do it in the other order. So rotation matrices do commute if the axes are the same; in general they don't commute, because the two axes in general are not the same.

Now, using that fact, I can work out a useful expression for the derivative of the rotation matrix in axis-angle form. It's actually pretty easy. Write the derivative using the definition: the limit as ε goes to zero of R(n̂, θ + ε) minus R(n̂, θ), divided by ε. The first term in the numerator is the same thing as R(n̂, ε) multiplied times R(n̂, θ), because that's the same axis; for rotations about the same axis the angles just add. And so you see the entire numerator has a factor of R(n̂, θ), which I can take out to the right. So this thing turns into the limit as ε goes to zero of R(n̂, ε) minus the identity, divided by ε, times R(n̂, θ). And now, in this limit, you see we have a small-angle rotation, and we know what that is; it's at the bottom of the board: the identity plus the angle times n̂·J. There I called the angle θ and here I'm calling it ε, but subtracting off the identity and dividing by ε, what's left over is just n̂·J.
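Here is a numerical version of this derivative argument (my addition; it assumes the three-term axis-angle formula from last time, which I've written in its standard Rodrigues form): a finite difference in the angle reproduces (n̂·J) R(n̂, θ).

```python
import numpy as np

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def n_dot_J(n):
    return np.einsum('i,ijk->jk', n, J)

def R(n, theta):
    # Last lecture's three-term axis-angle formula (Rodrigues form):
    # R u = cos(theta) u + sin(theta) n x u + (1 - cos(theta)) (n . u) n
    return (np.cos(theta) * np.eye(3) + np.sin(theta) * n_dot_J(n)
            + (1 - np.cos(theta)) * np.outer(n, n))

n = np.array([1.0, 2.0, 2.0]) / 3.0      # a unit vector
theta, eps = 0.9, 1e-6

# Finite difference in the angle matches (n . J) R(n, theta).
dR = (R(n, theta + eps) - R(n, theta)) / eps
assert np.allclose(dR, n_dot_J(n) @ R(n, theta), atol=1e-5)
```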
So this whole thing is n̂·J times R(n̂, θ). Now, n̂·J is independent of θ, so this is a differential equation which is easy to solve. There's an initial condition, by the way, which is that R(n̂, 0) is the identity: if the angle is zero, the rotation matrix is the identity. With this initial condition, the solution is the exponential: R(n̂, θ) = exp(θ n̂·J), the exponential of θ times n̂ dotted into script J. This is an important result; it's the exponential form of rotations (these are all proper rotations now) in axis-angle form. And if we expand this out in a Taylor series, we get the identity, plus θ times n̂·J, plus θ² over two factorial times (n̂·J)², plus dot dot dot. It's easy to write down the general term, and this confirms what I said a few minutes ago: the earlier result can be extended into a full Taylor series, and the general term is just that of the exponential series. So this is an important result: the exponential form for rotation matrices in axis-angle form.

Next, another result, and the story on this begins with the cross product. Let's take the cross product of two vectors, which I'll call a and u. There is a fact about proper rotations, which is that the cross product transforms as a vector. What that means in the present context is that if you take the cross product of these two vectors and then rotate the result by some rotation R (a proper rotation now; in fact, I'll be talking almost exclusively about proper rotations for quite a while, and we'll come back to improper ones when we want to discuss parity), the answer turns out to be the same as if you rotate the vectors first and then take the cross product second. So R applied to (a × u) is the same thing as the vector Ra crossed with the vector Ru. Now, I'm not going to prove this formula; rather, I'll leave it as an exercise for you. But when you do this exercise, what you'll find is that the answer is only true as written if the rotation R is proper; if the rotation R is improper, you get a minus sign on the other side. If you wanted to handle both cases, you would insert a sign here, the determinant of R, which is of course equal to plus or minus one, with the plus sign for the proper rotations and the minus sign for the improper ones. One way of saying this is that when you take the cross product of two vectors, what you get is a pseudovector. The difference between vectors and pseudovectors is how they transform under parity, which is an example of an improper rotation: pseudovectors pick up an extra sign under improper rotations, whereas under proper rotations they transform just like ordinary vectors. Now, I only want to talk about proper rotations, so I'll leave that sign out and we just have a plus sign on the right-hand side.
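Both results in this stretch can be spot-checked numerically. The sketch below (my addition, using SciPy's matrix exponential) confirms that exp(θ n̂·J) matches the three-term axis-angle formula from last time, and that a proper rotation commutes with the cross product.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def n_dot_J(n):
    return np.einsum('i,ijk->jk', n, J)

n = np.array([1.0, 2.0, 2.0]) / 3.0
theta = 0.9

# Exponential form of a proper rotation in axis-angle form.
Rm = expm(theta * n_dot_J(n))

# It matches the three-term (Rodrigues) formula from last lecture ...
R_three_term = (np.cos(theta) * np.eye(3) + np.sin(theta) * n_dot_J(n)
                + (1 - np.cos(theta)) * np.outer(n, n))
assert np.allclose(Rm, R_three_term)

# ... and the cross product transforms as a vector under it.
rng = np.random.default_rng(1)
a, u = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(Rm @ np.cross(a, u), np.cross(Rm @ a, Rm @ u))
```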
All right. Now I want to use this to derive a useful result, and I'm going to use the notation (which got covered up, unfortunately) for the cross product, which is property 1b over here: a × u equals (a·J) u, where a·J is a matrix and this is a matrix multiplication. So let's use that notation to rewrite this. On the left-hand side, what we've got is that we're rotating a × u, which is the same thing as a·J multiplied by u, where a·J is the matrix; so now you can see this is a product of two matrices multiplying u. And the right-hand side we can write as Ra, which is a vector, dotted into the vector of matrices J, multiplying Ru. So this is again a matrix multiplication. You can view this, in a sense, as a kind of commutation rule: if I've got R multiplying this antisymmetric matrix and I want to pull the R through it, what I have to do is replace the a there by Ra, and then the R appears on the other side. In fact, that's where this is going; it's a kind of commutation relation, or a conjugation relation, involving antisymmetric matrices.

To bring this out more clearly, let's write u equal to R inverse applied to another vector, call it v. If we do this, then this equation becomes R times (a·J) times R⁻¹, multiplying the vector v, which equals, on the right-hand side, (Ra)·J, which gives us a matrix, multiplying Ru; and Ru, with u = R⁻¹ v, is just v, so we get (Ra)·J multiplying v on the right-hand side. Now v is arbitrary, so we can strip it from both sides, and to summarize, we get an equation that says R times (a·J) times R⁻¹ is equal to (Ra)·J. We'll box that, because it's a nice formula. I'll even give it a name, the adjoint formula. That's not an official name, it's just my name; but this is in fact related to the adjoint representation of the group, which is why I call it that. I won't take you into that, but it is useful to have a name for this.

There are several ways of viewing this result. a·J is an arbitrary antisymmetric matrix, parameterized in terms of the three-vector a in the basis of matrices J. Now, it's easy to show that if you take an antisymmetric matrix and conjugate it by a rotation, you get another antisymmetric matrix, which therefore is also expressible as some three-vector dotted into J. The question is, what is the new three-vector? And the answer is: just the rotated version of the old three-vector. So conjugating antisymmetric matrices is equivalent to rotating the corresponding vector; that's the meaning of this equation.
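Here is a one-assertion numerical check of the adjoint formula (my addition; it builds a random proper rotation in exponential form and uses the fact that R⁻¹ = Rᵀ for a rotation).

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def a_dot_J(a):
    return np.einsum('i,ijk->jk', a, J)

rng = np.random.default_rng(2)
a = rng.standard_normal(3)           # parameterizes the matrix a . J
n = rng.standard_normal(3)
n /= np.linalg.norm(n)
Rm = expm(0.8 * a_dot_J(n))          # a random proper rotation

# Adjoint formula: R (a . J) R^{-1} = (R a) . J, with R^{-1} = R^T.
assert np.allclose(Rm @ a_dot_J(a) @ Rm.T, a_dot_J(Rm @ a))
```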
That leads to another useful result. Let a here be equal to θ times n̂, the angle times the axis. Writing this out, we just have R times (θ n̂·J) times R⁻¹, and this equals θ times (Rn̂)·J. Now the idea is to exponentiate both sides. On the left-hand side, we've got an expression of the form e to the (A B A⁻¹); you had something like this in a homework problem a couple of weeks ago. Exponentiating a conjugated matrix is the same as conjugating the exponential: in the power series, each term is just a power, and conjugating a power is like conjugating each factor. So this left-hand side is the same thing as R times e^(θ n̂·J) times R⁻¹. I think it will avoid some confusion here if, instead of writing R, I call this R0; let me put a zero subscript on the R that appears here, just to distinguish it from another rotation matrix which is coming up. R0 is a given rotation; let's just put a zero on it. So this all follows from the adjoint formula. What you can see here is that in the middle you've got a rotation matrix in axis-angle form that's being conjugated by a rotation matrix R0, and on the right-hand side is another rotation matrix whose axis is rotated. So, to summarize, the formula can be written this way: R0 times R(n̂, θ) times R0⁻¹ is equal to R(R0 n̂, θ).

Let me talk about that result, which is another important one. It's essentially an exponentiated version of the adjoint formula, and since it's related in such a simple way to the adjoint formula, let's call it the exponentiated version of it. But let's think about what it means. It says that if you take a rotation in axis-angle form and you conjugate it by a second, fixed rotation R0, you get a new rotation. The question is what happens to the axis and the angle, and the first thing it says is that the angle doesn't change; it's a new rotation by the same angle. But the axis has been rotated, by the rotation R0 used in the conjugation. One way of saying this in words, which helps you remember it, is to say that the axis of a rotation transforms as a vector under rotations, and the angle of a rotation transforms as a scalar: it doesn't change. This is all logical; it has a simple meaning, but this is one way of looking at it. All right, so that's the exponentiated version of the adjoint formula. I'm going to apply it in just a moment; in fact, I'm going to apply it next.

So far we've been parameterizing rotations in axis-angle form. You can see there are really three parameters here. There's certainly one parameter, the angle, in axis-angle form; but a unit vector is equivalent to a point on a unit sphere, and that requires two angles. So if you count in terms of angles, you see there are three angles in this parameterization. The basic rule is that parameterizing rotations requires three parameters: the set of rotations is a three-dimensional space. This is one of the common parameterizations of rotations. There's another common parameterization, Euler angles, and I want to tell you about that now. The basic idea of Euler angles works like this. Let's start with our coordinate system: x, y, z, a fixed frame, with unit vectors ê_i, where i equals one, two, three means x, y, and z. Let's suppose we've got some fixed rotation R; the point here is to parameterize it. We let R act on these old basis vectors and give us new ones, which we call the primed vectors. So there's a new set of axes, which I'm trying to sketch as best I can: let's say this is x′, this is y′, this is z′. This gives us two frames, an old frame and a rotated frame. Now, I'm going to take it as geometrically obvious that specifying the orientations of all three axes of the rotated frame is equivalent to specifying the rotation R. So to parameterize the rotation R, we need to parameterize the orientations of the three primed axes. Well, there are three of them; let's simplify by just starting with the z axis.
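Before going on, here is a quick numerical check of the exponentiated adjoint formula (my addition): conjugation rotates the axis and leaves the angle alone.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def R(n, theta):
    return expm(theta * np.einsum('i,ijk->jk', n, J))

rng = np.random.default_rng(3)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)
n0 = rng.standard_normal(3)
n0 /= np.linalg.norm(n0)
R0 = R(n0, 1.3)
theta = 0.7

# R0 R(n, theta) R0^{-1} = R(R0 n, theta): same angle, rotated axis.
assert np.allclose(R0 @ R(n, theta) @ R0.T, R(R0 @ n, theta))
```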
So let me draw this picture again, except I'll omit the x′ and y′ axes. Here are the old axes x, y, and z, and here's the z′ axis, like this. Now, our rotation matrix R maps all the old axes into the new ones, so in particular it maps the z axis into the z′ axis. We want the orientation of the z′ axis, so let's parameterize it, as usual, by spherical angles: the polar angle β and the azimuthal angle α, like this. So those are two angles giving us the orientation of the z′ axis.

Now allow me to define another rotation, which is not the same as the fixed one we started with. I'll call it R1, with a one subscript just to distinguish it from the old one. R1 is defined this way: it's equal to R(ẑ, α) times R(ŷ, β). The reason this is interesting is that it turns out R1 also maps the old z axis into the new z′ axis, and to see that, all we have to do is draw pictures. So take the old axes x, y, and z, like this, with the original z unit vector sticking up. Let's begin with the rotation about the y axis by the angle β; this is the right-hand rule about the y axis, and remember that you read the operators from right to left in the order you apply them, so this factor acts first. This is going to take the z vector and swing it away from the z axis, within the xz plane, by an angle β, like this. Then we follow that by the rotation about the z axis by an angle α, and that's going to take this vector around the cone, as you can see, and bring it over to a new orientation, in which the angle α is the usual azimuthal angle of the new vector and the angle β is the polar angle. So that coincides with the usual spherical angles of the new z′ axis. By drawing a picture like this, what we see is that the rotation R1 also maps the z axis into the z′ axis: it does the same thing to the z axis as our original, given rotation R.

Now, does this imply that these two rotations are the same? No, because although both of them have the same effect on the z axis (they put the z′ axis in the right orientation), nothing says that they're going to get the x′ and y′ axes right, and generally they don't. So R1 and R differ from one another, because they don't do the same thing to the x′ and y′ axes. However, if the z′ axis is right, then the x′ and y′ axes can only be wrong by some rotation about the z′ direction. So if we take this R1 and follow it by a rotation about the z′ direction by some angle, call it γ, then if you choose γ right, the x′ and y′ axes come out right too. So there must be some angle γ such that R is equal to R(ẑ′, γ) times R1. If you write out R1, this is the same thing as R(ẑ′, γ) times R(ẑ, α) times R(ŷ, β), and this gives us the Euler angle parameterization: R, which we'll now write as R(α, β, γ), is equal to this. And you can even see what the ranges on these three angles have to be.
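A short sketch (my addition) confirms the picture: R1 = R(ẑ, α) R(ŷ, β) carries ẑ to the direction with polar angle β and azimuthal angle α.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def R(n, theta):
    return expm(theta * np.einsum('i,ijk->jk', n, J))

alpha, beta = 1.1, 0.6
zhat = np.array([0.0, 0.0, 1.0])
yhat = np.array([0.0, 1.0, 0.0])

R1 = R(zhat, alpha) @ R(yhat, beta)   # rightmost factor acts first

# R1 maps zhat to the direction with polar angle beta, azimuth alpha.
z_prime = np.array([np.sin(beta) * np.cos(alpha),
                    np.sin(beta) * np.sin(alpha),
                    np.cos(beta)])
assert np.allclose(R1 @ zhat, z_prime)
```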
First of all, α and β are easy to interpret, because they're the spherical angles of the z′ axis. α is the azimuthal angle, so it has to be in the range 0 to 2π, and β is the polar angle, so it has to be in the range 0 to π, with π at the south pole. And γ, the rotation that gets the x′ and y′ axes right inside their x′y′ plane, in general requires an angle in the range 0 to 2π. So those are the ranges on the Euler angles, such that if α, β, and γ are in those ranges, you cover all possible rotations.

Now, the way this is written is all right, but it's not the most convenient form for the Euler angles, because the given rotation is being expressed as a product of three rotations about a mixture of old and new axes: there are two old axes, z and y, and there's the new, primed axis z′. It's more convenient to write it purely in terms of the old axes. Yes, a question: how do you know the angle γ exists? Well, you don't yet; but the point is that we want to be able to parameterize rotations in terms of three angles, so if you gave me an arbitrary rotation, I could actually work this out and find what the angles have to be. I think what you're driving at is that the geometrical interpretation of γ is not as clear as it is for α and β, and that's certainly true; it's related to the line of nodes between the two planes, think about that. But right now let's just say there exists an angle γ.

Now we'll fix this up in the following way. Let's take this first expression here; here's our R(α, β, γ), and it's equal to R(ẑ′, γ) times R1. I'll write it this way: R1 times R1⁻¹ times R(ẑ′, γ) times R1. The reason I do that is that now the last three factors are a conjugation of a rotation by a second rotation, R1. Well, here's our nice formula from above, which we just worked out; I called the conjugating rotation R0 there, and the R0 there corresponds to R1⁻¹ here, as you can see, since there's an inverse on the left-hand factor. The result is a rotation by the same angle γ about the rotated axis, R1⁻¹ applied to ẑ′. Well, here's what R1 did to z: R1⁻¹ applied to ẑ′ takes us back to the original, unrotated axis z. So this conjugation is the same thing as R(ẑ, γ); you see, we've gone from the primed axis to the unprimed axis by this conjugation. In effect, what I've done is take this rotation and move it over to the other side, dropping the prime. So the final result is this: R(α, β, γ) is equal to R(ẑ, α) times R(ŷ, β) times R(ẑ, γ), a product of three rotations that look like this, and that's the Euler angle parameterization. This is the most convenient form of the Euler angle parameterization of rotations. Now, this is sometimes called the ZYZ convention for Euler angles, and in addition I'm using the active point of view on rotations, which, as I mentioned last time, we're following throughout this whole subject. You've probably seen Euler angles in your course in classical mechanics, where they almost always use a ZXZ convention, and they also use the passive point of view; so the details are different, but the basic idea is the same.
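The conjugation step is easy to verify numerically; this sketch (my addition) checks that the mixed-axis form R(ẑ′, γ) R1 equals the pure old-axis ZYZ product.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def R(n, theta):
    return expm(theta * np.einsum('i,ijk->jk', n, J))

alpha, beta, gamma = 1.1, 0.6, 2.3
zhat = np.array([0.0, 0.0, 1.0])
yhat = np.array([0.0, 1.0, 0.0])

R1 = R(zhat, alpha) @ R(yhat, beta)
z_prime = R1 @ zhat                    # the rotated z axis

# Mixed-axis form R(z', gamma) R1 equals the old-axis ZYZ product.
lhs = R(z_prime, gamma) @ R1
rhs = R(zhat, alpha) @ R(yhat, beta) @ R(zhat, gamma)
assert np.allclose(lhs, rhs)
```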
It's convenient to use the ZYZ convention in quantum mechanics; I actually think we started doing this because of the conventions we have for which of the d-matrices are real and which ones are imaginary. There are reasons for that. All right, so that's the story of Euler angles.

Now, one last topic in classical rotations before we move on to rotations in quantum mechanics, and this is to examine the question of the commutativity of rotations in more detail. Let's write down two rotations, say R1 and R2, which have different axes and angles, n̂1, θ1 and n̂2, θ2. If we write these in exponential form, we can write the first one as e to the A1 and the second one as e to the A2, where A1 and A2 are antisymmetric matrices: A1 is equal to θ1 times n̂1 dotted into script J, and A2 is θ2 times n̂2 dotted into script J. Now, I mentioned earlier that R1 times R2 is not in general the same thing as R2 times R1. Let me introduce a matrix which is R1 times R2 times R1⁻¹ times R2⁻¹, and call this matrix C, just to give it a name. The first thing to notice is that if R1 and R2 do commute, then I can bring the R1⁻¹ past the R2 here and it cancels with R1; likewise, what's left over is R2 times R2⁻¹, and that cancels. The result is that C is equal to the identity if R1 R2 is equal to R2 R1, if they happen to commute. And if they don't commute, then C is, in some sense, a measure of the amount by which they fail to commute. Now, I hesitate to tell you this, but in the general literature this C is called a commutator. The reason I hesitate is that it's not the commutator you're used to in quantum mechanics; it is, however, related to the commutator you're used to, and I'll show you now how that works.

It's particularly interesting to look at this matrix C in the special case in which the angles θ1 and θ2 are small, so let's do a Taylor series expansion of these four factors in powers of the angles and see what happens to C when the angles are small. Maybe before I do that, I should mention that in Sakurai's book he does an example of this: he considers a first rotation about the x-axis by some angle, and then about the y-axis by a different angle, and then reverses the order, going back in x and back in y, and what he shows is that the final result is a rotation about the z-axis. Well, that's just what this is doing, in one notation or another; this is a slight generalization of what he does in the book.

All right. Anyway, if we expand these out in series, then for R1 we've got the identity plus A1 plus one-half A1² plus dot dot dot; that's the exponential series. For R2 it's the identity plus A2 plus one-half A2² plus dot dot dot. For R1⁻¹ it's the identity minus A1 plus one-half A1² plus dot dot dot, and for R2⁻¹ it's the identity minus A2 plus one-half A2² plus dot dot dot, writing out the exponential series, which I carry out to second order. Now you multiply these four things together. Obviously, at zeroth order what you get is the identity. Then at first order, what you get is just the sum of the first-order terms, which is A1 plus A2 minus A1 minus A2, which is zero; so we just write this as the identity plus zero. This C thing vanishes at first order when the angles are small.
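You can watch the first-order cancellation happen numerically; in the sketch below (my addition), the deviation of C from the identity shrinks like θ squared, not like θ.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def R(n, theta):
    return expm(theta * np.einsum('i,ijk->jk', n, J))

xhat = np.array([1.0, 0.0, 0.0])
yhat = np.array([0.0, 1.0, 0.0])

for theta in [1e-1, 1e-2, 1e-3]:
    C = (R(xhat, theta) @ R(yhat, theta)
         @ R(xhat, theta).T @ R(yhat, theta).T)
    # Each factor-of-10 drop in theta drops this norm by about 100.
    print(theta, np.linalg.norm(C - np.eye(3)))
```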
If you want to get the next correction, in other words, if you're interested in how these two matrices fail to commute with each other, you've got to see how C differs from the identity, and if you want to do that, you have to go to second order; that's why I expanded these series to second order. So now we go through and collect all the second-order terms. There's some algebra in that; there are six second-order terms, and I'll let you do it. What you find is that the second-order term, when the smoke clears, is exactly the commutator of the matrices A1 and A2, in the usual sense of quantum mechanics, and then there are higher-order terms: C is the identity, plus the commutator of A1 with A2, plus higher-order terms. In fact, plugging in these specific values of A1 and A2 in terms of the axes and angles, this becomes the identity, plus θ1 times θ2 times the commutator of n̂1·J with n̂2·J, plus higher-order terms. Now go back to the properties of the J matrices, which are summarized over here; property 2b says the commutator of two of these antisymmetric matrices is expressed in terms of the cross product of the corresponding vectors. So the result is that, in a power series, this matrix C is the identity, plus θ1 times θ2 times (n̂1 × n̂2) dotted into J, plus higher-order terms. I'll just post that result, because we'll need it in a little while.

Okay, so that's all I want to say about classical rotations. Really, this isn't even classical mechanics; it's just the geometry of three-dimensional space, three-dimensional Euclidean geometry. Now, however, I want to turn to the question of rotation operators in quantum mechanics. Let's begin by supposing we have a quantum system, so there's some Hilbert space for the system. At the beginning, we're not going to be very specific about what the system is, because there are a lot of conclusions that can be drawn without having to do that. We won't say whether it's a spin system, or whether there are orbital degrees of freedom, or multiple particles, or any of those things; it actually won't matter. It's some quantum system, and what we'd like to do is make a reasonable definition of what we'll call rotation operators, which we'll think of as being parameterized by the classical rotations R. So U is an operator that acts on this Hilbert space, and the fact that it's parameterized by the classical rotation R looks like this: given a classical rotation, we associate with it a corresponding operator U(R), and in some sense what U(R) will do is rotate our quantum system.

Before I go on, we need to think a little bit, physically, about what it means to rotate a quantum system. If you have, say, the 2p state of a hydrogen atom, that's a certain wave function, and you want to ask what it means to rotate it. Well, you can't go in with a wrench and turn an electron in a hydrogen atom, so in that sense, what does it mean to rotate it? I need to point out, or remind you, that in spite of the language we use all the time (we say "the wave function of the electron"), the truth is that what the wave function represents is the statistical results of measurements on an ensemble of identically prepared systems. It describes the properties of an ensemble, and not so much of a single system. The ensemble itself is prepared by some preparation apparatus, which for simplicity we may assume prepares a pure state; more generally, ensembles can also be described by density operators.
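Here is a numerical check of that posted result (my addition): for small angles, C agrees with the identity plus θ1 θ2 (n̂1 × n̂2)·J to the stated order.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def n_dot_J(n):
    return np.einsum('i,ijk->jk', n, J)

def R(n, theta):
    return expm(theta * n_dot_J(n))

rng = np.random.default_rng(4)
n1 = rng.standard_normal(3)
n1 /= np.linalg.norm(n1)
n2 = rng.standard_normal(3)
n2 /= np.linalg.norm(n2)
t1, t2 = 1e-3, 2e-3

C = R(n1, t1) @ R(n2, t2) @ R(n1, t1).T @ R(n2, t2).T
C_second_order = np.eye(3) + t1 * t2 * n_dot_J(np.cross(n1, n2))
assert np.allclose(C, C_second_order, atol=1e-7)   # agrees to O(theta^3)
```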
So let's say you have an apparatus that prepares your state. Then one way to define the rotated state is to say that it's the state produced by the rotated preparation apparatus. And that's certainly meaningful, because you can always rotate your Stern-Gerlach apparatus, for example. All right, so that's one point of view on rotation operators in quantum mechanics. Another point of view is that rotation operators in quantum mechanics oftentimes arise as a result of interactions, or as the time evolution of specific systems. This is most notably true in the case of spin systems in magnetic fields, which give rise to rotations of the spins; we'll look at that in detail after a while. But in any case, this gives you at least some idea of what rotation operators in quantum mechanics mean from a physical standpoint.

Right now we want to address the general question of how we can take a classical rotation and produce an operator that represents that rotation. To do this, we're going to make some postulates, or demands, on what these operators U(R) should satisfy. These are reasonable demands. The first one is that the operator U(R) is unitary. This follows from the requirement that a symmetry operation should preserve probabilities; if you rotate a system, you don't expect particles to disappear, so it should be unitary. The second, simple requirement is that the unitary operator corresponding to the identity rotation should be the identity operator in quantum mechanics. And the third requirement, which is less trivial, is that the unitary operator corresponding to a product of rotations should be the product of the unitary operators, like this: U(R1 R2) = U(R1) U(R2). These are the requirements we'll impose.

Now, if we find a set of unitary operators that satisfies these three postulates, then what we say is that we have a representation; that's the magic word here, one that's used frequently in quantum mechanics. More exactly, this mapping that takes you from the classical rotations to the quantum rotations is a representation of the classical rotations by means of unitary operators, which act on the Hilbert space. It means that these unitary operators reproduce the multiplication law of the rotations. That's the idea. As it turns out, these three postulates I've written down are in general too strong, and we can't actually meet them. We'll see how this comes up; it has to do with spin one-half systems. But you can almost meet these requirements, and when you're done, you'll learn some extra things about spin one-half systems: there's an extra minus-one phase that comes in when you rotate an electron by 360 degrees. We'll come back to that in just a little while. Anyway, these are the postulates, and now the problem is to work out the consequences. We'll stop here.
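As a foreshadowing aside (my addition; it uses the standard spin one-half rotation operator U = exp(-iθ n̂·σ/2), which the lecture has not yet derived): these U's are unitary, but a 360-degree rotation gives minus the identity, which is the extra phase just mentioned.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices, as a "vector" of 2x2 matrices.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def U(n, theta):
    """Standard spin-1/2 rotation operator exp(-i theta n.sigma / 2)."""
    return expm(-0.5j * theta * np.einsum('i,ijk->jk', n, sigma))

zhat = np.array([0.0, 0.0, 1.0])
U_2pi = U(zhat, 2 * np.pi)

assert np.allclose(U_2pi.conj().T @ U_2pi, np.eye(2))  # unitary
assert np.allclose(U_2pi, -np.eye(2))                  # the famous -1 phase
```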