 which is the way I should have done it, and put it on the web. It isn't very big pages, like five pages of pages. Today, what I'd like to do is to give you another problem involving magnetic fields. So, pardon the pluses of magnetic field. This is the case of the magnetic monocle, which was a case studied by Dr. Mack in 1930s and led to surprising results. Magnetic monocle is a hypothetical particle one's never been observed, but if it didn't exist, it was a particle that would have a magnetic field. It was just like the electric field of an elementary charge particle, like Coulomb's law. So, if I call mu the strength of the magnetic particle, it can have a magnetic field that would look like this. I'll never ask where it comes from. This is placing the monocle, the magnetic monocle, the origin of the system of convergence. So, the magnetic field lines, the vectors, excuse me, are coming out of the origin like this, if it were there, the charge particle. If you take the divergence of this magnetic field, what you get is a John Zero result, 4, 5, 8 times the direct delta function of the origin. Zero everywhere except at the location of the monocle. But what this means is that the usual natural equation of L dot B and the zero is not satisfied anymore. It's satisfied everywhere except, effectively, an infinitesimal region around the particle, e.g. there it's not. Anyway, that's the basics of the magnetic monocle. Now, magnetic monocles are never going to be observed, but they're dearly loved by theorists. And there's certainly objects that theorists take seriously, and who knows what they're going to observe someday. In the meantime, there's been active searches, experimental searches, to look for magnetic monocles. And in order to do that, you need to know, where do you most likely define them. So, studying monocles is actually quite relevant to physics. In any case, in any case, if a monocle existed, this would be the magnetic field. 
Now, I'd like to study that they are taking a look at anyway, the quantum mechanics of, let's say an electron moving in the field of the magnetic monocle. A little sort of monocle is infinitely massive, so that it's like a magnetic version of the central force problem you see. Now, of course, in order to incorporate the magnetic field into the Schrodinger equation, we need a vector potential. So a vector potential is going to be a vector potential, but a vector potential a, such that e is equal to del cross a. The usual parents and calculus tell you that if you have a field whose divergence is equal to zero, then it can be written as the curl of a, and the potential, a vector potential in this case. That means we have to exclude the origin there, because that's where we don't have to ignore it from zero. And so a can only be defined in the original way for the origin, and kind of cut that point out. This was not surprising to expect the potential of other infinity in the origin anyway to be similar to that of hardware. All right, we'll do this in the spherical coordinates, R, theta, and phi, and I'll just summarize the results. If you write out the curl in the spherical coordinates, and you put in a given form for the e to the solution of its radial component, it's quite straightforward to solve it for a vector potential that satisfies this equation. The answer is not unique, because the answer for a vector potential is never unique, and always subject it to a gauge conservation. But there's one answer that comes out, it's not very hard to do from looking at the formula for a curl in the spherical coordinates. And it's this. And that many monocles can be known times one minus cosine theta divided by r sine theta times phi hat. This is a vector potential which is purely in phi direction. And you can check for yourself. Not only that it's easy to derive this, but also to make a curl of it, but also to measure the desired magnetic field. 
Notice by the way the magnetic vector potential does not have the same symmetry as the magnetic field. The magnetic field is a symmetric under all rotations, because it's obviously a rotational invariant, whereas the vector potential is only symmetric under rotations by the z axis. That's why it's in the phi direction. But anyway, it works. All right. Now, there is a problem, however, with the vector potential, which is the sine theta in the denominator, because that goes to zero when theta is equal to zero or pi. So if theta equals zero or pi, our places where we need to look at whether the vector potential is positive or singular, zero or pi, of course, is zero if theta equals zero, is a positive z axis. And if theta equals pi, it's not a line of negative z axis. So we're talking about singularities on the z axis. Well, if theta is close to zero, the denominator is also close to zero. But the numerator is also close to zero, because one minus cosine theta goes to zero in the numerator. In fact, goes to zero like theta squared where the denominator only goes to zero like theta. The result of this is that a actually has a well-defined limit on the positive z axis, and it's actually smooth there, despite appearances. However, on the negative z axis, where cosine theta is minus one, then the numerator becomes plus two, divided by sine theta, which is small, goes to zero. So this one diverges on the negative z axis. And so we'll indicate that by putting out a weekly line and going down the negative z axis. This is the line from which the vector potential is singular. So it's very well named half line. And this line is called the strain of the monopole. It really shouldn't be called the strain of the monopole. It's really the strain of the vector potential. The monopole field is perfectly well behaved with the negative z axis. It's because of the limitations in symmetry. But with the negative potential, you have this singularity in the negative z axis. 
Now, by doing a gauge transformation, it turns out we can move the strain around and put it in different directions. Just to give you an example, let's notice that if you compute the gradient and it has an interval angle of five, we'll do it in spirit before this. Look at the form of the gradient. You'll find this is equal to one over r sine theta times phi hat. And so if I add the multiples of the gradient of five, it can affect this one germ of your monopole by being over r sine theta. One germ is really an exact gradient. So let's consider a gauge transformation in which we'll subtract and subtract twice the u times the gradient of phi of this earlier vector potential. We'll change that one there into a minus one. So this gives us a new vector potential a, which is overall minus one plus cosine theta divided by times u divided by r sine theta times phi hat. So it's a second vector potential related to the first one by gauge transformation. Now the reason I set this up this way is because one plus cosine theta goes to zero on the negative z-axis where cosine theta was the minus one. But on the positive z-axis where one plus cosine theta goes to two, we're divided two by a small number and it diverges there. The result is this vector potential, from x, y, and z, has a singularity of the positive z-axis which I'll indicate by a square root of minus, the strain in this case. So in other words, by doing a gauge transformation we move the strain from the negative to the positive z-axis. Now, let's give some names to these two vector potentials. This vector potential over here is well-behaved in the northern hemisphere, the only place that's really bad is in the negative z-axis. But in particular it's well-behaved in the northern hemisphere where it goes bad when you get down towards the south pole. So let's have this again subscript on this to call this north regular gauge, meaning that it's regular and well-behaved in the northern hemisphere. 
Actually, you can continue down into the southern hemisphere to go below the south pole. Likewise, this gauge is well-behaved in the southern hemisphere and only diverges when you get up to the north pole. So let's call this AS standing for south regular gauge. These are these two gauges. And then the gauge transformation looks like this, is that A-nord is equal to A-south plus 2 mu times the gradient of phi. This is the explicit gauge transformation for the negative edge. Now, the question that arises is whether it's even possible because these strings are appearing where the singularities, which we know like, a question arises whether it's possible to do a gauge transformation to give it a string all together. In other words, is there a vector potential that's well-behaved and smooth everywhere that satisfies this situation everywhere? And the answer is no. You can prove it this way. Prove it by contradiction. Let's suppose there is a smooth, well-behaved vector potential that gives the monopole field everywhere in all directions. Now, of course, the origin itself, the singular, we know that. We're not having this small sphere around the origin out. We'll just talk about it every day else. It's way from origin zero. Is it possible to have a vector potential in that region, which is well-behaved everywhere? The answer is no, because if we take our monopole here and let's construct a sphere of some radius around it, and let's cut off an ink cap on the sphere much broad like this, there's an ink cap of some radius. And let's say we integrate the vector potential around this loop here. So the idea here is to prove this by contradiction. We will assume that there is a smooth vector potential in case v equals del cross a. I'll show you that. It leads to a contradiction. So we integrate this sphere of vector potential around this closed loop here, but that's the work by Stokes' term. It's equal to the integral magnet field in the surface of E dot dA. 
Here's the magnet field sticking out everywhere. Let's say the surface has some radius. It doesn't matter what the radius is. All the radius capital R. Now, what we want to do is to allow... So you may think about this in the limit of where you started out with a circle that surrounds this flat disk. When you take your finger and you press the disk in and you make a flow in, you create a bulge like that with a circle. And then you shrink the circle down like this much and close off the sphere. That's what we're ending up with here. As we allow this hole in the sphere to shrink down smaller and smaller, it's clear that the magnetic flux on the right-hand side approaches a total magnetic flux of the monopole going up to the sphere. So in the limit that the circle goes to zero, if I take the circle going to zero, then the right-hand side turns into port pi U, which is the net magnetic flux of this monopole here, by a constant law. On the other hand, the A dot B L must go to zero because the length of the curve is going to zero and we're assuming that A is smooth. So you've got contradictions that's clearly wrong. As long as the strength of the monopole is down to zero, this is a clearly wrong. And so the assumption that there exists a smooth A is wrong. And therefore, we must live with the presence of these monopole strings or singularities of some type. Now, this means that we need to work with patches if we want to deal with smooth functions. We can deal with two patches here, one that covers the northern hemisphere using the north regular gauge and one that covers the southern hemisphere using south regular gauge. Let's allow these two, let's extend the north regular gauge a little bit over the equator down into the southern hemisphere. When the south regular gauge is a little bit over the equator we can reach around the equator. I'll just sketch this story like this and say here's a sphere of some radius. Here's the equator going through here. 
Let's say the north, so here's the north pole, here's the south pole. So the north regular gauge will allow to extend like this. We could actually push it all the way down with the south pole, but we don't need to do that. Let's just push a ruler beyond the equator. Likewise, the south regular gauge let's push it around the equator here like this. This is the region of definition of north regular gauge and this is the region of definition of south regular gauge. If we do that, then there's a strip around the equator in which both gauges are defined. In that strip one can do a gauge transformation to go from one to the other. Each gauge is smooth in its respective region. This is covering the manifold of the patches. When we turn to quantum mechanics we know that when you change the gauge transformation you also need to change the wave function. So the wave function for this electron is actually going to be two of them. There's going to be one in the north region and one in the south region. Let's call these sine, more than the sine, south and these are related by gauge transformation. The only thing that's hard to remember is the sine, so I worked it out. And this is E to the... even the... This is an electron with a charge of E is killing this light C. So you have to take that into account. But you look for the formula for the gauge transformation and it involves a negative charge part of this involved minus E over h bar C times the gauge scalar. But the gauge scalar is right here. It's 2 mu times 5. So multiply that as 2 mu times 5. With 5 is the guess of the equation. And this is what applies. This gauge transformation applies in the equatorial script where the the two patches overlap. So in particular, this is the exponential of a constant times the guess of the equation. Now wave functions have to be smooth. But you didn't know these two patches. Why is that? Because in the first place there are solutions to the Schrodinger equation of those two patches. 
And they get the effect of potential of the smooth of those two patches. And so you get smooth solutions. You have to or else you won't satisfy the Schrodinger equation. Also, wave functions are simple value. And the reason for that is is that they're the expansion of the state, the upon state in a position eigen, the basis of the position eigencast. So you have two expansion coefficients that you need. And that's depending on the gauge. And why is that? It's because when you do a gauge transformation it's equivalent of changing the phases of the position eigencast. However, that does not change the fact that there's a unique expansion in terms of a given case convention for the basis states of the unique expansion. The result of this is that both side north and side south must be smooth everywhere, from particular area to that region. And therefore, this function in the function of five which is periodic in those bounds or in two parts. And so this function can be smooth only if the coefficient is an integer. And so what we find is that the consistency of quantum mechanics requires that this product here which is twice the u divided by h, y, c must be consistent even again. Now this can be written in various ways and this is what adds to the inclusion of this analysis. Now this can be written in various ways. This product of the electric charge times the magnetic charge is an integer equal to h, y, c divided by two times an integer. That's one way of writing it. Another way of writing it is to say the electric charge is h, y, c over two mu times an integer. And this is the one that Dirac particularly calls attention to. Because what this says is is that if there exists a monopole anywhere in the universe even just one, the charge mu then quantum mechanics requires that the charge known with charged particles must be quantized in the integer multiple of this unit. 
And so this is potentially when we say an explanation it would be an explanation for the quantization of electric charge if monopoles exist. Anyway this is a tip of the iceberg of a host of theoretical considerations that revolve around monopoles. And as I say they're certainly taken seriously today even though they've experimentally never been asserted. Why doesn't it necessarily have to have monopole-electric monopoles or even monopoles for other kinds of fuels as well that are certainly actively asserted. Alright, this is the main point of the Dirac analysis of the monopole notice we never had to solve the Schrodinger equation. We just found some constraints on it based on a single value this is the wave function of the H-transformation. Why does the solution of Dirac have to be an example of these things called the bonds? Why does it require only one monopole? Because if there were one monopole anywhere it would be with wave functions of the electron in the field of that monopole that the electron charge would have to be an integer monopole of that quantity in my own brackets. Why wouldn't that field be screened out of here? So are we saying that that field would not have an effect after you charge part of the solution? Well, let's just take our electron and bring it close to the monopole. If there are other monopoles around the box the charge is yes, you could be screened but it doesn't matter because all you need is that you can take a monopole and bring the electron close to it. So one mechanics that requires that the electron charge be an integer monopole of the inverse of the magnetic charge. The product of the electric is an integer monopole. How is the polarization affected if we make any monopoles? How different are these? Well, nothing would make any sense. You'd have to go back to the beginning and probably revise a lot of basic physics. That would be inconsistent with quantum mechanics. 
Since monopoles have never been observed of course, you know, something that's not there, we know it's not there. This remains to be seen. If you're surprised that our department was quite actively looking for monopoles at some time in the past, it was a history here at Berkeley of looking for monopoles. What do you do to look for them? Well, a monopole would behave in a magnetic field the way an electron would behave in an electric field. For example, the Earth's dipole field would suck monopoles into North Pole and then bring them out to South Pole, I guess. It would give you lots of monopoles through the three areas. So that's what the difference would be looking for. It would also have a big impact on in terms of the interactive electrons. They'd have a big, if you had a monopole moving through like a gas or a portion of the chamber or something, it would create a lot of ionization. I know at one point it would lose the electron with detectors that would look for heavy ionization tracks, and they were going to test it out by using uranium. Because uranium emissions would produce these large charged particles that would give heavy ionization tracks that would be how they would test out the equivalent number of devices. They never did the experiment, but that was an idea. Okay. So that's magnetic monopoles. That's all I'm going to say about charged particles in magnetic fields. And I would like to turn to a new topic which will occupy us for a while, which is the subject of rotations of quantum mechanics. And I want to get through this quickly as possible because there's a lot of physical applications of the phenomena. So before we get into rotations of quantum mechanics, I want to say some things about rotations of classical mechanics, or one might even say just classical geometry, in this case. So let's begin by plopping down an inertial frame in coordinates x, y, and z. This is a inertial frame. 
By the way, the definition of an inertial frame in quantum mechanics is that it's a frame in which Newton's laws hold. This means in particular that the trajectories of free particles are straight lines of which a particle moves with constant velocity. If that's your definition, then inertial frames are not unique because you can translate them and rotate them and even boost their relative one to another. But here we're just going to pitch one inertial frame. The same definition, by the way, also applies in special relativity that inertial frame is a frame in which free particles move on straight lines of constant velocities. Whether this is relativistic or non relativistic, it's not going to matter for the material I'm presenting today because it's only going to be considered spatial rotation. It's not going to fall in the time it's not going to enter in any way. All right, let's take this to the inertial frame. By notation, allow me to write the coordinates as either x, y, and z or x1, y1, x1, x2, and x3 as per convenience. So let's also take the new chapter as x hat, y hat, and z hat and write them in an alternative notation as e hat, y hat, 2, and e hat, 3. If we take a vector in this space and we call it b, b doesn't mean magnetic field, we're just doing some vector, then you can, of course, expand this as one of your combinations of the basis vectors, e hat, the piece of i times coefficients of b i, like this. Let's also talk about dot product. The dot product of two vectors of b times c. This is, of course, the sum of i of b i times c i. This follows from the fact that the even vector is not working normal. By the way, this assumption requires an assumption that the geometry of space is Euclidean, which would also make that assumption. These are all these assumptions that we're talking about here, can all be tested experimentally. In fact, it's known that the geometry of space is not really Euclidean, but it is a very good degree of approximation. 
So we use this. And by way of notation allow me also to write the scalar product of two vectors in kind of a bracket notation when the ground is brackets just b times c. This will be useful later on in clarifying about products. All right, now having done that allow me to define what we'll call a rotation operator. This is a rotation operator that acts on space. So I'll call it r and it maps space into itself. What I mean by space here is it maps points of space which can be identified relative to the origin of this inertial plane. And it is an operator that satisfies a couple of properties. One of these is that if you let r act on the origin of our coordinate system, which I'll call a script O, it acts on an O and it needs to go along. So the origin of this mapping and the second one is that if you let r act on any vector b, that its length is the same as the length of the original vector b, you can square this two or three points the same way if you want it as lengths are positive. So in other words, a rotation for us will be a transformation that takes points of space into themselves in such a way that the origin is left to vary and also in such a way that all the lengths are preserved. Now if you preserve length, you also preserve angles because angles can be represented in terms of lengths. And so this means that straight lines go into straight lines, triangles go into congruent triangles with the same lengths of the size of the areas, etc., etc. Alright. And these facts imply it can be shown that r is a linear operator that is to say that if you let r act on a linear combination, we call it bd plus cc, lowercase letters are numbers and nevercase letters this is the same as b of r acting on a capital B with lowercase c, r acting on a capital C. I will prove that you can actually prove it without too much trouble to think about it. 
I don't want to waste too much time or spend too much time on the axiomatic details of this setup because most of what I'm saying is quite plausible anyway but this is a fact that this is a linear operator. Now that means in particular that if I take the vector b and I write it as a sum in the basis vector what we call it bj times b sub j is a coefficient sum on j it means that if I apply a rotation operator of both sides to rb if you draw a picture here of the vector b, let's say a rotated vector called b prime so the rotation is moving from one place to the other so let's call this b prime c of r times b this is the same thing as the sum on j the rotation operator acting with the unit vector is ej times the coefficient of b sub j however, let's take the b prime and expand that also in terms of the basis vectors this is the sum on i of basis vectors e hat i times the expansion coefficient so called bi prime so notice the bi primes are the expansion coefficients with respect to our inertial frame we can solve for the coefficient we can solve for the coefficients of the rotated vector in terms of the coefficients of the original vector we just take this equation and dot both sides with e hat i because I'm going to pick out a single term of this equation and if we do this what we get is b prime i is equal to the sum on j of the scalar product bi scalar product of r e j times b sub j and now let's take this scalar product here and just define this to be what we'll call rij which is to say those are the numbers that make up the components of a rotation matrix it's a 3 by 3 matrix and so the result is we have the simple result of b prime i sum on j of our rotation matrix rij times b sub j this is the transformation this is how the components of a vector change or they're under-applied in a rotation okay so this introduces the rotation matrix notice that the rotation matrix there is defined in a manner similar to matrix elements in quantum mechanics that is to say this 
is the rotation matrix it's really the matrix element of rotation operated with respect to the unit vectors of our inertial frame of the rotation matrix now let me say some things now about the active and passive points of view whenever you deal with symmetries and quantum mechanics or anywhere else you need to have a racer so you can erase the board and finish things which you need to say so excuse me for moving our people obviously still this is a racer so there's two points of view in applying symmetries in quantum mechanics or anywhere else in physics and the one that I presented up here in this top board is called the active point of view the active point of view is one in which you use only one coordinate system what you do is you take some physical system here think of this vector as pointing to some part of your apparatus some physical system and you take your physical system and you rotate it to a new place or new orientation that's why it's called active because you're really taking the physics to change it the passive point of view however uses two coordinates so in other words here it's an old vector and a new vector but there's only one coordinate system that's the active point of view in the passive point of view which you have is two coordinate systems an old coordinate system and a new one where you take a given physical situation and describe it in two different ways by two different coordinate systems so let me show you how the passive point of view works first of all we've got our inertial frame x, y and z so this is all active up here then let me talk about passive down here now so you've got the original inertial frame like this and then we talk about a rotated frame that has axes let's call them x prime, y prime and z prime like this and if we have unit vectors describing the initial field the original frame we have we have transformed unit vectors describing the new frame so let's suppose let's write in this this way let's let the d prime i's 
the e i's of the prime are the unit vectors of the prime frame yes point of view we really use the fact that this is an inertial frame we use that fact when we start to apply this physically at this point I will just describe you in geometry and I can put this securely and have it out in terms of transformations on Euclidean R3 or something like that but the point is that you can't talk about points of space in terms of x, y and z coordinates unless you've got a coordinate system this is an inertial frame in any case from the passing point of view you have two frames in any case you don't have to use inertial frames to describe physics, you can have rotating frames and all kinds of things you don't bring all these effects up here in the summer but I'm going to be talking about just inertial frames now and I'll repeat that this actually applies also in special relativity as far as everything I'm going to say today is concerned because it won't be written in time so let's call the unit vectors of the rotated frame the primed unit vectors and let's let R be the rotation operator in the same sense as the board above which maps the unit vectors of the unrotated frame into the rotated frame the original frame and the old frame into the new frame alright let's take our vector B and let's expand it with coefficients Bi with respect to the old basis vectors and let's expand it again in a different way with coefficients Bi prime with respect to the new basis vectors and now the original problem is to find the components of the vector with respect to the new frame in terms of the components of respect to the old frame well on the right hand side let's plug it in the definition of E prime i this is the sum on i of our rotation operator applied to E hat i that's the vector multiplied times the E prime i in fact if you're allowing me since i is a dummy index let me change this to j in some summation here now that way the other hand is equal to E i times Bi so if I've got both 
sides with E hat i what do I get? on the left hand side I get B sub i and on the right hand side I get the sum on j and then I got a dot problem of Bi with R acting on Bj that's the same as you see here from the rotation matrix so it becomes R ij times B prime i and this is the passing law transformational law compared to the active transformational law the primes and the primes have been reversed but also the meaning of primes and the primes are different now while the unprimed Bi is the same in both places that represents the components of the vector with respect to the original frame from the active point of view that's the only frame there is and the B prime i is the components of the rotated vector with respect to the original frame here the B prime of j are the components of the original one note that there is only one vector in draw n but with respect to the rotated frame so there is a different interpretation of the symbols and also the matrix gets converted into its inverse in these two different interpretations so I just wondering now we're pretty consistently in this course we're going to as much as possible stick with the active interpretation but in other courses in lots of books you'll see the passing point of view so whenever you compare the two you need to be aware of the reinterpretation of the meanings of the symbols and also the matrices typically go under inverses we won't just pass this point to you very much the active point of view is really better now the next thing to say is that I'll go back to these properties of rotations that they preserve links as you see the idea of rotation operator acting on a vector here gets translated into matrix multiplication times the vector in a different sense the sense of the triplet of numbers over here but the formula looks the same the new vector as r times the old vector y in any case since the magnitude of a matrix squared of a vector is reserved under rotation that's definition of a rotation and in 
vector matrix life we can write it like this that goes right as r acting on v this vector squared is the same thing as the vector v transpose times r transpose times r times v put it in the same vector language it's the same thing by definition of rotation operator is the square root of v which is the same thing as v transpose times v and since v is an arbitrary vector here this has to hold for a fraction of an identity in terms of life since v is an arbitrary vector here this implies that r transpose is equal to the identity and this implies that r transpose is equal to r inverse which is I'm sure you know that r is an orthogonal matrix it's definition of an orthogonality and this in turn implies that r times r transpose has to say any other order is equal to the identity so these are all properties that follow the preservation of lengths from these properties it follows rather easily that the product of two rotations is another rotation the follows of rotations are invertible because the transpose is the same as the inverse so they always have a non-zero determinant you can find the inverse and it also follows that the inverse of a rotation is a rotation and these are the usual properties you need to establish these are the axioms or definitions of the group it means that the set of rotation matrices forms a group and this group is given a standard name in mathematical physics it's called O3 the O means orthogonal and 3 means they're 3 by 3 matrices it's just notation for the set of these matrices so the 7 3 by 3 matrices it preserves a distance so therefore angles forms the group O3 so we can ask for 2 however we can ask whether or not the rotation for example here are mapping an old set of vectors from one inertial frame into another set of a rotated frame whether this preserves a hand or this preserves a hand this is a frame so suppose the whole frame is right-handed in fact I meant to say that let's choose our original inertial frame to be a right-handed 
Now the question is: is this rotated frame also right-handed? The answer is that it depends on the rotation matrix. Here's how this works out. Take the equation R^T R = I and take the determinant of both sides. The determinant of a product of matrices is the product of the determinants, and the determinant of the transpose is the same as the determinant of the original matrix, so what we get is that (det R)^2 equals 1, the determinant of the identity. By taking the square root, we find that det R is either +1 or −1. There certainly are matrices, elements of O(3), whose determinant is −1. If you want to see an example, there's a matrix P we'll see appearing in the future; it's got −1's on the diagonal. This matrix is orthogonal, but its determinant obviously is −1, so it's in the second class. The first class, where the determinant is +1, is called the proper rotations, and the class where the determinant is −1 is the improper rotations; the matrix P is an example of an improper rotation. It's also called spatial inversion, because each component of a vector it acts on is multiplied by −1, so it takes a vector and flips it through the origin. This is the classical equivalent of the parity operator we'll talk about later in the course, and I think it's the simplest example of an improper rotation. Now, the set of proper rotations all by itself also forms a group, called SO(3). The S is just terminology; it stands for "special".
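A small numerical illustration of my own: the inversion matrix P = −I is orthogonal with determinant −1, so it sits in O(3) but not among the proper rotations:

```python
import numpy as np

# Spatial inversion (parity): -1's on the diagonal.
P = -np.eye(3)

assert np.allclose(P.T @ P, np.eye(3))     # orthogonal, so P is in O(3)
assert np.isclose(np.linalg.det(P), -1.0)  # determinant -1: improper

# P flips any vector through the origin.
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(P @ v, -v)
```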
What "special" means in this context is that the determinant is equal to +1; that's the meaning of the word. So proper rotations are orthogonal matrices whose determinant is +1. Now, the improper rotations by themselves don't form a group, because they don't contain the identity element, but it's true that if you take the union of the proper and the improper rotations, then you get back the full group O(3). So SO(3) is a subgroup of O(3), the distinguishing characteristic being determinant +1. Those are the main elements of the rotation groups, both SO(3) and O(3).

Next I'd like to say some things about the parameterization of rotations. There are actually two common parameterizations that are used: one is called the axis-angle parameterization, and the other is the Euler angles. I'll tell you about the axis-angle parameterization first; in some ways it's the simpler one. The idea of the axis-angle parameterization, I think, is quite simple and plausible geometrically. An axis is identified with a unit vector, which I'll call n̂; you can extend it out and mark the axis. The rotation about that axis by an angle θ rotates the points of space around the axis in the right-hand-rule sense. I think it's clear geometrically what that means; this applies to proper rotations only. Let's call this rotation R(n̂, θ), and this in fact is what I mean by the axis-angle parameterization: the rotation is specified by the axis and the angle. Now, it's a fact that every proper rotation can be written in axis-angle form. This is something I won't prove; it's actually a fairly easy thing to do, and I think it's also plausible if you think about what rotations do: every rotation has an axis, and then there's some angle of rotation about that axis that fills in the rest of the information about the rotation. To prove that every rotation has an axis, note that an axis is a vector the rotation doesn't change: a vector which is an eigenvector of the rotation matrix with eigenvalue +1. If you really want to prove this, you start by showing that every rotation matrix has an eigenvalue +1. Anyway, this is the notation we'll use for rotations in axis-angle form.

Now it's pretty easy to work out a formula that tells us what happens when you take a rotation in axis-angle form and apply it to an arbitrary vector, which I'll just call u here. So let u be an arbitrary vector; we want to write out, on the right-hand side, a formula for R(n̂, θ) u. It's pretty easy to work out what this is just by drawing some pictures. We draw the unit vector n̂, which defines the axis, extended outward, and then our vector u, some vector like this; it's not necessarily a unit vector, so it sticks out longer. We're going to rotate this by an angle θ about the axis, and it's clear what that will do: it takes this vector and swings it around the axis, so the tip of the vector u moves on a circle, centered on the axis of rotation. We can take this vector u and decompose it, first of all, into its component parallel to the axis and then its component perpendicular to the axis; we can call these parts u-parallel and u-perpendicular. Now, first of all, what is u-parallel? It's just the component of u along the direction n̂: u-parallel = n̂ (n̂ · u). As for u-perpendicular, it's what's left over, the total u minus u-parallel, and if you work this out it can be written as (n̂ × u) × n̂; expanding that triple product by the usual rule shows this final expression is the same thing as the one above. Let's continue with this picture.
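The parallel/perpendicular decomposition just described can be verified numerically; here is a sketch of mine with an arbitrary axis and vector:

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])
n = n / np.linalg.norm(n)        # unit vector n-hat along the axis
u = np.array([0.3, -2.0, 1.5])   # arbitrary vector, not a unit vector

u_par = n * np.dot(n, u)               # component along the axis: n (n·u)
u_perp = np.cross(np.cross(n, u), n)   # (n × u) × n

# The two pieces reassemble u, and u_perp really is perpendicular to the axis.
assert np.allclose(u_par + u_perp, u)
assert np.isclose(np.dot(u_perp, n), 0.0)

# Triple-product expansion: (n × u) × n = u - n (n·u).
assert np.allclose(u_perp, u - n * np.dot(n, u))
```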
If you rotate u by the angle θ about the axis, it obviously doesn't change u-parallel, but it rotates u-perpendicular within the plane perpendicular to the axis, moving it over to a new position. It might be easier to visualize this if we look down the axis from above, with n̂ pointing toward you. Here is our vector u-perpendicular, drawn in the plane, and let's draw a vector of the same length perpendicular to it, which is n̂ × u. It's sort of like a local x and y direction that I'm imposing here: the vector (n̂ × u) × n̂, which is u-perpendicular itself, is like a local x axis, and n̂ × u, obtained by crossing n̂ with u, is like a local y axis, with the rotated vector sitting at angle θ around the circle. If you draw a picture like that, it's pretty clear what the answer has to be. The rotation leaves u-parallel alone, which gives the term n̂ (n̂ · u), and then we just have a two-dimensional rotation in the perpendicular plane: cos θ times the original perpendicular part, which is (n̂ × u) × n̂, plus sin θ times n̂ × u. So

R(n̂, θ) u = n̂ (n̂ · u) + cos θ (n̂ × u) × n̂ + sin θ (n̂ × u).

Now I'm going to clean this formula up a little bit by expanding out that triple product; in fact I did so already, and here's the expansion: (n̂ × u) × n̂ = u − n̂ (n̂ · u). There's a piece proportional to u itself and a piece proportional to u-parallel. So if we put this all together, let me write it out and box it:

R(n̂, θ) u = u cos θ + (1 − cos θ) n̂ (n̂ · u) + sin θ (n̂ × u).

That's the formula for the rotation operator, in axis-angle form, acting on an arbitrary vector. Now, from this formula you can set n̂ equal to x̂, ŷ, or ẑ and obtain the rotation matrices for rotations about the three coordinate axes, and if you do that you get the matrices I summarized on the board before class. Here they are; don't bother to copy them down, they're all in the notes. These are the usual rotation matrices about the three axes, and I think you can see their structure. For example, rotating about the z axis is really a rotation of the x-y plane: there's the 2-by-2 rotation matrix of the x-y plane in one block, and it doesn't do anything to the z direction, which is why you've got the zeros and the 1 there. All three of these matrices block-diagonalize in the same kind of way.

Now there's something else I can do with this formula, which I'll put down on the board. Let's take the special case in which the angle θ is small, much less than 1. Then cos θ is essentially equal to 1, so 1 − cos θ goes to 0, and sin θ goes over to θ; that's everything to first order in θ. So for a small angle, what we get is

R(n̂, θ) u = u + θ (n̂ × u) + O(θ²):

the first term is just u itself, the second term vanishes, and the sin θ term turns into θ times n̂ × u. If you want to be careful about what this formula means, there's another term here of order θ², so this is a formula that's valid through first order in θ. So this is an arbitrary rotation applied to an arbitrary vector for small angles, and you can see that the small-angle correction is a new vector, θ (n̂ × u), which is perpendicular to the original vector.
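The boxed formula can be coded up directly. This is a sketch of mine, checked against the standard z-axis rotation matrix, with the eigenvalue-+1 property of the axis and the small-angle limit thrown in:

```python
import numpy as np

def rotate(n, theta, u):
    """Axis-angle rotation: R(n, theta) u = u cos(t) + (1 - cos(t)) n (n·u) + sin(t) (n × u)."""
    n = n / np.linalg.norm(n)  # make sure the axis is a unit vector
    return (np.cos(theta) * u
            + (1.0 - np.cos(theta)) * n * np.dot(n, u)
            + np.sin(theta) * np.cross(n, u))

theta = 0.9
u = np.array([1.0, 2.0, 3.0])

# Setting n = z-hat must reproduce the standard rotation matrix about z.
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
assert np.allclose(rotate(np.array([0.0, 0.0, 1.0]), theta, u), Rz @ u)

# The axis is an eigenvector with eigenvalue +1: vectors along n are unchanged.
n = np.array([1.0, 2.0, -1.0])
assert np.allclose(rotate(n, theta, n), n)

# Small-angle limit: R(n, theta) u ~ u + theta (n × u), up to O(theta^2).
eps = 1e-6
n_hat = n / np.linalg.norm(n)
assert np.allclose(rotate(n_hat, eps, u), u + eps * np.cross(n_hat, u), atol=1e-10)
```

The axis (1, 2, −1) is arbitrary; any axis and angle pass the same checks.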
So any given vector, if you rotate it by a small angle, moves in some direction perpendicular to itself; the increment is not parallel to the original vector, it's θ (n̂ × u). Now, there's another point of view on this. This is what people might call an infinitesimal rotation, meaning a rotation by a small angle, and the other point of view on small-angle rotations is quite easy to work out. It's just to say that if the rotation is small, then the rotation matrix must be a small correction applied to the identity. So again, for a small-angle rotation, here's your rotation matrix: the identity plus a correction matrix, which I'll write as ε times A. The ε here is really only for psychological reasons, just to remind you that this is a small correction; I could just omit the ε and say A is always small. Anyway, let's write it this way. Then, given the fact that R^T R is the identity (or R R^T, which is also the identity), this is the same thing as

(I + εA)(I + εA)^T = I,

and if you multiply this out you get I + ε (A + A^T) plus terms of order ε². Since the whole thing has to equal the identity, the order-ε term has to vanish, so what this implies is that A + A^T = 0, or A^T = −A; in other words, A is antisymmetric. So when you form a small correction to the identity matrix to create a small-angle, infinitesimal rotation, the correction matrix is an antisymmetric matrix. And I guess I'll stop there; we'll pick it up at this point.
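To close the loop numerically (a sketch of mine): extracting A = (R − I)/ε from a small z-axis rotation gives a matrix that is antisymmetric up to terms of order ε:

```python
import numpy as np

eps = 1e-6  # small rotation angle
R = np.array([[np.cos(eps), -np.sin(eps), 0.0],
              [np.sin(eps),  np.cos(eps), 0.0],
              [0.0,          0.0,         1.0]])

# Write R = I + eps * A and extract the correction matrix A.
A = (R - np.eye(3)) / eps

# A is antisymmetric: A + A^T vanishes up to O(eps).
assert np.allclose(A + A.T, np.zeros((3, 3)), atol=1e-5)

# For a z-axis rotation, A is the antisymmetric generator of z rotations.
assert np.allclose(A, np.array([[0.0, -1.0, 0.0],
                                [1.0,  0.0, 0.0],
                                [0.0,  0.0, 0.0]]), atol=1e-5)
```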