I want to call your attention to the fact that we have to leave this room, because it's too big and bigger classes need it. So effective next lecture, which is on Wednesday, September 1st, we'll be moving to 325 LeConte. Please don't forget that.

All right, last time I was telling you about Hermitian operators. I reviewed the definition of a Hermitian operator, which actually comes in two different forms that are readily shown to be equivalent. And I just want to mention a very simple but important theorem that occurs in quantum mechanics, which is that if you have two Hermitian operators, then the product of those operators is Hermitian if and only if they commute. The proof is so trivial that I won't go over it in class, but it is something that will occur many times in the course.

Now, what I'd like to start addressing today, first of all, is the subject of eigenvectors and eigenvalues, which is to say eigenkets and eigenvalues. Let's suppose we have an operator A, not necessarily Hermitian, acting on some ket space, and we have some ket |u⟩, which we take to be non-zero. I'm simply going to let A act on |u⟩ and suppose that it just multiplies |u⟩ by a number, lowercase a, which in general is a complex number: A|u⟩ = a|u⟩. Here capital A is the operator and lowercase a is what we call the eigenvalue, as I guess you all know. Likewise, the ket |u⟩ we'll call the eigenket, or sometimes the eigenfunction if we're talking about wave functions, or the eigenvector.

Now, the eigenvalue, as I mentioned, is in general a complex number, and it depends on the operator A. The theory of eigenvalues and eigenkets is considerably simpler in a finite-dimensional ket space than in an infinite-dimensional one. So I'm going to begin with finite-dimensional spaces, but I'll make a few comments as we go along about how the situation is different in infinite dimensions.

Here's the first thing to notice. The eigenket |u⟩ is supposed to be non-zero, because if it were equal to zero, this equation would just say zero equals zero, which is not very interesting; we exclude the zero case. So if |u⟩ is a non-zero ket, you can think of it as a vector that sticks off somewhere in the space in question. It's obvious from the eigenket-eigenvalue equation, however, that if you multiply |u⟩ by any constant, you get another eigenket. Since these are complex vector spaces, that constant can be any complex number, including phases and so on; it could be negative as well. So this equation does not determine the eigenket uniquely, only to within a multiplicative constant. To say it another way, from a given non-zero eigenket you can construct a whole one-dimensional subspace of eigenkets. You could call this an eigenray, if you want. In some sense, this equation really determines an eigenray rather than a single eigenvector.
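As a quick numerical illustration of this (a minimal numpy sketch; the particular matrix is an arbitrary example, not anything from the lecture):

```python
import numpy as np

# A small non-Hermitian operator on a three-dimensional ket space.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0 + 1.0j]])

# np.linalg.eig returns the eigenvalues and one eigenvector for each.
eigvals, eigvecs = np.linalg.eig(A)
a, u = eigvals[0], eigvecs[:, 0]

# The eigenket equation A|u> = a|u> holds for the returned pair...
assert np.allclose(A @ u, a * u)

# ...and any complex multiple of |u> (here a rescaling times a phase)
# satisfies the same equation: the equation determines an eigenray.
v = (2.0 * np.exp(0.7j)) * u
assert np.allclose(A @ v, a * v)
```

Note the eigenvalue 2 + i off the real axis: nothing forces the spectrum of a general operator to be real.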
Now, it may happen that in addition to this ket |u⟩ there's another ket, linearly independent of the first one, which is also an eigenket with the same eigenvalue. Let's call it |v⟩: A|v⟩ = a|v⟩. The point is that the a's here are the same, but the kets are different. And since |v⟩ is linearly independent of |u⟩, it sticks off in some other direction. Then it becomes apparent that any linear combination of |u⟩ and |v⟩ is also an eigenket, and so the space of eigenkets becomes a two-dimensional vector space. In these circumstances it becomes appropriate to speak of an eigenspace, and that's what we'll do. I'll call it ℰ_a, script E with a subscript a. I'm generally using script V for the entire ket space, the entire vector space; with the subscript a on it, ℰ_a stands for the eigenspace corresponding to eigenvalue a of the given operator A.

The result of this is that we get a geometrical interpretation of the eigenvalue-eigenket problem: if you have an operator A, it corresponds to certain privileged subspaces of the ket space. Just to draw a picture, imagine two of them: here might be one, and here might be another going off like this, one of them two-dimensional and one of them one-dimensional. Let's say this is ℰ₁ and this is ℰ₂, two different eigenspaces corresponding to eigenvalues a₁ and a₂. You can imagine it looking something like that.

Now, as you probably know, if there is more than one linearly independent eigenket for a given eigenvalue, then we say that there's a degeneracy, and the number of linearly independent eigenkets corresponding to that eigenvalue is what we'll call the order of the degeneracy. The way I drew it here, the order is two, because the eigenkets span a two-dimensional subspace; the order of the degeneracy is just the dimension of the eigenspace. If the eigenspace is one-dimensional, like I drew for ℰ₂, then we say the eigenvalue is non-degenerate. That doesn't mean the eigenket is unique, but it is unique to within a multiplicative constant. So, as we said, the dimensionality of the eigenspace is what we call the order of the degeneracy; it is always at least one, but it can be greater than one.

All right. Another thing that comes out of this is that a given operator A is associated with a collection of numbers, generally complex numbers, which are the possible eigenvalues of A. There's a terminology for that: it's called the spectrum of the operator A. The spectrum of A is just the set of eigenvalues; that's all it is. And it's usually viewed as a subset of the complex plane: plotting the real and imaginary parts of a gives us the complex eigenvalue plane. I mentioned a moment ago that I'm describing the situation in a finite number of dimensions first, for simplicity, and in a finite number of dimensions the spectrum consists of a discrete set of points, which can be distributed here and there in the complex plane. That's the spectrum of the operator; it's mostly just terminology. Let me give you some examples of spectra of operators that come up in quantum mechanics.
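Before the physical examples, here is a toy numerical version of these definitions (a numpy sketch; the diagonal matrix is an arbitrary stand-in chosen to have a degeneracy):

```python
import numpy as np

# An operator with a two-fold degenerate eigenvalue a1 = 2 and a
# non-degenerate eigenvalue a2 = 5.
A = np.diag([2.0, 2.0, 5.0])

u = np.array([1.0, 0.0, 0.0])   # an eigenket with eigenvalue 2
v = np.array([0.0, 1.0, 0.0])   # a linearly independent eigenket, same eigenvalue

# Any linear combination of |u> and |v> is again an eigenket with
# eigenvalue 2: the eigenkets form a two-dimensional eigenspace, so the
# order of the degeneracy is 2.
w = 0.3 * u - 1.7j * v
assert np.allclose(A @ w, 2.0 * w)

# The spectrum is the set of eigenvalues {2, 5}, a discrete subset of the
# complex plane.
print(sorted(set(np.round(np.linalg.eigvals(A), 10))))
```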
First of all, let's talk about the harmonic oscillator Hamiltonian. We write H|u_n⟩ = E_n|u_n⟩, and as you all know, the energy eigenvalues are E_n = (n + 1/2)ħω. So for the spectrum we have to look in a complex energy plane, with a real energy axis and an imaginary energy axis. The spectrum consists of the points (1/2)ħω, (3/2)ħω, (5/2)ħω, (7/2)ħω, and so on. As you see, it's a discrete spectrum, and it lies on the real axis.

Here's another example. Take the free particle, H = p²/2m, and here's what the spectrum looks like in the complex energy plane. In this case there are eigenfunctions for all non-negative values of the energy, so the spectrum consists of the entire non-negative real axis (the positive real axis, not the negative). This is an example of what's called a continuous spectrum, because it's not a set of discrete points. A continuous spectrum only arises in infinite dimensions, and I'll say more about it a little later; right now I'm just giving you a preview of some of the things that can happen.

Here's another example: the hydrogen atom Hamiltonian. Again we have to talk about the energy spectrum, real energy and imaginary energy. In the case of the hydrogen atom, everyone knows the ground state is at minus 13.6 electron volts, so there it is, at negative energy. Then there are other bound states of negative energy, and they pile up: they have an accumulation point at E = 0, and there's actually an infinite number of them that accumulate there. And then, although textbooks don't always emphasize this enough, for positive energy there's a continuous spectrum. The hydrogen atom possesses a continuous energy spectrum for positive energies, which is similar to the free particle as far as the spectrum is concerned. So the hydrogen atom is a mixture of a discrete and a continuous spectrum, as you can see.

And then finally, let me mention the case of a unitary operator, one whose inverse is equal to its Hermitian conjugate: U⁻¹ = U†. Again I'll plot the real and imaginary parts of the eigenvalues. It turns out that the eigenvalues of unitary operators are always phase factors, and thus they lie on the unit circle in the complex plane. If we are in a finite-dimensional space, there's only a finite number of them, so they're just discrete points sitting on the unit circle; they need not be evenly spaced or anything, they can be anywhere on the unit circle. So these are examples of what's meant by the spectrum of an operator, and you see that in general it has to be viewed as living in the complex plane; you see that with unitary operators. So that's the concept of the spectrum.
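Here's a quick check of that last claim (a numpy sketch; the QR trick for generating a random unitary matrix and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# The Q factor of a QR decomposition of a random complex matrix is unitary.
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(Z)

# U^{-1} = U^dagger, which is the definition of unitarity...
assert np.allclose(U.conj().T @ U, np.eye(4))

# ...and every eigenvalue is a phase factor, |lambda| = 1, so the spectrum
# sits on the unit circle in the complex plane.
eigvals = np.linalg.eigvals(U)
assert np.allclose(np.abs(eigvals), 1.0)
```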
Now, the whole subject of eigenvalues and eigenkets simplifies quite a lot in the special case where the operator is Hermitian. In fact, finding eigenkets and eigenvalues of arbitrary operators is a rather ugly process, and fortunately we don't have to do it very often in quantum mechanics. But in the case of Hermitian operators there are a number of simplifications, which I'll list here. So here we want to think of A|u⟩ = a|u⟩ with A Hermitian.

The first simplification is that the eigenvalue a is real. You can see this in the case of the Hamiltonians above, all of which have spectra confined to the real axis, because a Hamiltonian is Hermitian; the unitary operator, by contrast, has eigenvalues out in the complex plane. The proof of this is elementary. All we need to do is take the eigenvalue equation and multiply on the left by the bra ⟨u|, on both sides. What we get is the matrix element: ⟨u|A|u⟩ = a⟨u|u⟩. Then we take this equation and apply Hermitian conjugation to both sides. The left-hand side turns into itself, ⟨u|A|u⟩. If A were not Hermitian, you'd get a dagger in the middle; but because it is Hermitian, it doesn't, and you get the same thing back. That's what's special here. The right-hand side becomes a*⟨u|u⟩. The left-hand sides are equal, so the right-hand sides are equal too. And we're assuming the eigenket is not zero, because that's what we mean by an eigenket, so the scalar product ⟨u|u⟩ is non-zero. The result is that a = a*. You've probably seen this proof before; it's a very simple proof.

The second thing to say is that the eigenspaces are orthogonal. So the picture that I drew, which is still there at the top of the board, is actually not a very accurate picture for the case of a Hermitian operator. In the case of a Hermitian operator I should draw it like this: here's one eigenspace, call it ℰ₁, and here's another one, call it ℰ₂, corresponding to eigenvalue a₂, and these are orthogonal, so the one is actually perpendicular to the other. So while an arbitrary operator corresponds to certain privileged subspaces of the ket space, a Hermitian operator corresponds to orthogonal subspaces. To say that these spaces are orthogonal means that any vector in one of the spaces is orthogonal to any vector in the other. I've drawn this so that ℰ₁ is two-dimensional, a two-fold degeneracy, and ℰ₂ is non-degenerate, but the same picture applies for any degeneracies.

The proof of this is also quite simple. It works like this. Let's say that A|u₁⟩ = |u₁⟩a₁, where I'll write the eigenvalue to the right now instead of the left; it doesn't matter which way you write it, since it's a number multiplying a vector. And let's say there's a second eigenket: A|u₂⟩ = |u₂⟩a₂. These are two different eigenkets and possibly two different eigenvalues. Now, take the first equation and multiply on the left by the bra ⟨u₂|, on both sides; and the second equation by the bra ⟨u₁|, on both sides. Then take the second equation and apply the dagger to it, which reverses things: ⟨u₂|A|u₁⟩ = a₂⟨u₂|u₁⟩. I don't dagger the A, because it's Hermitian; and we learned a moment ago that the a's are real, so I don't have to complex-conjugate the a either. But now you see, if we compare the first and the third equations, the left-hand sides are equal, and so the right-hand sides have to be equal also. This implies that (a₁ - a₂)⟨u₂|u₁⟩ = 0. So either the two eigenvalues are the same, or else the scalar product is zero. That is clear in the picture here: if the two eigenvalues are different, so that the kets belong to different eigenspaces, then the vectors are orthogonal, just like I drew. On the other hand, if the two a's are equal, then nothing can be said about the scalar product; the kets are not necessarily orthogonal. And that's clear too, because if I've got two vectors inside the space ℰ₁, they certainly don't have to be orthogonal to each other.
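Both simplifications are easy to check numerically (a numpy sketch; the random Hermitian matrix and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian operator: H = M + M^dagger is Hermitian for any M.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = M + M.conj().T

# First simplification: the eigenvalues of a Hermitian operator are real.
assert np.allclose(np.linalg.eigvals(H).imag, 0.0, atol=1e-10)

# Second simplification: eigenkets belonging to different eigenvalues are
# orthogonal. A random H has distinct eigenvalues, so the eigenvector
# columns returned by eigh form an orthonormal set.
eigvals, eigvecs = np.linalg.eigh(H)
assert np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4))
```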
Of course, I can choose them to be orthogonal if I want to. This is related to the fact that a Hermitian operator on a finite-dimensional space always possesses an orthonormal eigenbasis, which is something I'll come to in a little while. But we don't have to choose an orthogonal basis inside one of these eigenspaces if we don't want to.

By the way, eigenvectors are not unique; I mentioned that already. Even in the case of a single eigenvector, you can multiply it by any non-zero scale factor. Of course, in quantum mechanics we would usually normalize eigenvectors to have unit norm, but even then the phase factor is left indeterminate, so even with that convention you've still got an arbitrary phase. If there's a degeneracy, it's worse than that, because it's clear from this picture that you can rotate the basis any way you want: there's a huge number of ways of choosing orthonormal eigenvectors inside this two-dimensional space, or inside any eigenspace with degeneracy higher than one.

All right. Now, there are further simplifications in the case of Hermitian operators, and let me illustrate them by going back to this picture here. In fact, since I'm now talking about Hermitian operators, let me erase some of the rest of the stuff here, which applied to other cases that I don't care about anymore.

So let's talk about the case of Hermitian operators. We have two eigenspaces here, and the way I drew it, you would probably imagine this is a three-dimensional space, broken up into 2D plus 1D. But what would happen if this were in four dimensions? Suppose I had an operator with a two-fold degeneracy for one eigenvalue and a one-fold degeneracy for another eigenvalue, acting on a four-dimensional space, but there weren't any more eigenvalues. Then it would be as if there weren't enough dimensions here to fill up the space.

Well, this leads me to a concept I was coming to, the direct sum of two subspaces. In the case of this drawing, we can talk about the direct sum of the two subspaces ℰ₁ and ℰ₂, written ℰ₁ ⊕ ℰ₂. The direct sum of two subspaces is just the set of all vectors you can get by making linear combinations of vectors that come from one space or the other. In this picture, that gives the whole three-dimensional space, as you can see. Another way of saying it is that if I choose a basis in ℰ₁ and another basis in ℰ₂ and throw the basis vectors together, which in this case gives me three basis vectors, then the direct sum is the span of that collection. The result is that the dimension of the direct sum is the sum of the dimensions: dim(ℰ₁ ⊕ ℰ₂) = dim ℰ₁ + dim ℰ₂. Dimensions add when you take the direct sum; that's just the meaning of the direct sum.

Now, if we have an operator and we consider its eigenspaces, and then we take the direct sum of all of the eigenspaces, the question is: does that fill up the whole space?
Like I said a minute ago, maybe this picture here is in four-dimensional space, so we've got a two-dimensional subspace and a one-dimensional subspace, but that's only three dimensions out of the four. What happens then? When that happens, we say that the operator is not complete. Completeness means that the eigenspaces fill up the whole space.

Well, there is a nice result about Hermitian operators, which is that they are complete, at least in finite dimensions; in finite dimensions, Hermitian operators are always complete. This is not true for non-Hermitian operators, and it's one of the great advantages of Hermitian operators: it simplifies a whole lot of things. So it would never happen in finite dimensions, say four, that you'd get two plus one equals three with a missing dimension; that wouldn't happen, at least not for Hermitian operators. In infinite dimensions, it turns out, this is not necessarily true: there are Hermitian operators that are not complete.

By the way, I don't know if I mentioned this last time, but in the mathematical literature you'll often see the term self-adjoint. If you're a purist about this, Hermitian and self-adjoint aren't exactly the same; the distinction depends on domains of definition and things like that. I will mostly not use the word self-adjoint, but if you see it, then for most practical applications in quantum mechanics you can regard the two terms as equivalent, because they are for most applications.

Well, in any case, in infinite dimensions a Hermitian operator is not necessarily complete. And so we introduce a new definition, which is that of an observable. An observable is an operator which is Hermitian and complete. It's not necessary to add the second condition in finite dimensions, but in infinite dimensions you should. Now, the reason this name is given has to do with the physical interpretation of quantum mechanics, in which one of the postulates is that if you make a measurement of a physical observable, the possible outcomes are the eigenvalues of the corresponding operator. The operator is supposed to be Hermitian; and if the operator were not complete, it would mean that if you added up the probabilities of all the possible answers you could get on making a measurement on an ensemble of systems, you could get an answer less than one, because some part of the state vector could stick off into the extra dimensions. That is absurd; it violates the probability interpretation of quantum mechanics. And this is why in quantum mechanics we deal not merely with Hermitian operators but with observables; these are really the only ones we deal with.
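A finite-dimensional caricature of the distinction (a numpy sketch; genuine incompleteness of a Hermitian operator needs infinite dimensions, so a non-Hermitian Jordan block stands in for the incomplete case here):

```python
import numpy as np

# A non-Hermitian operator can fail to be complete even in finite dimensions.
# This Jordan block has the single eigenvalue 0, and its eigenkets span only
# a one-dimensional line, not the whole two-dimensional space.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
_, eigvecs = np.linalg.eig(J)
print(np.linalg.matrix_rank(eigvecs))   # 1: the eigenspaces do not fill the space

# A Hermitian operator in finite dimensions is always complete: its
# eigenvectors span the whole space (eigh even returns them orthonormal).
H = np.array([[1.0, 2.0],
              [2.0, -1.0]])
_, V = np.linalg.eigh(H)
print(np.linalg.matrix_rank(V))         # 2: the eigenspaces fill the space
```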
One of the consequences of all this, which is fairly obvious in a picture like this, is that an observable, or any Hermitian operator on a finite-dimensional space, possesses what we call an orthonormal eigenbasis. A basis just means a set of vectors that span the space, and an eigenbasis means that the basis vectors are also eigenvectors of the given operator. Well, it's clear that this is true, because the different eigenspaces are orthogonal to one another, so all you have to do is choose an orthonormal basis inside each of the eigenspaces; the collection of all of those is then an orthonormal eigenbasis for the whole space.

Here's some notation for this. Let's suppose that A acting on a ket I'll call |n,r⟩ brings out an eigenvalue: A|n,r⟩ = a_n|n,r⟩. The r here is an index that we introduce to, as we say, resolve the degeneracies. In the notation |n,r⟩, we let r run from 1 up to d_n, where d_n is defined as the order of the degeneracy of the eigenvalue a_n; in other words, it's the dimension of the corresponding eigenspace. So the idea is that in the ket |n,r⟩, the n indicates the eigenspace, and the r labels an orthonormal basis chosen inside that space. As I mentioned, there's a great deal of arbitrariness in how you choose that basis. In any case, that's the notation for the eigenkets, the orthonormal eigenbasis; and notice that the eigenvalue doesn't depend on r, it depends only on n.

If you have this orthonormal eigenbasis, then corresponding to it we have a resolution of the identity, which says that 1 = Σ_{n,r} |n,r⟩⟨n,r|. This is just the normal thing you get when you write a resolution of the identity in terms of an orthonormal basis. It turns out you can also write the operator A itself in terms of the orthonormal basis. It looks like this: A = Σ_{n,r} a_n |n,r⟩⟨n,r|. That's just like the line above, except you insert the eigenvalues under the sum. How do I know these two operators are the same? Well, you see, if you have an operator and you know what it does to a set of basis vectors, then you know what it does to any vector, by linear superposition. So the way you prove that these two sides are equal is to let them act on basis vectors. Which basis vectors should you choose? The obvious answer is these basis vectors; put those in there, and you can easily check that both sides give you the same answer.

By the way, I should have mentioned the orthonormality conditions for these vectors. They are the following: ⟨n,r|n′,r′⟩ = δ_{nn′} δ_{rr′}. The two deltas are there for different reasons. The delta in n and n′ is because of the property over here, that different eigenspaces are orthogonal, as in the picture where the eigenspaces are perpendicular to each other. The delta in r and r′ is because of our convention of choosing an orthonormal basis inside each of the eigenspaces. In any case, this is the complete orthonormality relation, and it's what leads to this resolution of the identity in terms of the eigenbasis.

Right. Now, I defined the direct sum of two subspaces up here, and I hope it's clear what that means. Let me just mention now that the completeness property can be expressed in terms of the direct sum: it says that if you take the direct sum of all of the eigenspaces, you get the entire ket space. That's completeness in terms of direct sums; it's just another way to say it.
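Here are the resolution of the identity and the spectral representation of A checked numerically (a numpy sketch; the diagonal matrix is an arbitrary example with a two-fold degeneracy):

```python
import numpy as np

# A Hermitian operator whose eigenvalue a_1 = 1 is two-fold degenerate.
H = np.diag([1.0, 1.0, 4.0])
eigvals, V = np.linalg.eigh(H)   # columns of V: an orthonormal eigenbasis |n,r>

# Resolution of the identity: 1 = sum_{n,r} |n,r><n,r|
one = sum(np.outer(V[:, k], V[:, k].conj()) for k in range(3))
assert np.allclose(one, np.eye(3))

# Spectral representation: A = sum_{n,r} a_n |n,r><n,r|
A = sum(eigvals[k] * np.outer(V[:, k], V[:, k].conj()) for k in range(3))
assert np.allclose(A, H)
```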
As I mentioned, most of the theorems about eigenvalues and eigenkets and so on are considerably simpler in finite dimensions. So now let me say something about the infinite-dimensional case, so you can get an idea of some of the things that can arise there. You've actually seen some of it already: a continuous spectrum can occur in the infinite-dimensional case, whereas in finite dimensions the spectrum is always discrete.

To illustrate what can happen in infinite dimensions, let's talk about the space of wave functions in one dimension, which we know a lot about already, and let's talk about the momentum operator, which I'll denote by p̂; of course, p̂ = -iħ d/dx in the Schrödinger representation. And I'm going to write p without a hat for the eigenvalue of the operator. So the eigenvalue-eigenfunction problem for the momentum looks like this: p̂ acts on some function, let me call it u_p(x), which is supposed to be an eigenfunction of the momentum operator with eigenvalue p, and it has to just multiply it by p. That is, -iħ du_p/dx = p u_p(x), which is a simple differential equation. When we solve it, we find that u_p(x) = e^{ipx/ħ}, and that's it; to within a multiplicative constant, that's the eigenfunction.

Now, this differential equation has a solution for any value of the momentum eigenvalue, not only real values but also complex ones. Does this mean that the spectrum of the momentum operator is the entire complex plane? Well, the answer is no, and part of the reason is that this is a continuous spectrum, and I need to tell you some things about that. But let's just notice what happens if p has a non-zero imaginary part. If you imagine an eigenvalue off the real axis in the complex plane, then this function diverges exponentially, either for positive x or for negative x, and it is rather badly behaved. It certainly wouldn't obey the boundary conditions we're used to in quantum mechanics, where you want wave functions to die off at infinity. Actually, what you really want is for the wave functions to be normalizable; those are the ones that are physically meaningful, because the probability of finding the particle somewhere has to be one, and so only normalizable wave functions are physically meaningful. Well, in any case, if p has a non-zero imaginary part, the eigenfunction is certainly not normalizable.

What happens if p is real, sitting on the real axis? Well, then this becomes the familiar e^{ipx/ħ}, sines and cosines, which, however, does not die out as x goes to plus or minus infinity; its absolute value is 1 everywhere. So it's still not normalizable, it still doesn't represent a physical state, and it still doesn't belong to Hilbert space. And so what we have to say is that if you're looking for normalizable vectors, states that belong to Hilbert space, then the momentum operator doesn't have any eigenfunctions at all, much less an eigenbasis.

So how do we deal with this situation? Well, I'm just going to describe to you how one normally thinks about it. In the first place, we restrict consideration to real values of the momentum only. This follows the case of finite dimensions, in which Hermitian operators have only real eigenvalues.
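Before going on, here's a discretized stand-in for this operator (a numpy sketch, not from the lecture; the periodic grid, ħ = 1, unit grid spacing, and N = 8 are arbitrary choices):

```python
import numpy as np

# A finite stand-in for p = -i hbar d/dx: a central difference with periodic
# boundary conditions on N grid points (hbar = 1, grid spacing 1).
N = 8
S = np.roll(np.eye(N), 1, axis=0)   # cyclic shift matrix
P = -1j * (S - S.T) / 2.0           # Hermitian, since (S - S^T) is antisymmetric
assert np.allclose(P, P.conj().T)

# The eigenvalues are real, as they must be for a Hermitian operator.
eigvals, V = np.linalg.eigh(P)
assert np.allclose(eigvals.imag, 0.0)

# The eigenvector of the lowest (non-degenerate) eigenvalue is a discrete
# plane wave e^{2 pi i k n / N}/sqrt(N): every component has modulus
# 1/sqrt(N), echoing the fact that |u_p(x)| = 1 everywhere in the continuum,
# which is why these eigenfunctions are not normalizable on the real line.
print(np.round(np.abs(V[:, 0]), 6))
```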
Having restricted ourselves to real p, we then take the eigenfunction u_p(x) and divide it by the square root of 2πħ, so that u_p(x) = e^{ipx/ħ}/√(2πħ). This is a normalization constant, and the reason for choosing it is the following. Let's take the scalar product of u_p with u_{p′}; the idea here is to investigate the orthogonality relations of these eigenfunctions. Let me abbreviate by writing this as ⟨p|p′⟩, dropping the u and just labeling the eigenfunction by its eigenvalue. Well, this is the same thing as the integral over x of u_p(x)* times u_{p′}(x), and that's equal to the integral over x of e^{-i(p - p′)x/ħ}/(2πħ). And that is the standard representation of the Dirac delta function δ(p - p′). So, to summarize it in one spot: ⟨p|p′⟩ = δ(p - p′). The division by √(2πħ) is the normalization that makes this delta function come out cleanly; that's why I did it. So what we see is that these eigenfunctions of the continuous spectrum, which is what we're getting here, are normalized in the delta-function sense instead of the Kronecker-delta sense.

Moreover, they are complete in a certain sense also. They're complete in the sense that an arbitrary wave function ψ(x) can be represented as a linear combination of these u_p's. However, since the eigenvalue p is a continuous variable now, we have to use an integral instead of a discrete sum. So we write ψ(x) = ∫ dp φ(p) u_p(x), and the question is whether expansion coefficients φ(p) exist such that any ψ(x) can be represented this way. The answer is yes, because if you write this out more explicitly, it becomes ψ(x) = ∫ dp φ(p) e^{ipx/ħ}/√(2πħ), and what you recognize is Fourier's theorem, which has an inverse: φ(p) = ∫ dx e^{-ipx/ħ} ψ(x)/√(2πħ). So the expansion coefficients exist, and they're nothing but the Fourier transform of ψ, or, as we say in quantum mechanics, the momentum-space wave function. In this sense, these eigenfunctions of the continuous spectrum are complete. That's the situation, or part of the situation, with the momentum operator and its continuous spectrum.
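The discrete analogue of this expansion is the FFT (a numpy sketch; the periodic box, grid size, and wave packet are arbitrary choices, and norm="ortho" plays the role of the 1/√(2πħ) normalization):

```python
import numpy as np

# A normalizable wave packet on a periodic grid.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
psi = np.exp(-((x - np.pi) ** 2)) * np.exp(3j * x)

# "Momentum-space wave function": expansion coefficients in the plane-wave basis.
phi = np.fft.fft(psi, norm="ortho")

# Completeness: the expansion in the u_p's reconstructs psi exactly.
assert np.allclose(np.fft.ifft(phi, norm="ortho"), psi)

# Unitarity (Parseval): the norm of the state is the same in either basis.
assert np.allclose(np.sum(np.abs(psi) ** 2), np.sum(np.abs(phi) ** 2))
```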
I want to say one more thing here, which is to take this last formula and reinterpret it. Notice that φ(p) is the integral over x of e^{-ipx/ħ}/√(2πħ) times ψ(x), and if I take the √(2πħ) under the e^{-ipx/ħ}, that factor is just the complex conjugate of our eigenfunction. So let me write this as the integral of u_p(x)* times ψ(x), and in the Dirac notation that is the scalar product of the basis function with ψ. So I'll write it as φ(p) = ⟨p|ψ⟩, and let me put that in a box too, because it's interesting: the expansion coefficients, which we otherwise call the momentum-space wave function, are just the scalar products of the momentum eigenkets with the state.

Now let's do something similar with the position operator, again in one-dimensional wave mechanics. So let x̂ be the position operator, which is multiplication by x; the hat denotes the operator, and x without the hat is the number. By the way, as an aside, let me put over here some terminology that goes back to Dirac, which I'll use occasionally in this course. Dirac distinguishes between q-numbers and c-numbers, and you'll certainly hear this if you're in the quantum mechanics business very long. A q-number is just Dirac's terminology for an operator, and a c-number is his terminology for an ordinary number, that is to say, an ordinary complex number; I remember this because c stands for complex. The distinction here is that x̂ is the q-number and x is the c-number.

All right. Let x₀ be the eigenvalue and let f_{x₀}(x) be the eigenfunction, so that x̂ acting on f_{x₀} gives x₀ times f_{x₀}: x̂ f_{x₀}(x) = x₀ f_{x₀}(x). However, since x̂ just means multiplication by x, the left-hand side is the same as x f_{x₀}(x), which implies that (x - x₀) f_{x₀}(x) = 0, which implies that at each point either x = x₀ or else f_{x₀}(x) = 0. So this function must vanish everywhere except at x = x₀; it can only be non-zero at one point. That's what this is saying. The interpretation is that f_{x₀}(x) = δ(x - x₀), the delta function. The delta function is not a true function, but it can be interpreted as a limit of real functions, and in some representations of it you see something that becomes concentrated at a point.

If we take this definition of f_{x₀}(x) as the eigenfunctions of the position operator, then we can take the scalar product of f_{x₀} with f_{x₁}; for simplicity I'll write this as ⟨x₀|x₁⟩, just labeling the kets by their eigenvalues instead of writing the full functions. This is the same thing as the integral over x of δ(x - x₀)* times δ(x - x₁). Well, the delta function is real, so the star doesn't do anything, and if you do the integral you get δ(x₀ - x₁). So, comparing with what we had above, we have something entirely parallel in the position representation: ⟨x₀|x₁⟩ = δ(x₀ - x₁).

There are other simple identities too. We can write ψ(x₀) = ∫ dx δ(x - x₀) ψ(x), which is one of the defining properties of the delta function. But this can also be written as ∫ dx f_{x₀}(x)* ψ(x), which is therefore the scalar product of x₀ with ψ. And so we find that the wave function is ψ(x₀) = ⟨x₀|ψ⟩. I ask you to compare these two boxed results, this one and the momentum result above. You see that in both cases the wave function is just the set of expansion coefficients of the abstract state with respect to a basis, a basis which is an eigenbasis of some operator. This is one reason for regarding not the wave function but the abstract ket as the central object: wave functions are just different sets of components arising from different choices of basis, and a choice of basis is an arbitrary choice.
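On a grid, the position operator and its delta-function eigenvectors become completely concrete (a numpy sketch; the grid values and the test state are arbitrary):

```python
import numpy as np

# Discrete analogue of the position operator: "multiplication by x" is a
# diagonal matrix, and its eigenvectors are the grid delta functions.
x = np.array([0.0, 0.5, 1.0, 1.5])
X = np.diag(x)

f_x0 = np.array([0.0, 1.0, 0.0, 0.0])   # delta concentrated at x0 = 0.5
assert np.allclose(X @ f_x0, 0.5 * f_x0)

# Analogue of psi(x0) = <x0|psi>: the component of the state along the
# delta basis vector is just the value of the wave function at that point.
psi = np.array([0.1, 0.7, 0.2, 0.4 + 0.2j])
assert np.isclose(np.vdot(f_x0, psi), psi[1])
```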
Finally, there are a couple more relations related to this; I'll skip over the details and just summarize the results. These are resolutions of the identity, for both the momentum case and the position case. They look like this: 1 = ∫ dp |p⟩⟨p|, and also 1 = ∫ dx |x⟩⟨x|. These are the resolution-of-identity relations in the case of the continuous spectrum.

Here are another couple of remarks about the continuous spectrum. Take a single value of p, let's speak of momentum, a single point on the momentum axis, some point in the spectrum of p̂. Can we talk about the eigenspace corresponding to that momentum value? We can certainly do that in the case of a discrete spectrum, but here the answer is no, because the eigenvector corresponding to that value doesn't even belong to Hilbert space. These resolutions of the identity we have here work in the distributional sense, but the vectors in question don't lie in Hilbert space. This is a difference between the continuous spectrum and the discrete spectrum: there is no subspace of Hilbert space corresponding to a single value of the momentum. On the other hand, if I take an interval I, say from p₀ to p₁, then it does make sense to talk about the subspace corresponding to that interval. Later on I'll write down an explicit formula for this, but I think you can believe it on the basis of what you know about signal analysis. You know about band-limited signals, or filters in electronics that remove all the frequencies outside a certain band. A filter that accepts momenta lying between p₀ and p₁ and rejects everything else is actually a projection operator: the projector onto a subspace corresponding to a band in the continuous spectrum. So in the continuous spectrum you do get subspaces of Hilbert space, corresponding to intervals of the spectrum.
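The band filter really is a projector, as a discrete sketch shows (numpy; the grid, band edges, wave packet, and ħ = 1 are arbitrary choices):

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2.0 * np.pi   # grid momenta (hbar = 1)

def band_project(psi, p0, p1):
    """Keep momentum components with p0 <= p <= p1, reject everything else."""
    phi = np.fft.fft(psi, norm="ortho")
    phi[(k < p0) | (k > p1)] = 0.0
    return np.fft.ifft(phi, norm="ortho")

psi = np.exp(-((x - np.pi) ** 2)) * np.exp(5j * x)
proj = band_project(psi, 2.0, 8.0)

# A projection operator is idempotent: P^2 = P.
assert np.allclose(band_project(proj, 2.0, 8.0), proj)
```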
Let me do an example, which is the hydrogen atom. Using the usual hydrogen-atom language, let |nlm⟩ be the bound-state wave functions of the hydrogen atom. These are orthonormal: ⟨nlm|n′l′m′⟩ = δ_{nn′} δ_{ll′} δ_{mm′}. The eigenvalue depends only on n: H|nlm⟩ = E_n|nlm⟩, where, as I think you know, E_n is proportional to -1/n². The point is that it depends only on n and not on the other two quantum numbers, and this means that the eigenspaces of the hydrogen-atom Hamiltonian are degenerate. The indices l and m here play the role of the index r that I introduced earlier, in the picture that's gone from the board now; they reflect the somewhat arbitrary choice of orthonormal basis inside each of the energy eigenspaces.

However, the hydrogen atom also has positive-energy eigenstates. I'll denote those by |Elm⟩, for E ≥ 0; this is the continuous spectrum, and there's no discrete index n. These are normalized in the following way: ⟨Elm|E′l′m′⟩ = δ(E - E′) δ_{ll′} δ_{mm′}, a Dirac delta function in the energies times Kronecker deltas in the l's and in the m's.

So these are the orthonormality relations, and actually there are three of them: one for the bound states, one for the unbound states, and a third one saying that the bound states and the unbound states are orthogonal to each other, ⟨nlm|E′l′m′⟩ = 0. Three formulas; these are the orthonormality relations.

Now, here is the resolution of the identity for the hydrogen atom. We have a sum over n, l, m, running over all the bound states, of the outer product |nlm⟩⟨nlm|, and then we need an integral from zero to infinity over the energy and a sum over l and m of the outer product |Elm⟩⟨Elm|: 1 = Σ_{nlm} |nlm⟩⟨nlm| + ∫₀^∞ dE Σ_{lm} |Elm⟩⟨Elm|. That's what it looks like. I also mentioned that the operator itself can be written in terms of its eigenstates. If we do this for the hydrogen-atom Hamiltonian, all we have to do is put the energy eigenvalues under the sums, for the bound states but also for the unbound states: H = Σ_{nlm} E_n |nlm⟩⟨nlm| + ∫₀^∞ dE Σ_{lm} E |Elm⟩⟨Elm|.

Next, a theorem regarding observables in quantum mechanics; I'll write it out. Observables A and B possess a simultaneous eigenbasis if and only if they commute. This is an important theorem for quantum mechanics, because the question of commuting observables is closely related to the question of the possibility of making simultaneous measurements of two different observables, without one measurement interfering with the other; it's related to the uncertainty principle and things of that sort. Now, an eigenbasis, as I mentioned earlier, is a basis which also consists of eigenkets of the given operator. By a simultaneous eigenbasis I mean a basis whose vectors are eigenkets of both operators A and B simultaneously, so that every basis vector is an eigenvector of both A and B.
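Before the proof, here's the theorem in numerical form (a numpy sketch; the unitary Q, the eigenvalue choices, and the seed are arbitrary, with A deliberately given a degenerate eigenvalue). It follows exactly the strategy of the proof below: diagonalize A, then diagonalize B restricted to each eigenspace of A.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two commuting Hermitian operators: diagonal in the same random unitary
# basis, with A given a two-fold degenerate eigenvalue.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A = Q @ np.diag([1.0, 1.0, 4.0]) @ Q.conj().T
B = Q @ np.diag([2.0, 7.0, 3.0]) @ Q.conj().T
assert np.allclose(A @ B, B @ A)

# Diagonalize A, then diagonalize B restricted to each eigenspace of A.
a_vals, V = np.linalg.eigh(A)
basis = V.copy()
for a in np.unique(np.round(a_vals, 8)):
    idx = np.where(np.isclose(a_vals, a))[0]
    B_restricted = V[:, idx].conj().T @ B @ V[:, idx]
    _, W = np.linalg.eigh(B_restricted)
    basis[:, idx] = V[:, idx] @ W

# Every column of the result is an eigenvector of both A and B.
for u in basis.T:
    assert np.allclose(A @ u, np.vdot(u, A @ u) * u)
    assert np.allclose(B @ u, np.vdot(u, B @ u) * u)

# And, echoing the theorem from the start of the lecture, the product of two
# commuting Hermitian operators is itself Hermitian.
assert np.allclose(A @ B, (A @ B).conj().T)
```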
How do we see that this theorem is true? Probably the easiest way is to go back to this picture of eigenspaces. Let's say these are now the eigenspaces of A; I'm only going to concentrate on one of them, ℰ_n. Here's the idea. Suppose that A acting on a ket |u⟩ gives a_n|u⟩. That means the ket |u⟩ lies inside the eigenspace ℰ_n: when A acts on it, all it does is multiply it by a number, so it may get longer or change its direction a bit, but it stays inside the space. All right, now what if we let the operator B act on a vector that lies in this space ℰ_n? What can we say about that? In particular, could B possibly lift the vector out, so that it sticks out of this space somehow? That's the question. Well, if I take B|u⟩ and let A act on it, that is of course the same thing as AB acting on |u⟩; but because B and A are assumed to commute, that's the same thing as BA acting on |u⟩, and then the A acts on |u⟩ and brings out the eigenvalue a_n, which is just a number. So this can be written as a_n times B|u⟩. The result is that the vector B|u⟩, you see, is an eigenvector of A with the eigenvalue a_n, which means it belongs to the space ℰ_n. So the picture I drew, with B|u⟩ sticking out, is wrong: instead, B|u⟩ has to be some vector that is also inside this space.

This is an example of what we call an invariant subspace. We say that the eigenspace of the operator A is an invariant subspace of the operator B, because B carries any vector in it into another vector in the same space. The space is also an invariant subspace of A itself, because A acts on an eigenvector in this space by just multiplying it by a scale factor, which certainly keeps it in the space. So the eigenspace of A is actually an invariant subspace of both operators, if they commute.

Now, if you have an invariant subspace of an operator, you can define what's called the restriction of the operator to the subspace. If the subspace were not invariant under the operator, it wouldn't make any sense to talk about a restriction, because if I've got a vector in the subspace and the operator carries it out of the subspace, then it doesn't make sense to speak of the operator restricted to that subspace. But anyway, we do have here an invariant subspace of both A and B. Let's call the restrictions Ā and B̄, with a bar over them to indicate restriction to this space. The restriction of A is actually quite easy: it's just the eigenvalue multiplied times the identity, Ā = a_n·1. The restriction B̄ is not so simple to write down, but one thing we can say about it is that it's Hermitian. And so B̄, being Hermitian on this subspace, possesses an orthonormal eigenbasis on this subspace. So there's a basis of eigenvectors of B inside ℰ_n, which is orthonormal, and that basis is obviously an eigenbasis of A as well, since every vector in ℰ_n is an eigenvector of A. Doing this in each eigenspace, you therefore get a simultaneous eigenbasis of the two operators.

Okay, well, that's all I'm going to do for today. To remind you again: next time we meet in 325 LeConte, not here.