 are new here, and I'm sorry about the early hours. I really should apologize for Physics 209, which was the previous course. Of course, you might think you should blame that on Martin instead of me, but since I did the course scheduling assignments, I'm the one that put him in the eight o'clock slot, so it's really my fault. And I'm sorry about the eight o'clock hour, but it really works out best for the scheduling of GSI assignments and things of that sort to hold the lectures at the times we have. Professors hate it as much as the students do, I think. All right, so this is the graduate course in quantum mechanics in the physics department, and my name is Robert Littlejohn. My office is 449 Birge, which I'll put down here also. First of all, I want to say that if you received an email from me yesterday, it's because you were already enrolled in the course, and if you read that email, you'll know that it directs you to this website, which is the website for the course. If you did not receive an email from me yesterday, then please copy down that web address, go to that website, and make sure that you can access it. Do it right away, because it's important to get organized. And read what's on the website, because it contains the organizational and logistical information about the course, things like the grading policy, which I'm not going to go over in lecture. That'll be the place where homework assignments will appear, and so on. You really need to know about that if you're going to take the course. Another thing is that I'm maintaining an email mailing list for the course, on which I hope to have all of your email addresses, which I'll use to send you information if there's some emergency, or if I'm sick and can't come to lecture, or if there's an error in the homework and you didn't know about it; I'll send it to you on that email mailing list. 
So if you're taking the course for credit, or even if you're auditing, you can sign up for this email mailing list; I don't care. But if you're taking the course, you certainly want to be on the mailing list. Now again, if you received the email from me yesterday, you don't need to do anything because you're already on it. But if you did not receive an email, then please send an email to this address, physics221, invigorated.berkeley.edu, and just say, please put me on the email mailing list. You could also use that email address to ask me questions, but don't rely on it too much. I mean, don't think that an hour before the homework is due you can send me a question and you'll get an answer, because I don't necessarily answer that quickly. It's better to make use of the office hours and the discussion sections for that purpose. Okay, so please make sure you're on the email mailing list. If you drop the course and don't care about it anymore, send an email to that address saying, remove me, and I'll take you off. All right, so that's all I want to say about organizational and logistical matters. This is a two-semester course. The first semester, 221A, requires as a prerequisite a full year of undergraduate quantum mechanics. If any of you have not had that, you basically don't have my permission to take the course unless you come see me to get that permission. So please do so if you haven't had a full year of undergraduate quantum mechanics. Also, if you're an undergraduate, I'd like you to come ask my permission; I just need to talk to you to make sure you know what you're getting yourself in for. All right, one of the purposes of 221A is to go back over this full year of quantum mechanics that you've had already and to organize it in a more logical and consistent way, paying attention to fundamental principles that maybe you didn't have time for before. That's part of the purpose. 
Part of the purpose also is to review the material, and we'll introduce some new material this semester too. So as part of this program, I want to begin with my first lecture, in fact the first several lectures, on the mathematical methods or mathematical formalism of quantum mechanics. This will be most of the mathematics we'll cover this semester; later on we'll have to come back and do a little more, but most of it is in this first week. The mathematics mainly deals with vector spaces, in particular Hilbert spaces. A Hilbert space is a particular kind of vector space, and you can read the official definition of a Hilbert space in math books; I won't go into that. Let me just say that the spaces of wave functions which occur in quantum mechanics actually are Hilbert spaces, and so we'll call them that. In any case, I want to begin with the mathematics of Hilbert spaces. Now, to get you oriented, I'm going to call on your experience with wave functions and things like that; I assume you know quite a bit already. So to begin with, let's talk about wave functions in a one-dimensional problem. Let's see, I want to have this center board. This is the center board, yeah, okay. So let's begin by talking about wave functions ψ(x) in a one-dimensional problem. You know pretty much what this is. You know in particular that the square of the wave function, |ψ(x)|², is the probability density for measurements of the position of the particle. Likewise, I hope you know that there's a momentum-space wave function φ(p), and it can be given by what's essentially a Fourier transform of the position-space, or configuration-space, wave function as it's called at the top: an integral of e^(-ipx/ħ) times ψ(x). 
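The Fourier-transform relation between ψ(x) and φ(p) can be checked numerically. Here is a minimal sketch with NumPy, assuming units with ħ = 1 and a Gaussian wave packet of my own choosing for illustration (neither appears in the lecture):

```python
import numpy as np

hbar = 1.0      # assumption: work in units where ħ = 1
sigma = 1.0

# Position-space Gaussian wave packet, normalized so ∫ |ψ(x)|² dx = 1.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

# Momentum-space wave function φ(p) = (2πħ)^(-1/2) ∫ e^(-ipx/ħ) ψ(x) dx,
# computed by direct quadrature on a grid of p values.
p = np.linspace(-10, 10, 801)
dp = p[1] - p[0]
phi = np.array([np.sum(np.exp(-1j * pj * x / hbar) * psi) * dx for pj in p])
phi /= np.sqrt(2 * np.pi * hbar)

norm_x = np.sum(np.abs(psi)**2) * dx   # ≈ 1
norm_p = np.sum(np.abs(phi)**2) * dp   # ≈ 1: the transform preserves the norm
print(norm_x, norm_p)
```

The invertibility of the transformation shows up here as conservation of total probability (Parseval's theorem): both norms come out to 1.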
And this also has a square, |φ(p)|², which represents physically the probability density in momentum space for measurements of momentum. So now there are two wave functions, ψ(x) and φ(p), and they're related to one another by an invertible transformation, which is really a Fourier transform. Now, let's suppose in addition that we've got some energy eigenfunctions, say H u_n(x) = E_n u_n(x) for some problem, and let's suppose for simplicity that they're discrete and nondegenerate. Then you know you can take an arbitrary wave function ψ(x) and write it as a linear combination of the energy eigenfunctions, with expansion coefficients I'll call c_n. And you know that the expansion coefficients are given by an integral over x of u_n(x), complex conjugated, times ψ(x). These are linear transformations that convert you back and forth between ψ and the coefficients c_n. Well, these coefficients form a sequence c_1, c_2, and so on, and this sequence of coefficients can be thought of as, quote unquote, the wave function in energy space. Position space, momentum space, energy space: there are three different types of wave functions. Moreover, the squares of these coefficients, |c_n|², and I hope you know this, are interpreted as the probabilities, when you make a measurement of energy, of getting the eigenvalue E_n associated with that coefficient. So what we have here are three different wave functions which describe the same physical system, the same state of the physical system. And from a certain point of view, these are all equivalent descriptions. 
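The expansion in energy eigenfunctions can be illustrated numerically. A sketch assuming the infinite square well on [0, L], my own choice of example, whose eigenfunctions u_n(x) = √(2/L) sin(nπx/L) are discrete and nondegenerate:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2001)
dx = x[1] - x[0]

def u(n):
    # Energy eigenfunctions of the infinite square well (illustrative choice).
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# Build a state as a known linear combination, then recover the c_n
# from the overlap integral c_n = ∫ u_n(x)* ψ(x) dx.
psi = 0.6 * u(1) + 0.8j * u(2)
c = np.array([np.sum(np.conj(u(n)) * psi) * dx for n in range(1, 6)])

print(np.round(c, 3))        # ≈ [0.6, 0.8j, 0, 0, 0]
print(np.sum(np.abs(c)**2))  # ≈ 1: the |c_n|² sum to unit probability
```

The recovered coefficients match the ones the state was built with, and their absolute squares sum to 1, as probabilities must.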
This is certainly true from the mathematical point of view, because all the questions one wants to ask in a physical sense about a quantum system ultimately boil down to probabilities of measurements, and all of these calculations can be done with any one of these three wave functions, or as we say, representations: the position, momentum, and energy representations of the wave function. So from this strictly mathematical standpoint, there's no reason to think that any one of these wave functions is privileged or special over any of the others. Now, there's a psychological bias to think that the ψ(x) wave function is the privileged one, that it's somehow better than the others, or that it's the starting point. Why is this so? It's partly because we're used to thinking about functions in physical space, scalar and vector fields in physical space: electric fields, the pressure of fluids, things of that sort, things that we can measure in physical space. So why isn't ψ like that? Well, when Schrödinger was starting to write down the Schrödinger equation, that's how he thought of ψ, as a physical field in physical space. However, it's pretty clear that that's an interpretation that doesn't hold up, because physical space is not the same as configuration space. Configuration space is basically x space. Is it the same as physical space? Not in general. Why? Because you may have a wave function for two particles, in which case the wave function depends on the positions of both particles. So it's not a wave function on physical space; it's really a wave function on two copies of physical space. Configuration space, you see, is not the same as physical space. So that's one difference. 
Another difference is that if you get down to actually thinking about measuring a wave function ψ(x), say for a single particle in physical space, and we'll talk about that later in the course, you'll find that the measurement process is really very different from the kinds of measurements you would make of an electric field or the pressure in a fluid, where you can just go to a point and measure the number. It's not so simple with ψ. Certainly it's not hard in principle to measure the square of ψ, because it's a probability density: make repeated measurements and just count how many particles lie in a small interval, as you do for any probability. But finding the phase of the wave function is a different matter; it's much more subtle. There are all kinds of subtleties that don't occur in the measurement of classical fields. So for many reasons, it's best not to think of ψ(x) as being a field in physical space, but rather as a function on x space, or configuration space. All right. Now, with that background in mind, it's desirable in developing the formalism of quantum mechanics to use a notation which does not prejudice the choice amongst these observables. By the way, these three wave functions are associated with three different observables: position, momentum, and energy in this case. And one can come up with other observables, angular momentum and anything else you want to measure, and each one of them has its own wave function. And these are all equivalent descriptions of the same quantum reality. 
So, in view of this democracy, or covariance, of quantum mechanics under changes of what we call representation, position, momentum, energy and other representations, and these covariance principles were worked out in the very early days of quantum mechanics, notably by von Neumann but also by others, Dirac and so on, it's desirable to have a notation for describing the states of systems that doesn't prejudice the choice. That's what the Dirac notation does, and that's what I want to talk about now. So in the Dirac notation, we speak of kets, also called ket vectors because they are vectors, and a ket is denoted by this kind of notation, |·⟩, which is supposed to stand for any one of those wave functions up there. It's customary to insert into a ket an identifying symbol such as ψ, so I'll put a ψ in there: |ψ⟩. It doesn't mean that it stands for ψ(x); it really stands for all of those wave functions. But it's an identifying symbol for the ket. And the ket vector belongs to a vector space, which I'll denote by a script E. It's a complex vector space, which is called the Hilbert space. And as I say, I won't bother you with the official mathematical definition of a Hilbert space, although there is one, but rather let's just remark that Hilbert spaces in quantum mechanics are the spaces of wave functions, and that'll be enough for us. That's how we'll use the terminology. Anyway, a ket is a vector in this space. Now then, just a few words about the physical postulates of quantum mechanics; we'll get into this in more detail in about a week. 
The physical postulates of quantum mechanics assert, in the formulation I'll present in this course, that corresponding to every physical system there is a complex vector space, a Hilbert space, which is associated with the system, and there are ways of knowing what the Hilbert space is based on the kinds of measurements you can make on the system. We'll go into this in more detail later on, but in particular there are physical principles that tell you what the dimensionality of the space should be. You'll find, for example, that for spin systems the space is finite-dimensional, and for systems that include a position variable it's infinite-dimensional. In any case, Hilbert spaces may be infinite-dimensional. To say that this is a complex vector space means, mathematically speaking, that you can form linear combinations of vectors with complex coefficients. That's what a complex vector space means; a real vector space means you're restricted to real coefficients when you make linear combinations. So in particular, it's meaningful to talk about linear combinations of ket vectors like c1|ψ1⟩ + c2|ψ2⟩. This also belongs to the Hilbert space E, where c1 and c2 are complex numbers, which we indicate by writing c1, c2 ∈ ℂ, using the mathematical symbol for the complex numbers. All right, now, crudely speaking, this is similar to the idea of linear superposition of electric fields in physical space. But remember, the wave function really isn't in physical space, so this is something else; but it is certainly linear superposition of vectors. Moreover, there are complex coefficients, and you might ask, why do we need complex numbers? Why can't we do it with just real numbers? We'll see later on, when we study spin systems, that it's impossible: you can't satisfy the physical postulates of quantum mechanics with only real numbers. 
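In a finite-dimensional Hilbert space this is easy to make concrete: a ket is a complex column vector, and superposition is ordinary vector addition with complex coefficients. A sketch (the two-dimensional space and the coefficients are my own illustrative choices):

```python
import numpy as np

# Basis kets of a two-dimensional (spin-1/2-like) Hilbert space.
ket1 = np.array([1.0, 0.0], dtype=complex)
ket2 = np.array([0.0, 1.0], dtype=complex)

# A complex superposition c1|ψ1⟩ + c2|ψ2⟩ is again a ket in the same space.
c1, c2 = (1 + 1j) / 2, (1 - 1j) / 2
ket = c1 * ket1 + c2 * ket2

print(ket)                     # [0.5+0.5j 0.5-0.5j]
print(np.vdot(ket, ket).real)  # 1.0: this particular combination is normalized
```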
Okay, now, people that study chemistry sometimes lose sight of this, because most of the problems in chemistry involve real wave functions, so they have real coefficients; but actually the coefficients have to be allowed to be complex. All right, so this is the beginning of the ket notation and the ket vectors. By the way, the physical postulates, which I'll have to speak about later on, describe how the physical state of a system can be associated with a given ket vector. I'll just say one thing about that now, however, which is that the physical state is actually not associated with a ket vector, but rather with what we call a ray. It's associated with a ray in the Hilbert space E. What is a ray? It's not quite the same as a vector. If you mark the origin in the vector space, call it O, and I draw a vector here, our ket vector |ψ⟩, then the ray is the set of all vectors that can be obtained by multiplying |ψ⟩ by any complex number. You can think of it as all the vectors that lie in the one-dimensional subspace spanned by |ψ⟩: the magnitude gets scaled, and the phase factors get scaled too. So let's put it this way: the state is a ket vector modulo overall normalization and phase, which don't have any physical significance. The physical state is independent of normalization and phase. In any case, the physical state corresponds to a whole ray, a one-dimensional subspace of the vector space. All right, so that's a little bit about ket vectors. Now, you're no doubt roughly familiar with the idea that a ket like |ψ⟩ is somehow supposed to correspond to a wave function ψ(x); that's the idea. Before I go into that, I want to say that the program I'm trying to follow here is to start with the physical postulates of quantum mechanics, which lead to ket vectors, without talking about wave functions. 
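That the physical state is a ray rather than a single ket can be seen numerically: rescaling the ket by any nonzero complex number leaves every normalized probability unchanged. A sketch with an arbitrary two-component state of my own choosing:

```python
import numpy as np

psi = np.array([0.6, 0.8j], dtype=complex)
lam = 2.5 * np.exp(1j * 0.7)   # arbitrary nonzero magnitude and phase
psi_scaled = lam * psi          # a different ket on the same ray

def probabilities(ket):
    # Probability of each basis outcome: |⟨n|ψ⟩|² / ⟨ψ|ψ⟩.
    return np.abs(ket)**2 / np.vdot(ket, ket).real

print(probabilities(psi))         # [0.36 0.64]
print(probabilities(psi_scaled))  # identical: same ray, same physics
```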
In effect, what I'm going to do is develop the mathematics of ket spaces and then later derive wave functions from the physical postulates. So I won't talk about wave functions at first, except for purposes of illustration, as I did above, since you know about wave functions already. Nevertheless, behind the scenes we're thinking of ket spaces as wave function spaces. And I'm sure you know that a ket is supposed to correspond somehow to a wave function, and I suppose you also know that bras, which look like ⟨ψ|, are supposed to correspond to the complex conjugates of wave functions. So I want to introduce the subject of bras now. The question is: if I don't talk about wave functions, how can I talk about a bra, which is supposed to correspond to a complex conjugate? The answer is the following. A bra, or bra vector, because a bra also is a member of a vector space, is written in Dirac notation like this: ⟨α|. It's kind of a reverse of a ket. And as in the case of a ket, it's convenient to put in an identifying symbol for the bra, just to label it and distinguish it from other bras. And what is a bra vector? A bra vector is a map, a linear map as a matter of fact, that takes us from the ket space over to the complex numbers. What that means is that if I have a bra which I call α, it's something that can act on kets, and what it produces is a complex number. You could write it as α(|ψ⟩) if you want, but we usually don't write it that way because there are too many parentheses. We usually just drop the parentheses and write it as ⟨α|ψ⟩. So this is just the value of the linear map α acting on the ket vector |ψ⟩. And then, I guess I did this in the wrong order; I was supposed to do it another way. All right, let's do it this way. 
It's pretty easy to see that bras are linear maps. By the way, you're used to linear operators like the Hamiltonian and the momentum and so on in quantum mechanics. Bras are not operators like that. Those usual operators, like Hamiltonians, act on kets and produce another ket: they map kets into kets. A bra is a linear map that takes kets into the complex numbers; it's a complex-valued linear map, a kind of simpler linear object, actually. Now, bra vectors form their own vector space. The set of all bras forms its own vector space. It's easy to see how to multiply a bra by a scalar: you just multiply the answer by the scalar. And it's easy to see how to take linear combinations of bras: you just add up the answers for the different bras. Let me be explicit about this. c1⟨α1| + c2⟨α2| is a linear combination of bras, and if we allow it to act on a ket, the answer is c1⟨α1|ψ⟩ + c2⟨α2|ψ⟩. So this parenthesized combination is another bra, and indeed they form a vector space. The set of bras is sometimes called the dual space to the original ket space, and it's denoted E*, with the star indicating that it's the space of bras instead of the space of kets. If you were doing wave functions, you would say, oh, a complex conjugated wave function is just a wave function, so it belongs to the same space as the original space of wave functions. That's a simple point of view. But in fact, it turns out to be deeper and more convenient, ultimately, to regard this as a separate space, which we call the dual space. This still doesn't tell us what the analog of the complex conjugation operation is. How do we complex conjugate a ket? Well, we don't complex conjugate a ket, but we do something similar, and it works like this. 
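Numerically, a bra in a finite-dimensional space is just the conjugated row vector, and its defining linearity can be checked directly. A sketch (the vectors and coefficients are illustrative choices of mine):

```python
import numpy as np

alpha = np.array([1.0, 1j], dtype=complex)  # a ket used to build the bra
bra_alpha = alpha.conj()                     # ⟨α| as a conjugated row vector

def apply_bra(bra, ket):
    # ⟨α|ψ⟩: the bra is a linear map from kets to complex numbers.
    return np.dot(bra, ket)

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0], dtype=complex)
c1, c2 = 2.0, 3j

# Linearity: ⟨α| acting on c1|ψ1⟩ + c2|ψ2⟩ equals c1⟨α|ψ1⟩ + c2⟨α|ψ2⟩.
lhs = apply_bra(bra_alpha, c1 * psi1 + c2 * psi2)
rhs = c1 * apply_bra(bra_alpha, psi1) + c2 * apply_bra(bra_alpha, psi2)
print(lhs, rhs)   # both (5+0j)
```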
It relies on the fact that the ket space, that is to say the Hilbert space, possesses a scalar or inner product, a way of taking scalar products of vectors. And this is related to the metric, which I'll tell you about now. The metric is a function, which I'll call g, and what it does is take two copies of the ket space and map them into the complex numbers: g : E × E → ℂ. This is map notation; if you're not used to it, what it means is simply that g acts on two kets, which I'll call ψ and φ, and what it produces is a complex number, g(ψ, φ), out of the pair of vectors. And this metric g is supposed to have certain properties, which I'll try to list here, in case I run out of space. The first property is really two properties. One, call it 1a, is that g is linear in the second operand, the one I'm calling φ. What that means is that if you let g act on (ψ, φ), and you replace φ by a linear combination c1 φ1 + c2 φ2, then you can expand it out just by linearity: it becomes c1 g(ψ, φ1) + c2 g(ψ, φ2). That's the linearity. 1b is the second property, which is that g is antilinear in the first operand. I'll tell you now what antilinear means: if I replace the first operand by a linear combination, instead of the second one, then we have g(c1 ψ1 + c2 ψ2, φ). What is that equal to? Well, it's equal to c1* g(ψ1, φ) + c2* g(ψ2, φ), with the coefficients complex conjugated. The term antilinear, which we'll encounter once in a while in this course, means that when you apply something to a linear combination with complex numbers as coefficients, then when you expand it out you have to take the complex conjugates of the coefficients. 
Anyway, g is linear in the second operand and antilinear in the first. So those are two properties. Let me try to put the other properties over here. The second property is called the symmetry property of g, and it says that if I take g(ψ, φ), a pair of kets, the answer is g(φ, ψ) in the reverse order, which is a complex number, but complex conjugated: g(ψ, φ) = g(φ, ψ)*. This is the symmetry property. And the third property is called the positive-definiteness property, and it says this: if I take g and let it act on two kets which are identical, the same ket in both slots, well, first of all, before I write anything out, notice that by property number 2 the answer has to be real, because it's equal to its own complex conjugate. So g(ψ, ψ) is a real number, and in fact it's always greater than or equal to 0, for all states ψ in the Hilbert space. And it equals 0 if and only if the ket ψ itself is equal to 0: the metric with the same vector inserted in both slots vanishes if and only if the vector is 0. All right, so those are the three properties of the metric. A metric on a real vector space is a way of doing dot products of vectors, and in particular, when you use the metric to take the scalar product of a vector with itself, you get the square of its length. That's how it is in a real vector space with the Euclidean metric, which satisfies the real analog of these three properties. So roughly speaking, this g(ψ, ψ), which is a non-negative number, is interpreted in quantum mechanics as the squared length of the ket ψ. Length, quote unquote: it's a geometrical analogy or metaphor that gives us the geometry of these complex spaces. And in fact, we're going to develop this ket formalism and then later derive wave functions from it, not treat wave functions as the starting point. Yes, all right. 
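All three properties of the metric can be verified numerically; for complex NumPy vectors, `np.vdot` conjugates its first argument, matching the antilinear-in-the-first-slot convention used here. A sketch with random vectors of my choosing:

```python
import numpy as np

g = np.vdot   # g(ψ, φ): conjugates the first argument, linear in the second

rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
c1, c2 = 2 + 1j, -1 + 3j

# 1a: linearity in the second operand.
assert np.isclose(g(psi, c1 * phi + c2 * psi),
                  c1 * g(psi, phi) + c2 * g(psi, psi))
# 1b: antilinearity in the first operand (coefficients conjugated).
assert np.isclose(g(c1 * psi + c2 * phi, phi),
                  np.conj(c1) * g(psi, phi) + np.conj(c2) * g(phi, phi))
# 2: conjugate symmetry, g(ψ, φ) = g(φ, ψ)*.
assert np.isclose(g(psi, phi), np.conj(g(phi, psi)))
# 3: positive definiteness, g(ψ, ψ) > 0 for a nonzero vector.
assert g(psi, psi).real > 0
print("all three metric properties hold")
```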
Now, next I need to tell you about something which in Dirac's formalism is called the dual correspondence. Actually, it's fancy language for complex conjugation, which you know about already, but I'll tell you how it works. The idea is this: let's consider the metric acting on two kets, which, as I say, has the interpretation of the scalar product of the two kets. And let's regard the first one as being fixed and the second one as being variable. Then this can be regarded effectively as a function of just a single ket, the ket φ. Moreover, it's linear in the ket φ, because that's the second operand, and it's complex-valued. So this is a complex-valued linear map acting on the ket φ, and so by our definition of a bra, it's actually a bra vector, a bra acting on φ. So g(ψ, φ) is some bra acting on φ, by the definition of a bra. And what is the bra? Well, the bra is a linear map, clearly, and it's associated with the ψ that we fixed. So what we do is write this as, and this is a definition now, g(ψ, φ) = ⟨ψ|φ⟩; this is the definition of the bra ⟨ψ|. We started with a given ket |ψ⟩, and this associates with it a bra ⟨ψ|. So what this formula does is define a mapping that takes us from a ket |ψ⟩ over to a bra ⟨ψ|; this is what's called the dual correspondence. The definition of the dual correspondence is this formula. In fact, after a while, we'll stop using the g notation and end up just writing it as ⟨ψ|φ⟩; it means the same thing. All right. Instead of writing DC, there's a more common notation, which is the dagger notation. That is to say, the answer, which is the bra here, we write as the ket |ψ⟩ with a dagger on it: ⟨ψ| = (|ψ⟩)†. The dagger is Hermitian conjugation, which is here being defined in its action on ket vectors. 
And what it does is convert ket vectors into bra vectors, which are defined by the formula at the top. And this dual correspondence, you see, is a mapping that takes you from the ket space E over to the dual space E*. Did I mention before that the dual space E* is the space of bras? So it maps the kets to the bras. If you're working on a space of finite dimensions, and you have these postulates for the metric g, it's easy to show that the dual correspondence is invertible: the dimension of the bra space is the same as that of the ket space, and moreover, every bra is in fact the image of some ket under this dagger operation. So the kets and the bras are placed into one-to-one correspondence, and therefore the dual correspondence has an inverse that takes you from bras back to kets. This inverse is traditionally denoted also by the same dagger notation. So we can say that if I have a bra ⟨ψ|, there's a ket |ψ⟩ = (⟨ψ|)†, which I get by applying the inverse of the dual correspondence; I just use the same dagger notation for it. It's really the inverse operation, but the notation doesn't cause any trouble, because if you apply the dagger twice you get the identity: it's a consequence of this that if I dagger |ψ⟩ twice, I just get |ψ⟩ back. Well, in effect, we've now defined the dagger notation on both kets and bras, and it just amounts to switching back and forth between one and the other. The dual correspondence is antilinear: if I take a linear combination of kets and form the dagger of it, what I get is (c1|ψ1⟩ + c2|ψ2⟩)† = c1*⟨ψ1| + c2*⟨ψ2|. This follows just from the fact that g is antilinear in its first operand; that's what makes the dual correspondence an antilinear operation. A consequence of this structure that I've developed so far is the Schwarz inequality, which I'll tell you about now. 
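The antilinearity of the dual correspondence is easy to check numerically: daggering a ket (conjugating the column vector) conjugates the coefficients, and daggering twice is the identity. A sketch with illustrative vectors of my choosing:

```python
import numpy as np

def dagger(v):
    # For 1-d arrays, complex conjugation implements the dual correspondence
    # in both directions: ket → bra and bra → ket.
    return v.conj()

psi1 = np.array([1.0, 2j], dtype=complex)
psi2 = np.array([1j, -1.0], dtype=complex)
c1, c2 = 2 + 1j, 3 - 2j

# Antilinearity: (c1|ψ1⟩ + c2|ψ2⟩)† = c1*⟨ψ1| + c2*⟨ψ2|.
lhs = dagger(c1 * psi1 + c2 * psi2)
rhs = np.conj(c1) * dagger(psi1) + np.conj(c2) * dagger(psi2)
print(np.allclose(lhs, rhs))                     # True
print(np.allclose(dagger(dagger(psi1)), psi1))   # True: dagger twice is the identity
```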
The Schwarz inequality is important in quantum mechanics because it leads to the Heisenberg uncertainty relations, as we'll see a little later. If you apply the Schwarz inequality in a real vector space, you'll find that it's equivalent to the statement that the shortest distance between two points is a straight line. And in some sense, even in complex vector spaces, you can think of that as being the geometrical meaning of the Schwarz inequality. In any case, here's how it's stated. The Schwarz inequality says that if you take the scalar product of a φ with a ψ, that's a complex number, and take its absolute value squared, this is less than or equal to the scalar product of φ with itself times that of ψ with itself: |⟨φ|ψ⟩|² ≤ ⟨φ|φ⟩⟨ψ|ψ⟩. Both of the numbers on the right-hand side are real and non-negative, so their product is real and non-negative. And the statement is that the absolute square of the scalar product is no bigger than the product of the squared lengths of the two vectors; we can think of ⟨ψ|ψ⟩ as the squared length of the vector ψ, a real, non-negative number. I won't use the Schwarz inequality yet, but at least let me prove it. It works like this. Oh, and by the way, there's an addition to this, too, which is that you get an equal sign, instead of an inequality, if and only if the kets ψ and φ are linearly dependent. So that's the Schwarz inequality. The proof, which I'll outline quickly, goes something like this. Let's define a ket α equal to ψ plus a complex number λ times the ket φ; we take a linear combination of the two kets, in other words. And we form the scalar product of α with itself, which, as you can see, is going to be ⟨ψ|ψ⟩ + λ⟨ψ|φ⟩ + λ*⟨φ|ψ⟩ + |λ|²⟨φ|φ⟩. 
And because this is the scalar product of a ket with itself, the result has to be greater than or equal to 0, by property number, well, I can't read it from here, it's number 3, isn't it? Number 3, yes. By property number 3, this is greater than or equal to 0. Now, λ is any complex number here, and it turns out, if you play with this for a while, that an interesting choice is to let λ equal minus the scalar product of φ with ψ divided by the scalar product of φ with itself: λ = -⟨φ|ψ⟩/⟨φ|φ⟩. I'll let you do the algebra. If you plug this into these four terms here, what you find is that the last three terms are all equal in magnitude, except the middle two have a minus sign: it goes minus 1, minus 1, plus 1. And the result is this: ⟨α|α⟩ = ⟨ψ|ψ⟩ - |⟨φ|ψ⟩|²/⟨φ|φ⟩, which is greater than or equal to 0. And just by multiplying through by the positive number ⟨φ|φ⟩, you can see you get the Schwarz inequality in its first form. There's a little extra work to do to show the second part, that you get an equal sign if and only if the kets are linearly dependent; I'll refer you to the notes to see how that works out. So that's a result we can derive at this point. Now, next, let me move on and say something about linear operators. Let me use this backboard; I'd like to keep both of the others visible, and that's the reason I can't use the third board. You're not supposed to be able to raise two of them at once, apparently, so I'll just have to alternate back and forth. Next, I want to say something about linear operators in quantum mechanics. Here I'm referring to the kinds of things, like the momentum and the Hamiltonian, that we're familiar with. 
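The proof just outlined can be checked numerically, including the choice λ = -⟨φ|ψ⟩/⟨φ|φ⟩ and the equality case for linearly dependent kets. A sketch with random complex vectors of my choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

# The Schwarz inequality: |⟨φ|ψ⟩|² ≤ ⟨φ|φ⟩⟨ψ|ψ⟩.
lhs = abs(np.vdot(phi, psi))**2
rhs = np.vdot(phi, phi).real * np.vdot(psi, psi).real
assert lhs <= rhs

# The optimal λ from the proof makes ⟨α|α⟩ equal the slack in the inequality.
lam = -np.vdot(phi, psi) / np.vdot(phi, phi)
alpha = psi + lam * phi
slack = np.vdot(alpha, alpha).real
assert np.isclose(slack, np.vdot(psi, psi).real - lhs / np.vdot(phi, phi).real)

# Equality holds for linearly dependent kets, e.g. ψ = 2i φ.
dep = 2j * phi
print(np.isclose(abs(np.vdot(phi, dep))**2,
                 np.vdot(phi, phi).real * np.vdot(dep, dep).real))  # True
```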
So a linear operator is a mapping that takes the ket space to itself. And it's supposed to be linear, so that it has the obvious distributive law over linear combinations of ket vectors. It's also possible to talk about anti-linear operators, where you have to take the complex conjugates of the coefficients when you have linear combinations. I'm not going to say much about anti-linear operators now, because most of the operators we deal with are linear. There is one important anti-linear operator in quantum mechanics, which is time reversal, and we'll cover that later in the course. But don't worry about anti-linear operators until we get to them. So for now, let's just talk about linear operators, which covers most of the cases; Hamiltonians, et cetera, are linear operators. Now, one thing to say about linear operators, if you've got one, is that a linear operator acts on kets. That's where they begin life. Can we also talk about linear operators acting on bras? Is this possible? I mean, if we do it, it isn't really the same operator, but I'll call it that anyway. Can we define this? And the answer is yes. It works like this. The idea is that if I have L acting on a ket psi, this is defined, because L is a linear operator acting on kets. And what I'd like to do now is ask: what does it mean to let the linear operator act on a bra? Well, if the linear operator acts on a bra, it's supposed to produce another bra. So this thing, bra psi times L, is a bra, and therefore it's a linear map that takes kets into complex numbers. So I should be able to let this thing act on a ket and get a complex number. And in fact, I can define this bra by just saying what the complex number is, and then verify that it's in fact linear. Those are the things I need to do. Well, here's the simple answer: the complex number is going to be the bra psi acting on the ket L phi.
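This "act to the left or to the right" rule can be checked in a small numerical sketch, again with matrices standing in for operators (the dimension and seed are mine, not the lecture's): acting on the bra means multiplying the conjugated row vector from the right.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# a generic linear operator as an n x n complex matrix
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# acting to the right: <psi| (L|phi>)
right = np.vdot(psi, L @ phi)

# acting to the left: (<psi| L) |phi>, the bra being the conjugate row vector
left = (psi.conj() @ L) @ phi

# the parentheses don't matter, by associativity of matrix multiplication
assert np.isclose(right, left)
```

In this finite-dimensional picture the rule is just associativity, which is why the parentheses get dropped in the notation.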
Because remember, we know what L does to kets. So this equation, in effect, becomes the definition of the bra psi L. I'll make this a triple equal sign here, because that means definition; here it means definition, there it doesn't. So this is the definition of L acting on the bra psi. As we say, here L acts from the right onto a bra, and there it acts from the left onto a ket. Now, one of the consequences of this definition is that the parentheses don't matter, so it's customary to drop them and just write this as psi L phi. But lying behind this is the understanding that you can let L act either to the right or to the left, and it won't make any difference in the answer. This is what you call a matrix element of the linear operator L. Now let me tell you about the outer product. Let's suppose we're given two kets, which I'll call alpha and beta. You don't have to use Greek letters to label kets, but that's mostly what I'm doing here. So, given two kets alpha and beta, I want to define a linear operator, called the outer product of alpha and beta, which is written this way: it looks like a ket juxtaposed with a bra. This is supposed to be a linear operator, and it's called the outer product of the kets alpha and beta. And how do we define it? Well, it's a linear operator, which means we can let it act on any ket, and we should get another ket. And what will the answer be? It will be this: ket alpha times the scalar product of beta with psi. In other words, you can say the definition of the outer product amounts to just reading these three symbols, juxtaposed, with a different ordering of the parentheses. You do it like this, or you do it like this. On the right-hand side, you see, this now becomes a complex number multiplying a ket.
On the left-hand side, it's an operator acting on psi. This defines the operator. So that's the outer product. The outer product is essentially the same as what people call the tensor product, or the dyadic product, if you've ever studied tensor analysis. Now I want to say a little bit about basis vectors. You know from linear algebra that a basis is a set of vectors that span the space and are linearly independent. So in a ket space, let's suppose we're talking about a discrete basis: a set of kets, which I'll label n, indexed by n equals 1, 2, 3, and so on. The number of such vectors is the dimensionality of the space, which could be infinite, because ket spaces are oftentimes infinite-dimensional. The basis may or may not be orthonormal; bases don't have to be orthonormal, and often they're not. But in quantum mechanics, we usually use orthonormal bases. To say that a basis is orthonormal means, as I'm sure you know, that the scalar product of a pair of basis vectors is a Kronecker delta, like this. Orthonormal bases are the most useful ones in quantum mechanics, so let's talk about those. Then if I have an arbitrary ket, or as we say, state, psi, then because this is a basis, I can expand it as a linear combination of the basis kets with some coefficients. And by the orthonormality relation, it follows immediately that the expansion coefficients are just given by the scalar products of the basis kets with the original vector psi, like this. I'm sure you're familiar with this sort of thing. Now, in this expansion I've got a complex number multiplying a ket. When we write that, we can also put the complex number on the other side; it doesn't matter which side you put the complex number on when you multiply a complex number times a ket.
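The outer product defined just above can also be sketched numerically. In a finite-dimensional stand-in, |alpha><beta| is the matrix with entries alpha_i times the conjugate of beta_j; the dimension, seed, and names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)
beta = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# |alpha><beta| as a matrix: the bra contributes the conjugated row
op = np.outer(alpha, beta.conj())

# (|alpha><beta|) |psi|  equals  |alpha> times the number <beta|psi>
assert np.allclose(op @ psi, alpha * np.vdot(beta, psi))
```

Reading the three symbols with either placement of parentheses gives the same result, which is the content of the definition.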
But I'll put it over here on the other side, because now what I want to do is take the expansion coefficient and plug it in here. And if you do that, then you get the sum over n of ket n times bra n with psi, like this. Read one way, this is ket n times a complex number. But we can read it another way, as the outer product of n with n acting on ket psi, like this. In fact, let me move the parentheses like this, because psi itself doesn't depend on n. And so what you see is that there's an operator in these parentheses here, and when it acts on an arbitrary ket, it reproduces the same ket all over again. Therefore, that operator must be the identity. So we write it this way: the sum of the outer products of the basis kets with themselves is equal to 1, where 1 stands for the identity operator. It's common in quantum mechanics to write the identity operator as just 1. And this is called the resolution of the identity associated with the basis, here assumed to be a discrete basis. Later on, we'll generalize this to the case of continuous bases, but this is what it looks like. So this is an example of where the outer product appears, in the resolution of the identity. Now, another topic. I've previously defined the dagger operation on kets, which maps them into bras, and the dagger operation on bras, which maps them into kets. The dagger operation is Hermitian conjugation. It's convenient to extend the definition of the dagger operation to complex numbers, in which case it's just interpreted as complex conjugation. Now I want to extend this even further, to define the Hermitian conjugate of an operator. Here's the idea. If I have a linear operator A, a linear mapping of the ket space into itself, I want to define the Hermitian conjugate of A, which will be another linear operator on the same ket space. And the question is how to define this. And the answer is the following.
It is that we say A dagger acting on ket psi is defined like this: first take the ket psi and convert it into a bra using the dual correspondence; then let the operator A act on that bra, and we know how linear operators act on bras; and then take the whole thing and finish by Hermitian conjugating it again, back into a ket. This is the definition of A dagger. Now, the question that arises is: if A is linear, is A dagger also linear? And the answer is yes. Because, you see, when I take psi and convert it into a bra by the dual correspondence, that's an antilinear operation. Then I let A act on the bra, which is linear. But then I Hermitian conjugate a second time, which is another antilinear operation, and two antilinear operations compose to a linear one. So the result is a linear operator; A dagger is linear. All right, so this is the definition of A dagger. There's a whole bunch of simple consequences of this definition, which I'll just write down here. One of them is that if I take A and dagger it twice, I get A back all over again, just as with the other instances of the Hermitian conjugate operation. Another property is that if you take a product of two operators and dagger it, you get the daggered operators in the reverse order; you have to reverse them. The third property concerns a matrix element of an operator A between two states, psi and phi. Let's take the complex conjugate of that matrix element. This is equal to the original matrix element read in the reverse order, where everything is daggered as you move through it. So we start at the right: the ket phi, we dagger it, turning it into a bra; the operator A gets turned into A dagger; and the bra psi gets turned into the ket psi, everything in reverse order. Here's another consequence. If I take the outer product of two kets, alpha and beta, that's a linear operator, so I can dagger it.
What this turns into is the outer product of the two kets in the reverse order. These are all easy consequences of the definition. Now, one result of these definitions is a general rule in quantum mechanics for whenever you're dealing with kets, bras, linear operators, and complex numbers all multiplied together. The kind of multiplication can be ordinary multiplication, it can be operators acting either to the right or to the left, or it can be the outer product of two kets; all it has to be is a meaningful expression. The rule is that if you want to take the Hermitian conjugate of it, you read the expression backwards and apply the dagger to each item separately. Complex numbers go into their complex conjugates, and so on. In fact, there are a couple of examples of that rule right here. You see, this has been read backwards; the star is complex conjugation of the whole result, which is the complex number of the matrix element. And here, you're reading it backwards and daggering things; reading it backwards and daggering. The rule applies quite generally. This is all part of the neatness, the advantage, of the Dirac notation for calculations in quantum mechanics: it makes all these operations quite natural in the notation. Now, let me make another definition, and I'm sure you know this one: a Hermitian operator A is one which is equal to its own Hermitian conjugate. That's the definition of a Hermitian operator. Hermitian operators are related to what are called self-adjoint operators. If you're a mathematician, you go to great trouble to draw the distinction between these two and to be aware of when the two are the same and when they differ. But for practical applications in quantum mechanics, we can consider self-adjoint and Hermitian as being essentially synonymous. If this worries you, you need to go study a course in functional analysis; I'll just put it that way.
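As an editorial sketch, both the resolution of the identity from earlier and these Hermitian-conjugation rules can be verified numerically. For matrices, the dagger is the conjugate transpose; the orthonormal basis below is manufactured from a QR factorization, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# an orthonormal basis: columns of a unitary matrix (QR of a random matrix)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)
basis = [Q[:, k] for k in range(n)]

# resolution of the identity: sum over n of |n><n| equals 1
ident = sum(np.outer(b, b.conj()) for b in basis)
assert np.allclose(ident, np.eye(n))

def dag(X):
    # the dagger of a matrix is its conjugate transpose
    return X.conj().T

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

assert np.allclose(dag(dag(A)), A)               # daggering twice gives A back
assert np.allclose(dag(A @ B), dag(B) @ dag(A))  # products reverse order
# the conjugated matrix element: <psi|A|phi>* = <phi|A dagger|psi>
assert np.isclose(np.conj(np.vdot(psi, A @ phi)), np.vdot(phi, dag(A) @ psi))
# (|alpha><beta|) dagger = |beta><alpha|
alpha, beta = basis[0], basis[1]
assert np.allclose(dag(np.outer(alpha, beta.conj())),
                   np.outer(beta, alpha.conj()))
```

Each assert is one of the rules stated above, written out for the finite-dimensional case.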
Actually, in a finite-dimensional space, the two terms, Hermitian and self-adjoint, are equivalent. The only issue arises in infinite-dimensional spaces, and I don't really want to get into the mathematical technicalities of infinite-dimensional spaces very much in this course. In any case, for most practical purposes, we can regard these two terms as synonymous. Now, Hermitian operators, of course, as I'm sure you know, are particularly important because they represent physical observables; we'll gradually say more about why this is true as we go on. Right now, let me just say one more thing. A Hermitian operator is said to be positive definite if it has the following property: if you form the matrix element of the operator A with the same state psi on both sides, the answer is always non-negative, and furthermore, it equals 0 if and only if the state psi itself vanishes. This is the definition of positive definite. A positive definite operator, from another point of view, is an operator whose eigenvalues are all positive. One can also speak of a non-negative definite operator; that's one whose eigenvalues may include zeros, but no negative numbers. Now, I haven't said anything about eigenvalues yet, but I will; in the next lecture, we'll talk about the eigenvalues and the spectrum of operators and things of that sort. OK, that's all for today.
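As a closing editorial sketch, the two characterizations of positive definiteness above, positive expectation values and positive eigenvalues, can be checked on a matrix that is positive definite by construction. The construction M dagger M plus the identity is a standard illustrative choice, not something from the lecture.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A = M†M + 1 is Hermitian and positive definite by construction
A = M.conj().T @ M + np.eye(n)
assert np.allclose(A, A.conj().T)  # Hermitian: A equals its own dagger

# <psi|A|psi> is real and strictly positive for a nonzero |psi>
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
expect = np.vdot(psi, A @ psi)
assert abs(expect.imag) < 1e-10 and expect.real > 0

# equivalently, all eigenvalues of A are positive
assert np.all(np.linalg.eigvalsh(A) > 0)
```

A non-negative definite operator would pass the same checks with "greater than or equal to" in place of "greater than".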