Yesterday I introduced the concept of the density matrix. How can you describe a state when you have only partial knowledge of the system? We have seen that we cannot always describe the system by a pure quantum state, because there are situations where you only know that the system is in a given state with a given probability. For example, if the probability of being in the state |ψ_i⟩ is p_i, then you have to describe the system using the operator called the density matrix, which has the form

ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|.

Using this operator you can compute the expectation values of observables as

⟨O⟩ = Tr(ρ O).

In particular, if you are interested in the probability of a measurement outcome, the probability of a given outcome λ can be written as the expectation value of the projector Π_λ onto the eigenspace corresponding to that eigenvalue:

P(λ) = Tr(ρ Π_λ).

This was just an introduction to the density-matrix formalism. Then we talked a bit about the correspondence principle in statistical physics. The idea is to replace the classical density ρ_cl(q, p) in phase space, where q and p are conjugate variables and dq dp is the measure in phase space, by the density matrix of quantum mechanics. In particular, if you apply this prescription at equilibrium at temperature T, then instead of working with the well-known classical Gibbs distribution

ρ_cl ∝ e^{−βH(q,p)},   β = 1/(k_B T),

you work with the density matrix

ρ ∝ e^{−βH},

which has the same form: you have simply replaced the classical energy by the Hamiltonian operator, as we always do.

In the end I also said something about the entropy of a state. We know the definition of entropy in classical physics: it measures the number of microstates with given thermodynamic properties, for example a given temperature or a given total energy. Let me erase here and write the classical expression:

S_cl = −∫ dq dp ρ_cl ln(ρ_cl · C).

Inside the logarithm we should put some constant C, because the argument of a logarithm must be dimensionless. The fact that you have to insert a constant means that in classical statistical physics the entropy is defined only up to an additive constant. (Yes, the ρ in this formula is the classical one, ρ_cl.) In which ensemble? This is a general definition of the entropy: given a distribution ρ_cl, you can define this entropy, and its meaning is essentially that it counts microstates. Now what happens when you apply the correspondence principle? In the quantum case the entropy is defined as

S = −Tr(ρ ln ρ),

where ρ is the density matrix. We also know that if we consider a system at sufficiently high temperature, we can neglect quantum effects.
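To make this concrete, here is a small numerical sketch of my own (not part of the lecture): it builds the thermal density matrix e^{−βH}/Z for a toy two-level Hamiltonian, evaluates an expectation value by the trace formula, and computes S = −Tr(ρ ln ρ). The Hamiltonian, the observable, and the value of β are arbitrary choices.

```python
import numpy as np

beta = 2.0                          # inverse temperature (arbitrary choice)
H = np.array([[0.5, 0.3],           # toy two-level Hamiltonian
              [0.3, -0.5]])

# Thermal density matrix rho = e^{-beta H} / Z, built in the eigenbasis of H
E, V = np.linalg.eigh(H)
w = np.exp(-beta * E)
rho = (V * w) @ V.conj().T
rho /= np.trace(rho)

# Expectation value of an observable: <O> = Tr(rho O)
O = np.array([[1.0, 0.0],
              [0.0, -1.0]])
print("<O> =", np.trace(rho @ O).real)

# Entropy S = -Tr(rho ln rho), computed from the eigenvalues of rho
p = np.linalg.eigvalsh(rho)
print("S =", -np.sum(p * np.log(p)))   # -> ln 2 as beta -> 0, -> 0 as beta -> inf
```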
In particular, if we write the density matrix of a thermal state like this and take the temperature sufficiently high, we should essentially recover the classical entropy, up to an additive constant. This must be true, because we live in a classical world. So to see quantum effects we have to go to low temperature, and we can expect the quantum effects to be maximal at zero temperature, where there are no thermal fluctuations left. If you imagine a classical state at zero temperature, you imagine something frozen: nothing moves, everything is stopped, and there are no correlations between the parts. But because we live in a quantum world, at zero temperature we must take into account quantum fluctuations, and so we should expect some peculiar behavior there.

First of all, what happens if we compute the entropy of the state at zero temperature, with this definition of entropy? If we assume that, as we lower the temperature, we approach a pure state, just one state, then it is easy to show that the entropy is zero. Why is it zero? Because for a pure state the eigenvalues of the density matrix are either zero or one. If an eigenvalue is zero, its contribution vanishes because of the prefactor: zero times something. The logarithm is divergent there, but the divergence of the logarithm is much milder than the zero of the prefactor; if you plot x ln x as a function of x, you find that this function goes to zero as x → 0. And the contribution from the eigenvalue equal to one vanishes because ln 1 = 0. So if you have a pure state, the entropy of the total state is zero. This is the third law of thermodynamics: zero entropy at zero temperature.

You can imagine exceptions to this behavior, because I told you it happens when you approach a pure state as you lower the temperature. In some physical systems the ground state can be degenerate, and then this argument no longer holds: you could have a finite entropy. What do I mean by degeneracy? If you plot the spectrum of the Hamiltonian, generically you have a ladder of levels. If there is just one ground state, then as you lower the temperature you end up in that ground state. But if there are two states with the same energy, you can end up in a superposition, coherent or incoherent; it depends on the system. Because, as we will see, if there is a symmetry in the system and the symmetry is broken at zero temperature, then even with degeneracy you end up in a single pure state. So this depends on the system under consideration. For the sake of simplicity, we consider cases where at zero temperature we approach just a single state, so the entropy is zero. OK, fine. But then we don't get much information from this entropy, I'd say. Nevertheless, we know that at zero temperature only quantum effects remain.
So anything we can compute should show some kind of quantum correlations in the system and give us information about the quantumness of the state. We cannot get anything from S = 0 for the total state, but we can get something by considering reduced density matrices. So let's assume we are in a pure state, described by a given quantum state |ψ⟩, and let's consider a bipartition of the system: we divide it in two parts, which we call A and B. Yesterday we saw how to describe observables restricted to A, so let's consider the reduced density matrix: we say we are not interested in what happens in B, and we construct the density matrix of A. We know how to do it: ρ_A is the partial trace over B of the original density matrix, which is just the projector onto the state,

ρ_A = Tr_B |ψ⟩⟨ψ|.

For example, if this is a spin chain, you could say that the first n spins are in A and the remaining spins are in B. Or if you have a system of particles, you could put one particle in A and the rest in B. The total space is A ∪ B. The idea is that, since we don't get any useful information from the total state, let's see what we can get from the reduced density matrix, which we are sure will be something quantum, because we are at zero temperature.

But before considering the quantum case, let's think classically. What would we expect if we computed something like a reduced density matrix for a classical state? Everything is frozen. Computing the reduced density matrix means neglecting what happens in B, but in the classical case I don't care what happens outside my subsystem, because there are no correlations. So if this were classical, I would not expect any difference from the total result: I am just tracing out degrees of freedom that are completely irrelevant for my physics. In the classical case, then, you would expect the entropy of the subsystem to be zero as well, because there are no correlations. I am telling you this just to make you think that if we find a non-zero result for the entropy of the reduced density matrix, it will be a purely quantum effect. If the system is somehow classical, we will still find zero; zero would mean classical. If instead we find a non-zero value for this entropy, it means there is some quantumness in our state.

We saw yesterday the meaning of this expression, and we saw that in general this reduced density matrix has eigenvalues different from zero and one. So in general this entropy will be different from zero. What is it called? The entropy of the subsystem A is called the entropy of entanglement, or bipartite entanglement entropy. Let me write it:

S_A = −Tr(ρ_A ln ρ_A).

This is the definition. Let's now see some properties of this entanglement entropy.
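As a concrete illustration (my own sketch, not from the lecture): for two qubits, the partial trace over B amounts to reshaping the state vector into a 2×2 array of coefficients and contracting the B index. An entangled Bell state then gives S_A = ln 2, while a product state gives S_A = 0, the "classical-like" value discussed above.

```python
import numpy as np

def reduced_rho_A(psi, dim_A, dim_B):
    """rho_A = Tr_B |psi><psi|: reshape |psi> into a dim_A x dim_B
    coefficient array C; then rho_A = C C^dagger."""
    C = psi.reshape(dim_A, dim_B)
    return C @ C.conj().T

def entropy(rho):
    """Von Neumann entropy -Tr(rho ln rho), dropping zero eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.array([1.0, 0, 0, 0])              # |00>

print(entropy(reduced_rho_A(bell, 2, 2)))    # ~ ln 2: entangled
print(entropy(reduced_rho_A(prod, 2, 2)))    # ~ 0: no entanglement
```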
In particular, I defined the entanglement entropy for the subsystem A. What happens if we compute it for the subsystem B? We have a bipartition; does it matter which of the two we compute? The answer is no, it doesn't matter. There is a very nice result showing that, because this is a bipartition of a pure state, the entropy of A is equal to the entropy of B. Actually the result is even stronger: if you take the reduced density matrix ρ_A and compute all its eigenvalues, which we call λ_i^A, and you compute the eigenvalues λ_i^B of ρ_B, then you can show that all the non-zero eigenvalues of ρ_A are equal to the non-zero eigenvalues of ρ_B. This is not difficult to show, and since you are interested we can quickly see why it holds: it is a corollary of the Schmidt decomposition of a state. What is important for the theorem is that the total system is in a pure state; this is fundamental. Given that, you can consider any bipartition and you will find this result.

(Is the entropy additive here, as in the classical case? No, and this is one difference between the classical entropy and this one: classically you expect the entropy of a union of subsystems to be the sum of their entropies, that is, extensivity. At zero temperature the total entropy becomes zero, so the extensive part is removed. At any finite temperature this extensivity remains, though one must be a bit careful: extensivity is usually stated for subsystems much smaller than the total system, so that even the union of large subsystems is still much smaller than the whole. In that limit the entropy is additive; if you literally take half of the system, I am not one hundred percent sure it remains exactly extensive. But this is not important here: we are at zero temperature, the entropy of the total state is zero, and we are asking what the entropy of a part is.)

So let's prove that the two sets of eigenvalues are equal, up to zero eigenvalues. The idea is to write the initial state in a basis adapted to the bipartition, as a generic superposition with some coefficients C_{nm}. We choose a basis consisting of tensor products of bases of the two spaces: some orthonormal basis |φ_n^A⟩ of the space A and some orthonormal basis |φ_m^B⟩ of the space B. The elements of a basis of the entire space A ∪ B are then the tensor products |φ_n^A⟩ ⊗ |φ_m^B⟩, and the pure state |ψ⟩ is a generic superposition of them:

|ψ⟩ = Σ_{n,m} C_{nm} |φ_n^A⟩ ⊗ |φ_m^B⟩.

I haven't done anything deep yet. The index n runs from 1 to the dimension N_A of the space A; the index m runs from 1 to the dimension N_B of the rest of the space. So C_{nm} is a rectangular matrix. Just to fix ideas, let's assume N_A ≤ N_B; then if you draw the matrix C, the number of rows is smaller than the number of columns.
Now there is a nice result called the singular value decomposition, valid for any rectangular matrix, which is a generalization of the standard spectral theorem for the usual eigenvalue problem. The theorem states that a generic complex rectangular matrix can always be rewritten as a unitary matrix, times a rectangular diagonal matrix, times another unitary matrix:

C = U D V†.

What do I mean? D is "diagonal" in the rectangular sense: all its non-zero entries sit along the diagonal, and every other entry of the matrix is zero. The same picture holds if the dimensions are reversed: if N_A is larger than N_B, the block of zeros simply sits below the diagonal instead of beside it. Unitary means U U† = U† U = 1 and V V† = V† V = 1. This is a theorem that you can prove; I am not going to prove it here. The point is simply that any matrix can be decomposed in this form: a unitary, times this kind of diagonal rectangular matrix, times another unitary.

Now let's see the consequences of this mathematical theorem. We apply it to our matrix C and write it in terms of indices:

C_{nm} = Σ_{l=1}^{min(N_A, N_B)} U_{nl} λ_l V†_{lm},

where the λ_l are the matrix elements on the diagonal of D. (So far they are just numbers, the numbers on the diagonal of D in this decomposition; we will see their meaning in a moment.) The sum runs up to min(N_A, N_B): since D is rectangular, the number of non-zero diagonal elements cannot exceed the smaller of its two sizes, the minimum between the number of rows and the number of columns.

Now let's write our pure state using this decomposition:

|ψ⟩ = Σ_{n=1}^{N_A} Σ_{m=1}^{N_B} Σ_{l=1}^{min(N_A,N_B)} U_{nl} λ_l V†_{lm} |φ_n^A⟩ ⊗ |φ_m^B⟩.

Now we can change the basis of our state. We define new vectors of this form:

|ψ_l^A⟩ = Σ_{n=1}^{N_A} U_{nl} |φ_n^A⟩,    |ψ_l^B⟩ = Σ_{m=1}^{N_B} V†_{lm} |φ_m^B⟩.

Because U is unitary, these vectors are normalized; we are just changing basis in each space. (To answer the question: yes, so far this is still the completely general state from before; I am only replacing C by its decomposition.) And what is the result? The result is that our state can be written as

|ψ⟩ = Σ_{l=1}^{min(N_A,N_B)} λ_l |ψ_l^A⟩ ⊗ |ψ_l^B⟩.

This is called the Schmidt decomposition.
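Numerically, the Schmidt decomposition is exactly the singular value decomposition; here is a quick check of my own (random coefficients, arbitrary dimensions) that numpy's SVD reproduces C = U D V† and that the Schmidt coefficients of a normalized state satisfy Σ_l λ_l² = 1:

```python
import numpy as np

nA, nB = 3, 5
rng = np.random.default_rng(0)
C = rng.normal(size=(nA, nB)) + 1j * rng.normal(size=(nA, nB))
C /= np.linalg.norm(C)                  # normalize the state

U, lam, Vh = np.linalg.svd(C, full_matrices=False)
print(np.allclose(C, U @ np.diag(lam) @ Vh))   # C = U D V†
print(lam)                  # the min(nA, nB) = 3 Schmidt coefficients
print(np.sum(lam**2))       # = 1 for a normalized state
```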
(To answer the questions: the only property of the total state that we used is that it is pure; the coefficient matrix C is completely generic, apart from coming from a normalized state. And the singular value decomposition holds for any complex rectangular matrix, which is important here precisely because the two dimensions can be different. It is a powerful result, completely independent of the state.)

Why is this nice? Because the new basis is orthonormal, thanks to the unitarity conditions. So now, from this state, let's compute the reduced density matrix of the subsystem A. What do we find? By definition,

ρ_A = Tr_B |ψ⟩⟨ψ| = Tr_B [ Σ_{l,n=1}^{min(N_A,N_B)} λ_l λ_n* |ψ_l^A⟩⟨ψ_n^A| ⊗ |ψ_l^B⟩⟨ψ_n^B| ].

(The complex conjugate on λ_n appears because the bra is the adjoint of the ket.) The trace is only over the space B, so it acts only on the B factors. The definition of the trace is a sum over a complete orthonormal basis of B, and we already have one: the |ψ_j^B⟩, extended if necessary with further orthonormal elements. So

ρ_A = Σ_{j=1}^{N_B} ⟨ψ_j^B| [ Σ_{l,n} λ_l λ_n* |ψ_l^A⟩⟨ψ_n^A| ⊗ |ψ_l^B⟩⟨ψ_n^B| ] |ψ_j^B⟩.

(Two questions. What was C? Just the coefficients of the expansion of the original pure state: we chose a basis |φ_n^A⟩ ⊗ |φ_m^B⟩ and wrote the state with generic coefficients C_{nm}. And D? The rectangular matrix with λ_1, λ_2, … on the diagonal, up to the minimum of the number of rows and columns, and zeros everywhere else: it is not a true diagonal matrix but a diagonal truncated at the minimal size, because a rectangular matrix cannot have more diagonal elements than that.)

Now we use that the |ψ_l^B⟩ form an orthonormal basis: the scalar product ⟨ψ_j^B|ψ_l^B⟩ is the Kronecker delta δ_{lj}, and likewise ⟨ψ_n^B|ψ_j^B⟩ = δ_{nj}. So in the end we find

ρ_A = Σ_{l=1}^{min(N_A,N_B)} |λ_l|² |ψ_l^A⟩⟨ψ_l^A|.

The |ψ_l^A⟩ are orthonormal vectors in A, so this is just the spectral decomposition of ρ_A: we now realize that the squared absolute values |λ_l|² of the coefficients on the diagonal of D are the eigenvalues of the density matrix ρ_A. We can do the same for B, and we obtain exactly the same result.
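Before repeating the calculation for B on the board, here is a numerical check of my own (not from the lecture) that the nonzero spectra of ρ_A and ρ_B, and hence their entropies, coincide even when the two subsystem dimensions are very different:

```python
import numpy as np

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # drop the zero eigenvalues
    return -np.sum(p * np.log(p))

nA, nB = 3, 7
rng = np.random.default_rng(1)
C = rng.normal(size=(nA, nB)) + 1j * rng.normal(size=(nA, nB))
C /= np.linalg.norm(C)               # random normalized pure state

rhoA = C @ C.conj().T                # Tr_B |psi><psi|, a 3x3 matrix
rhoB = (C.conj().T @ C).conj()       # Tr_A |psi><psi|, a 7x7 matrix

print(np.linalg.eigvalsh(rhoA))      # 3 eigenvalues
print(np.linalg.eigvalsh(rhoB))      # the same 3, plus 4 zeros
print(entropy(rhoA), entropy(rhoB))  # equal: S_A = S_B
```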
For ρ_B we trace over the states of A, we get exactly the same Kronecker deltas, δ_{lj} and δ_{nj}, and so, by construction, the same result:

ρ_B = Σ_{l=1}^{min(N_A,N_B)} |λ_l|² |ψ_l^B⟩⟨ψ_l^B|.

Again this is a spectral decomposition: the |ψ_l^B⟩⟨ψ_l^B| are projectors, now on the other space, and the |λ_l|² are the eigenvalues. The eigenstates are different, in one case in A and in the other in B, but the eigenvalues are the same. So you see that all the non-zero eigenvalues of the density matrix of subsystem A are equal to those of subsystem B. And if you compute the entropy,

S_A = −Tr(ρ_A ln ρ_A) = −Σ_l |λ_l|² ln |λ_l|²,

using the spectral decomposition of ρ_A, you find the same result for both ρ_A and ρ_B. So the entropy of one part is exactly equal to the entropy of the other part, independently of how big the first part is with respect to the other.

Actually this had to happen: this entropy of the reduced density matrix describes the quantum correlations between one part and the other, and you want that to be symmetric, because if you are describing correlations between A and B you are also describing correlations between B and A. So it is good that we found this result; otherwise our interpretation would have been wrong. Indeed, if you try to apply the same construction at finite temperature, you will see that the equality no longer holds: there the entropy does not describe only quantum correlations, there are also classical contributions. So it is important that we found S_A = S_B.

(On the range of the sum: yes, strictly one should put the minimum of the dimensions, but all the remaining eigenvalues are zero, so you can equally extend the sum to include them; they give zero contribution, so it is completely equivalent. And on the partial trace: the definition is given by a trace, which is independent of the basis you choose; taking an orthonormal basis is just a convenient choice for the calculation.)

With this, I have introduced the entanglement entropy. Maybe later in the lectures we will discuss some of its further properties, when we need them; we will see in a few days. But now I would like to start discussing quantum many-body systems, so we change subject. If there are questions, ask now; otherwise I erase the blackboard.

So, yesterday we talked about spin systems, a single spin or two spins. Today we consider another very simple quantum model, which is the harmonic oscillator. I guess you all know the harmonic oscillator and have seen its solution, at least in position space, so you know that the ground state is a Gaussian.
And you know that the excited states can be written, as some of you will remember, as Hermite polynomials times that Gaussian. But today I would like to present a different, alternative way to solve the model, which in a sense is easier: we won't have to work with special functions, Hermite polynomials or whatever, and we will still be able to solve the harmonic oscillator and find the spectrum of the Hamiltonian immediately.

We know the Hamiltonian of the system can be written in this form:

H = p²/(2m) + (1/2) m ω² x².

What does it describe? It is a simple model, but very useful in physics, for the following reason. In general, when you want to describe a system under the effect of some potential, you always have a kinetic part, which in a non-relativistic theory has the form p²/(2m), with m the mass of your particle, plus whatever the potential is. Suppose you are able to solve the problem and find all the energy levels. Then you often realize that there are levels with energy sufficiently low that you can approximate your potential by a harmonic one. So no matter how complicated the potential is, there is generally a regime of sufficiently low energy where you can replace it by its harmonic approximation, and you end up with a Hamiltonian exactly like this. This is why the model is simple but very useful in physics: it is not just an artificial model, and that is why we start with the harmonic oscillator.

So how can we solve this model? First of all, what are x and p? x is the position and p is the momentum, and because this is a quantum harmonic oscillator, position and momentum satisfy the commutation relation [x, p] = iħ, with ħ = 1: we assume natural units. Are you fine with that? So this is our Hamiltonian, and we want to diagonalize it, that is, find its eigenvalues and eigenvectors. I suggest considering some new variables, and trying to rewrite the Hamiltonian in terms of these two new variables. Define the operator

a = √(mω/2) ( x + (i/(mω)) p ),

and its adjoint,

a† = √(mω/2) ( x − (i/(mω)) p ).

This is just a definition. If we want, we can recover the original position and momentum operators by inverting these relations. And what do we find? We find

x = (1/√(2mω)) (a + a†),    p = i √(mω/2) (a† − a).

You can check them; I hope the coefficients are right. This change of variables is complete, because the original operators can be expressed in terms of the new ones, a and a†. In particular, we can rewrite the Hamiltonian in terms of a and a†: just plug these expressions in and see what happens. What do we find? If we do this, we find

H = ω ( a† a + 1/2 ),

which is nice, no?
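As a check of this rewriting (my own sketch, not from the lecture), one can represent a and a† as matrices in a truncated Fock basis and verify that p²/2m + ½mω²x² really equals ω(a†a + 1/2); the truncation dimension is an arbitrary choice, and only the last truncated level misbehaves:

```python
import numpy as np

N = 12                       # Fock-space truncation (arbitrary)
m = omega = 1.0              # arbitrary mass and frequency

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n) |n-1>
adag = a.conj().T

# [a, a†] = 1, exact except at the last truncated level
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))

x = (a + adag) / np.sqrt(2 * m * omega)
p = 1j * np.sqrt(m * omega / 2) * (adag - a)

H1 = p @ p / (2 * m) + 0.5 * m * omega**2 * x @ x
H2 = omega * (adag @ a + 0.5 * np.eye(N))
print(np.allclose(H1[:-1, :-1], H2[:-1, :-1]))   # the same operator
```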
It is nice, first of all, because it is more compact than the original Hamiltonian. But there is another reason, which is the following. Ah, sorry, I forgot to tell you something first: x and p satisfy [x, p] = i, and if you use this for a and a†, you immediately find that they satisfy

[a, a†] = 1.

So instead of an i we now have a 1. Now let us compute the commutator between H and a†; doing it this way is easier than manipulating H a† directly. We have H = ω(a† a + 1/2), and the commutator of a constant with anything is zero, so the 1/2 does not matter. (Why can we drop it? Because the commutator of a number λ with an operator is λ times the operator minus the operator times λ, which vanishes; this holds for generic operators.) There is a very well-known property of commutators:

[AB, C] = A [B, C] + [A, C] B.

Let's use this property here:

[H, a†] = ω a† [a, a†] + ω [a†, a†] a = ω a† · 1 + 0 = ω a†,

because [a, a†] = 1 and the commutator of an operator with itself is zero. So we found this result.

Why is this interesting? Let's now assume that |ψ_E⟩ is an eigenstate of the Hamiltonian with energy E; that is, by definition, H |ψ_E⟩ = E |ψ_E⟩. Now apply H to a† |ψ_E⟩ and use our result. Just to be clear about what I am using, because it is simple: [A, B] = AB − BA means AB = [A, B] + BA, so H a† = [H, a†] + a† H. Then

H a† |ψ_E⟩ = [H, a†] |ψ_E⟩ + a† H |ψ_E⟩ = ω a† |ψ_E⟩ + E a† |ψ_E⟩ = (E + ω) a† |ψ_E⟩.

So from this we see that if |ψ_E⟩ is an eigenvector of H with eigenvalue E, then a† |ψ_E⟩ is an eigenvector of H with eigenvalue E + ω. Here I used the commutator between H and a†.
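With the same truncated matrices one can verify the ladder relation and the energy shift (again my own sketch, not from the lecture; in the truncated basis H is diagonal, so this particular check happens to be exact):

```python
import numpy as np

N, omega = 20, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T
H = omega * (adag @ a + 0.5 * np.eye(N))

print(np.allclose(H @ adag - adag @ H, omega * adag))   # [H, a†] = ω a†

# a† maps an eigenstate with energy E to one with energy E + ω
E, V = np.linalg.eigh(H)
psi = adag @ V[:, 0]                 # raise the ground state
psi /= np.linalg.norm(psi)
print(psi @ H @ psi, "=", E[0] + omega)   # both equal 1.5
```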
We could also compute the commutator between H and a. There is a simple way to get it without redoing the calculation: take the adjoint of the previous result. Since H is Hermitian, ([H, a†])† = (H a† − a† H)† = a H − H a = −[H, a], so

[H, a] = −( ω a† )† = −ω a.

If you want, you can compute it again directly, but this is just a quick way to use the previous result to obtain the new commutator. So if we now consider H a |ψ_E⟩ instead of H a† |ψ_E⟩ and do exactly the same steps of the derivation, using this new commutator, what do we find? We find

H a |ψ_E⟩ = (E − ω) a |ψ_E⟩.

What is the meaning of this expression? The meaning is that if |ψ_E⟩ is an eigenstate with energy E, then a |ψ_E⟩ is an eigenstate with energy E − ω.

Now let's consider the ground state of the model. How is the ground state defined? It is the state with minimal energy, so there is no state with lower energy than the ground state. But this operator a produces states of lower and lower energy: if E were the energy of the ground state, applying a would give us something with lower energy. The only possibility is that a applied to the ground state is equal to zero:

a |GS⟩ = 0.

So we have found the defining property of the ground state, and actually this property is enough to solve the problem.

(Two questions. Yes, the eigenvalues are spaced by ω, and as we will see they are half-integers times ω, not integers. And how do we know a ground state exists at all? Mathematically, you can prove that the harmonic oscillator Hamiltonian has a state of minimal energy. We are not doing mathematics here; we just assume there is a ground state. In general, a physical Hamiltonian in quantum physics must have a minimal-energy state, otherwise the system is unstable and the Hamiltonian is not physical. So if you have a Hamiltonian with this problem, probably your Hamiltonian is wrong, not the principle.)

So this is the definition of the ground state: a |GS⟩ = 0. And if you want, we can solve this equation explicitly, because we know the expression for a as a linear combination of x and p: it becomes a first-order differential equation, no longer second order, and you can solve it easily without checking any book. For the following we will also use another notation: instead of writing "ground state" we will write the vacuum, |0⟩, so that a |0⟩ = 0.

Which are the excited states? First of all, what is the energy of the ground state? You all know it already, and now it is really easy: the Hamiltonian has the form ω(a† a + 1/2), and because a applied to the ground state is zero, only the constant term survives. So we find that the energy of the ground state is

E_0 = ω/2,

all without introducing Gaussians or Hermite polynomials.
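If you do want the wavefunction, the first-order equation is easy to integrate; here is that step worked out (my own filling-in, using the position representation p = −i d/dx; it was not done on the board):

```latex
% a|0> = 0 in the position representation (hbar = 1):
%   a = sqrt(m w / 2) ( x + i p / (m w) ),   p = -i d/dx
\begin{align}
  a\,\psi_0(x) &= \sqrt{\tfrac{m\omega}{2}}
      \Bigl( x + \tfrac{1}{m\omega}\,\tfrac{d}{dx} \Bigr)\psi_0(x) = 0
  \\
  \Longrightarrow\quad \frac{d\psi_0}{dx} &= -\,m\omega\,x\,\psi_0(x)
  \quad\Longrightarrow\quad
  \psi_0(x) = \Bigl(\tfrac{m\omega}{\pi}\Bigr)^{1/4} e^{-m\omega x^2/2},
\end{align}
% i.e. the familiar Gaussian ground state, recovered purely from a|0> = 0.
```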
So we obtained the ground-state energy of the harmonic oscillator. Actually, we obtain the whole spectrum of the harmonic oscillator, because we know all the excited states: we just have to apply a†, since every time we apply a† we increase the energy by ω. So the generic excited state is proportional to a† applied n times to the ground state, to the vacuum:

|n⟩ ∝ (a†)ⁿ |0⟩.

By the way, this is not normalized, but we can easily normalize it. Shall we? Assume the ground state is normalized, and write |n⟩ = A_n (a†)ⁿ |0⟩; let's determine the constant A_n by imposing normalization:

1 = ⟨n|n⟩ = |A_n|² ⟨0| aⁿ (a†)ⁿ |0⟩.

So we have to compute ⟨0| aⁿ (a†)ⁿ |0⟩. This seems complicated, but it is not. What can we do? We can isolate one a,

⟨0| aⁿ⁻¹ · a (a†)ⁿ |0⟩,

and rewrite a (a†)ⁿ as the commutator plus the operators interchanged:

⟨0| aⁿ⁻¹ [a, (a†)ⁿ] |0⟩ + ⟨0| aⁿ⁻¹ (a†)ⁿ a |0⟩.

I just rewrote the same expression using a commutator. But in the second term a acts directly on the ground state, so that term is zero, and we are left with the first. Do you know how to compute this commutator? There is a simple general result: you can easily prove that [a, f(a†)] = f′(a†), or, in our specific case,

[a, (a†)ⁿ] = n (a†)ⁿ⁻¹.

Let me give you the general statement briefly. Suppose you have two operators A and B such that [A, B] = λ is a c-number, an ordinary number. Then

[A, Bⁿ] = n Bⁿ⁻¹ λ.

You prove it by applying the product rule for commutators n times. Here we have exactly this kind of commutator, so the result is

⟨0| aⁿ (a†)ⁿ |0⟩ = n ⟨0| aⁿ⁻¹ (a†)ⁿ⁻¹ |0⟩.

This is a recurrence relation: each step brings down a factor equal to the current number of operators, first n, then n − 1 (because now there are n − 1 operators), and so on, down to 1. So in the end this expectation value is n (n−1) ⋯ 1 = n!, and therefore the coefficient is

A_n = 1/√(n!).

So now we have also normalized the excited states. What is nice here is that, indeed, we did not introduce Hermite polynomials, and we already know the spectrum.
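The factorial can also be checked with the truncated matrices (my own sketch, not from the lecture; keep n well below the truncation N so that boundary effects do not enter):

```python
import numpy as np
from math import factorial

N = 30
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T
vac = np.zeros(N)
vac[0] = 1.0                                    # the vacuum |0>

for n in range(1, 6):
    v = np.linalg.matrix_power(adag, n) @ vac   # (a†)^n |0>
    print(n, v @ v, factorial(n))               # <0| a^n (a†)^n |0> = n!
```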
And we now know how to write the excited states as well. Are we happy? I guess you already knew the spectrum and the excited states of the harmonic oscillator, but this is an alternative way to get them, and a rather elegant one. So far we haven't learned anything new with respect to what you knew before, just a different way to obtain the same result. But now let's consider a more complicated problem. This was the single oscillator, and we solved it by introducing the operators a and a†, which are generally called ladder operators. The problem I am interested in now is a chain of harmonic oscillators. So we complicate the problem: instead of a one-body system, a single harmonic oscillator, we now have N harmonic oscillators. We are starting to consider many-body systems, the kind of systems I am interested in for these lectures.

OK, so let's write it down. You can picture point masses connected by springs, but I prefer to give you a physical description. Imagine a crystal structure: ions sitting close to the sites of a crystal lattice. The white points of the picture are an abstraction, the sites of the lattice; the green points are the actual positions of the ions, one close to each site. Now, if we assume that the ions interact with one another, and that the interaction is harmonic (we are again just expanding the potential, assuming the energy is low: a harmonic approximation), then there is an effective interaction involving the displacements of the ions with respect to their sites, which we call x_1, x_2, and so on. With a harmonic potential between neighboring ions, you end up with a Hamiltonian of this form:

H = Σ_i p_i²/(2m) + (1/8) m ω² Σ_i (x_i − x_{i+1})².

Here the sums run over all the sites of the lattice, we assume all ions have the same mass m, p_i is the momentum of the ion at site i, and x_i is its displacement with respect to site i. (The 1/8 in front of the potential, instead of the usual 1/2, is just a convention: you can redefine ω to absorb it. I use it because it matches the factors in the calculation below.) This is the kind of Hamiltonian you find if you assume that the ions interact through a harmonic potential, and it is also called the harmonic solid; in one dimension, the harmonic chain.

(Yes, indeed: site i is coupled to i + 1 and i − 1, so it is a chain of coupled harmonic oscillators. I gave you the crystal picture just so you realize that x is not really a position but a displacement with respect to the lattice site; in general you expect the ions to vibrate around these positions.) For the sake of simplicity we assume periodic boundary conditions, which means the chain closes on itself, as if it were on a ring. This is just for simplicity. We would like to solve this Hamiltonian.
Here you see that this is much more complicated than a single oscillator, because the oscillators are coupled. How can we proceed? One possibility, also in this case, is what is called the first-quantization perspective: you write the wave function in terms of the positions of all the ions, apply H to this wave function, and try to diagonalize the problem. That is one standard way, and maybe you already know the solution: if you have followed some condensed-matter course you know that you end up with oscillation modes, that is, the eigenfunctions of the Hamiltonian correspond to collective oscillation modes of the ions, which are called phonons. Maybe some of you know this.

But now I would like to solve this problem using the same approach we used for the single harmonic oscillator, because in this case it is really convenient; you will see that it becomes very convenient. What is the idea? The Hamiltonian is quadratic in the momenta and in the positions. Again, let's write the algebra of x and p:

[x_l, p_j] = i δ_{lj},    [x_l, x_j] = [p_l, p_j] = 0.

Variables associated with different ions commute; only the pair associated with the same ion has quantum indeterminacy. Now, the commutator between x and p is a number, so we can apply the result I told you before: whenever you take the commutator of, for example, x with a function of p, you get [x, f(p)] = i f′(p). We can use this to guess the right form of the operators we would like to introduce. The Hamiltonian is a quadratic form in p and x, and the commutator of a quadratic form with something linear in x and p is again linear in x and p. So this means we can define operators of the form

a_l† = Σ_{j=1}^{N} ( X_{lj} p_j + P_{lj} x_j ),

where N is the number of sites and X_{lj}, P_{lj} are just coefficients to be determined. We can then try to solve the equation

[H, a_l†] = λ_l a_l†

for some constants λ_l. Why do we want this? Because if we are able to find operators with this property, we can apply the same chain of identities that we used for the single harmonic oscillator, and actually show that if |ψ_E⟩ is an eigenstate of the Hamiltonian with energy E, then a_l† |ψ_E⟩ is an eigenstate with energy E + λ_l. So if we find such operators, we can again solve the problem and express all the eigenstates in terms of them, like for the single harmonic oscillator. You can always write this ansatz; the physical reasoning is that, because the commutator between x and p is a number, the commutator between a linear combination of x and p and our Hamiltonian, which is quadratic, is linear again.
So there is some hope of solving this equation, because both sides are linear. If the Hamiltonian had a term which is not quadratic, for example a term u x_1⁴, I couldn't do this: the commutator of such a term with a linear combination like ours would produce something scaling like x³, and you cannot write x³ within this ansatz. So it is important that the Hamiltonian is quadratic in the operators x and p; it is a very special, exceptional case. For such a Hamiltonian we can guess the form of the operators in this very simple way, and we can expect to be able to solve the problem. So I think it is clear why we would like to find these operators; let's find them.

What do we have to do? We want to solve an equation like

[ Σ_i p_i²/(2m) + (1/8) m ω² Σ_{i=1}^{N} (x_i − x_{i+1})² , Σ_{j=1}^{N} ( X_{lj} p_j + P_{lj} x_j ) ] = λ_l Σ_{j=1}^{N} ( X_{lj} p_j + P_{lj} x_j ).

Because we impose periodic boundary conditions, x_{N+1} is identified with x_1: the site after the last site is the first site, so wherever the index N + 1 appears, it means 1. This seems a bit complicated, but it is not much of a hassle; it is just two lines of calculation.

So let's see what we have to do. Take first the kinetic term. We have p_i², and the commutator between different momenta is zero, so the only contribution comes from the commutator between the momenta and the positions. In general we have to compute the commutator between p_i² and a term like P_{lj} x_j, where P_{lj} is just a number:

[ p_i², P_{lj} x_j ] = P_{lj} [ p_i², x_j ] = 2 P_{lj} p_i [ p_i, x_j ] = −2i δ_{ij} P_{lj} p_i.

(Careful: this i is the imaginary unit, the square root of minus one, while the i on p_i is an index. Is that clear?) We can use this relation to write the contribution of the kinetic energy.

Then we have to consider the other contribution, the commutator of the potential term with X_{lj} p_j, because the commutator between position and position is zero: x and x commute, but x and p do not. So we need

[ (1/8) m ω² (x_i − x_{i+1})² , X_{lj} p_j ] = (1/8) m ω² X_{lj} · 2 (x_i − x_{i+1}) [ x_i − x_{i+1}, p_j ]
= (1/4) i m ω² X_{lj} (x_i − x_{i+1}) ( δ_{ij} − δ_{i+1,j} ),

where there are two possible contributions, from i = j and from i + 1 = j. So for this term, this is the result.
So now we can just collect all the terms. What do we find? On the right-hand side we have that expression, which I write again; on the left we have what we just computed:

λ_l Σ_{j=1}^{N} ( X_{lj} p_j + P_{lj} x_j ) = −(i/m) Σ_j P_{lj} p_j + (1/4) i m ω² Σ_{j,j′} X_{lj′} C_{j′j} x_j,

where there are two sums in the last term, over j and j′, and I have introduced a matrix C. What is this C? I define it now; it is just the result of the calculation written in a compact way:

      ⎛  2  −1   0  ⋯   0  −1 ⎞
      ⎜ −1   2  −1  ⋯   0   0 ⎟
C  =  ⎜  0  −1   2  ⋯   0   0 ⎟
      ⎜  ⋮               ⋱  ⋮ ⎟
      ⎝ −1   0   0  ⋯  −1   2 ⎠

You see: it is a matrix where all the elements of the main diagonal are equal to 2, the diagonals just above and just below are equal to −1, the upper-right corner and the lower-left corner are equal to −1, and every other entry of the matrix is zero. If you want, C is the lattice representation of (minus) the Laplacian, with the corner entries coming from the periodic boundary conditions.

(Some clarifications from the questions. The P here is the coefficient matrix P_{lj}, not the momentum: the momentum operator is p_j. The free index is l, and it is not summed: for a given l there is a single λ_l. I initially wrote some indices wrong; the correct structure is Σ_j P_{lj} p_j for the kinetic part and Σ_{j,j′} X_{lj′} C_{j′j} x_j for the potential part, with two sums, because I had forgotten one sum. And why one fourth? Because, as I said, in my notes the potential carries a 1/8 rather than a 1/2; it is just a convention, you can redefine ω to absorb it, but it is why the 1/4 appears here. Sorry for the confusion.)
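To see this matrix concretely, here is a two-line construction of my own (N is an arbitrary choice):

```python
import numpy as np

N = 6
C = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
C[0, -1] = C[-1, 0] = -1     # corner entries from the periodic boundaries
print(C)
```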
So we have this expression, and now we want to solve the equation. Both sides are linear combinations of the x_j and p_j. Clearly, for the two sides to be identical, the coefficient of each p_j on the left-hand side must equal the coefficient of the same p_j on the right-hand side, and likewise for each x_j. Why? If you don't see it immediately, take the commutator of both sides with p_k or x_k: in this way you select a single term of the sum. So you fix a particle j, look at the coefficient of its momentum, impose that it equals the corresponding coefficient on the right-hand side, and do the same for the position. This is the only way to solve the equation. What do you find? You find two conditions. From matching the coefficients of the momenta,

P_{lj} = i m λ_l X_{lj},

and from matching the coefficients of the positions, the condition I had not yet written,

λ_l P_{lj} = (1/4) i m ω² Σ_{j′=1}^{N} C_{jj′} X_{lj′}.

Now, I wrote everything in terms of indices, but we can write everything in terms of matrices and vectors, which I think is cleaner and clearer. Define, for each l, the vector X_l with components X_{lj}, and similarly the vector P_l. Then the first condition reads

P_l = i m λ_l X_l,

and the second reads

λ_l P_l = (1/4) i m ω² C X_l.

(To be clear about the indices, since there was some confusion: l labels the operator we are constructing, the same index as in [H, a_l†] = λ_l a_l†, so it is the index of λ; j labels the components of the vectors.) Now you can plug one equation into the other. And what do you find? You find

C X_l = (4 λ_l² / ω²) X_l.

You can redo the calculation to check, because there can be typos in the indices.
But this is what you find. And once you know X_l, you also know P_l, so you know all the coefficients of the linear combination defining a_l†, and you can reconstruct a_l†. So let's just try to solve this equation. What does it mean? Exactly: X_l must be an eigenvector of C with that particular eigenvalue, 4λ_l²/ω². And our matrix C is very simple. Matrices of this kind are called circulant matrices: the property they have is that all the elements along a given diagonal are the same, and the diagonals wrap around cyclically. The spectrum of circulant matrices, and also their eigenvectors, is well known, and it is very simple to show that the eigenvectors have the form of a Fourier transform: the matrix is translation invariant, in a sense periodic, so you can imagine that the solutions are Fourier modes. Indeed, you can check that vectors of the form

v = ( e^{2πi j/N}, e^{2πi j·2/N}, e^{2πi j·3/N}, …, e^{2πi j·N/N} ),

for j = 1, …, N, are eigenvectors of this matrix.

Why is this the case? OK, it's late, but let me just show why. We have to solve the eigenvalue problem C v = λ v, that is, Σ_n C_{ln} v_n = λ v_l. Now, the elements C_{ln} depend only on the difference of the indices, because they are the same along all the diagonals, so we can define C_{ln} = c_{l−n} and write

Σ_n c_{l−n} v_n = λ v_l.

Then we can just check the ansatz I suggested: write v_n = e^{2πi jn/N} and see what happens. The left-hand side becomes

Σ_n c_{l−n} e^{2πi jn/N}.

Now I shift the summation index n by l, using the cyclic symmetry:

Σ_n c_{−n} e^{2πi j(n+l)/N} = e^{2πi jl/N} Σ_n c_{−n} e^{2πi jn/N}.

The remaining sum is a number, which depends on j, and the prefactor is exactly v_l. So these vectors really are the eigenvectors, and those sums are the corresponding eigenvalues.

OK, I stop here; next time we will complete the calculation and see its consequences. (On the last step, to answer the question: no, I did not rewrite c_{l−n} as c_{n−l}; I just shifted the summation index, redefining it. Call it m if you prefer.)
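A final numerical check of my own (and a small anticipation of the step deferred to next time): the Fourier vectors are eigenvectors of C with eigenvalues 2 − 2cos(2πj/N) = 4 sin²(πj/N), so the equation C X_l = (4λ_l²/ω²) X_l gives the mode frequencies λ_j = ω |sin(πj/N)|, the usual phonon dispersion of the harmonic chain:

```python
import numpy as np

N, omega = 8, 1.0
C = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
C[0, -1] = C[-1, 0] = -1                       # periodic chain

n = np.arange(N)
for j in range(N):
    v = np.exp(2 * np.pi * 1j * j * n / N)     # Fourier-mode ansatz
    lam = 2 - 2 * np.cos(2 * np.pi * j / N)    # = 4 sin^2(pi j / N)
    assert np.allclose(C @ v, lam * v)         # eigenvector check
    print(j, omega * abs(np.sin(np.pi * j / N)))   # mode frequency lambda_j
```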