OK, can you hear me? Is it recording? Fine. OK, good morning, I am Maurizio, and in these two weeks I would like to present some interesting aspects of non-equilibrium dynamics in quantum statistical systems. As you can see, I would like to talk about quantum mechanics, and not the quantum mechanics of a single particle, but the quantum mechanics of many particles. So the first two or three lectures will be devoted to quantum mechanics and to what happens when you describe systems with many quantum-mechanical degrees of freedom, and they will cover only the static aspects of the problem. Then the following lectures will consider the non-equilibrium time evolution of these systems. In particular, today we will deal with quantum statistics, and we will start by considering the simplest quantum-mechanical model I can imagine: a system described by a single spin. I believe you already know what a quantum spin is: in the 1920s it was recognized that particles carry an intrinsic degree of freedom, which is called spin. Then, when you consider the interaction between two particles, you also have to take into account the interactions involving the spins of the particles: you can have a spin-spin interaction, you can have an interaction between the spin and the orbital angular momentum, and so on. So if you want to describe the system, you must take into account all these kinds of interactions. But today we simplify the problem: we keep only the spin degree of freedom and discard the motion of the particle. Well, the quantum spin satisfies the same algebra as the angular momentum, that is, [S_l, S_m] = i ε_{lmn} S_n, where ε_{lmn} is the Levi-Civita symbol: it is equal to +1 for even permutations of (1, 2, 3), and it is equal to -1 if you need an odd number of transpositions to obtain that particular combination of indices. For example, let us consider spin one-half. Spin one-half means that you can represent the spin operators in terms of the Pauli matrices, S^a = σ^a/2 (setting ħ = 1), where the Pauli matrices are σ^x = (0, 1; 1, 0), σ^y = (0, -i; i, 0) and σ^z = (1, 0; 0, -1). We also use the notation σ^0 for the identity, which is (1, 0; 0, 1). Then you can check that this representation satisfies the algebra. And the Levi-Civita symbol works like this: ε_{123} is equal to 1, and ε_{213} is equal to -1, because we have to perform one transposition of two indices, and so on for the other permutations of the indices. And if you consider any element with a repeated index, it is equal to 0: for example ε_{112} = 0, because two indices are equal, and so on. OK?
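A minimal numpy sketch of this check, assuming the conventions just stated (ħ = 1, S^a = σ^a/2); this is an illustration of mine, not part of the lecture, and the helper names are mine:

```python
import numpy as np

# Pauli matrices; sigma[0] is the identity, sigma[1..3] are x, y, z.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
S = [s / 2 for s in sigma[1:]]   # spin-1/2 operators, hbar = 1

def levi_civita(i, j, k):
    """epsilon_{ijk} for indices in {0, 1, 2}: returns +1, -1 or 0."""
    return (i - j) * (j - k) * (k - i) / 2

# Check [S_l, S_m] = i eps_{lmn} S_n for every pair of indices.
for l in range(3):
    for m in range(3):
        comm = S[l] @ S[m] - S[m] @ S[l]
        rhs = sum(1j * levi_civita(l, m, n) * S[n] for n in range(3))
        assert np.allclose(comm, rhs)
print("spin-1/2 algebra verified")
```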
So this is just the algebra of the spin, something that you probably already know. OK, now let us consider the Hamiltonian of a single spin in a magnetic field. The physics we want is that the spin tends to align along the direction of the magnetic field. So how can we describe such a system? If the magnetic field B points in some direction, you can write the Hamiltonian in this form: H = -B · σ, and what I mean by this is H = -Σ_l B_l σ^l, the sum running over the index l. Why am I saying that this is the Hamiltonian of the system? Because if you compute the ground state of this Hamiltonian, you find that the spin is aligned along the direction of the magnetic field, and this is exactly what we want to describe. OK? So, for example, we could consider a Hamiltonian like this: H = 3σ^z + 4σ^x, which is equal to the matrix (3, 4; 4, -3). If I ask what the ground state of this Hamiltonian is, you can diagonalize the matrix and find it, and what you get is that the ground state is |ψ_0⟩ = 1/√5 (-1, 2). OK? This is something you can check easily. Now, this is a kind of boring Hamiltonian; you could even say it is a classical Hamiltonian, because there is just one spin. So let us try to make it quantum. What can we do with this system? We can carry out a quantum measurement, a projective measurement. So, what happens if we prepare the ground state of this Hamiltonian, so we are in this particular state, and then we measure some observable, for example the spin in the z direction? What happens? Well, we know that if our device follows the von Neumann projective measurement scheme, then the state collapses onto one of the eigenstates of the observable with the appropriate probability. In particular, here we measure this observable, the spin in the z direction, σ^z. What are the eigenstates of this observable? Since we have chosen a basis where σ^z is diagonal, the eigenstates of σ^z are just (1, 0) and (0, 1). This means that after the measurement of σ^z we have two possibilities: either the state becomes (1, 0), or it becomes (0, 1). It is customary to use the notation where the state (1, 0) is denoted |↑⟩ and the state (0, 1) is denoted |↓⟩, always with respect to the basis of σ^z. OK, so now: what is the probability that the state collapses onto each of these states? It is just given by the expectation value, in the initial state, of the projector onto the eigenstate corresponding to that eigenvalue. So the probability of up is p_↑ = ⟨ψ_0|P_↑|ψ_0⟩, and the probability of down is p_↓ = ⟨ψ_0|P_↓|ψ_0⟩. Here P_↑ is the projector onto the up state, P_↑ = |↑⟩⟨↑|, which, if you want to represent it as a 2x2 matrix, is just (1, 0; 0, 0); and then we have the projector onto spin down, P_↓ = |↓⟩⟨↓|, which is the matrix (0, 0; 0, 1). Is it OK so far? So, just to recall some properties of projectors: the square of a projector is equal to the projector itself, P² = P. You already know this, no?
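To make the example concrete, here is a short numpy sketch of mine that diagonalizes H = 3σ^z + 4σ^x and computes the two collapse probabilities from the projectors; the variable names are mine:

```python
import numpy as np

sx = np.array([[0, 1.0], [1, 0]])
sz = np.array([[1.0, 0], [0, -1.0]])

H = 3 * sz + 4 * sx                  # the matrix [[3, 4], [4, -3]]
evals, evecs = np.linalg.eigh(H)     # eigenvalues sorted ascending: -5, +5
psi0 = evecs[:, 0]                   # ground state, proportional to (-1, 2)/sqrt 5
print(evals, psi0)

# Projectors onto the sigma_z eigenstates |up> = (1, 0) and |down> = (0, 1).
P_up = np.diag([1.0, 0.0])
P_down = np.diag([0.0, 1.0])

# Born rule: p = <psi0| P |psi0>.
print(psi0 @ P_up @ psi0, psi0 @ P_down @ psi0)   # 0.2 and 0.8
```

The two probabilities, 1/5 and 4/5, are exactly |⟨↑|ψ_0⟩|² and |⟨↓|ψ_0⟩|².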
I guess so. Maybe, OK, I don't know. This is a more complicated way to write what you have already seen: the probability is the absolute value squared of the scalar product between the state and the eigenstate corresponding to that particular outcome, onto which the state collapses, and the same here. OK, so this is what happens after the measurement, and if we know the probabilities of the outcomes of the measurement, then we know everything. We can construct, for example, the mean value of the observable that is measured, and you immediately find that this is equal to the expectation value of that observable. And since we know the entire probability distribution, we can also compute other quantities, like the variance of the distribution and so on. So this is just to say that if you know the expectation values of all the projectors, you know everything: you can express everything in terms of these expectation values. OK, let us now do the following. So far, I guess you already know all of this, but now let us assume that I want to make another measurement after this one. For example, I measure the Hamiltonian. You can still use the same formalism: if the system was in the spin-up state, I compute the expectation value, in the spin-up state, of the projector onto the ground state of the Hamiltonian, or onto the other eigenstate, and that gives the probability; fine, nothing changes. But let us now assume that I carry out the first measurement, but I do not tell you the outcome. So I did it, I know the outcome, so I know the state, but you don't. And still, even though you don't know the outcome of the measurement, I ask you to tell me the probability that the state will be in the ground state of the Hamiltonian after the second measurement, OK? So how do you do this? Clearly you must consider both possibilities, spin up and spin down, because you don't know in which state the system is. So you would say: the system was in the up state with probability p_↑ = ⟨ψ_0|P_↑|ψ_0⟩, and it was in the down state with probability p_↓ = ⟨ψ_0|P_↓|ψ_0⟩. Now, depending on whether you are in spin up or spin down, you have different probabilities for the ground state. This means that if you want to compute the probability that after the second measurement the system is in the ground state, this will be given by the probability that the state was up, times the probability that it collapses onto the ground state given that it was prepared in the up state, so here you put the expectation value, in the spin-up state, of the projector onto the ground state; plus the other possibility, because the state could have been in the down state: the probability that it was down, times the probability that after the collapse it ends up in the ground state. So p(GS) = p_↑ ⟨↑|P_GS|↑⟩ + p_↓ ⟨↓|P_GS|↓⟩. If I ask you for this probability, this is what you have to do, fine? And you would have to do the same kind of calculation if, instead of the Hamiltonian, I measured some other observable.
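A small sketch of mine of this conditional-probability bookkeeping, assuming the same example Hamiltonian as above:

```python
import numpy as np

sx = np.array([[0, 1.0], [1, 0]])
sz = np.array([[1.0, 0], [0, -1.0]])
H = 3 * sz + 4 * sx

psi0 = np.linalg.eigh(H)[1][:, 0]      # ground state of H
P_gs = np.outer(psi0, psi0)            # projector onto the ground state

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# First measurement (sigma_z), outcome not revealed to us:
p_up, p_down = psi0[0] ** 2, psi0[1] ** 2          # 0.2 and 0.8

# Probability of finding the ground state in the second measurement:
p_gs = p_up * (up @ P_gs @ up) + p_down * (down @ P_gs @ down)
print(p_gs)                            # 0.2*0.2 + 0.8*0.8 = 0.68
```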
So you see, this is less elegant than before, because you have to take into account all the conditional probabilities, the fact that before the second measurement the system was in the up state or the down state or whatever. And there is something which is not very nice: we would like to use the same formalism as before, that is, to express everything in terms of expectation values. In particular, you could wonder whether you can express this kind of probability as the expectation value, in some state, of the projector onto the ground state. You could wonder whether this is possible, because we have seen before that we can express everything in terms of expectation values in the state, and now we would like to do the same and express this probability in that form. The problem is that this is impossible: you cannot do it, and I invite you to prove it, at least for this simple model. You cannot express this probability in a form where the state is independent of the observable that you measure. So this shows that our description was not complete: we started by considering only quantum states of this form, pure states, but now we see that if we consider a situation where we do not have complete knowledge of the system, apparently we cannot describe everything, at least in a simple way, in terms of the quantum state. It was proposed by von Neumann to generalize the formalism and to introduce an operator, called the density operator, that allows us to deal with this kind of situation. The idea is very simple. We start again from here: we know that, given a pure state, we can obtain the probability of the outcome of a measurement by computing an expectation value. But the expectation value of this projector can actually be written as a trace: ⟨ψ_0|P_↑|ψ_0⟩ = Tr(|ψ_0⟩⟨ψ_0| P_↑). Is it clear to you that you can rewrite the expression in this form? OK. Regarding the earlier point: let us assume that you would like to describe those conditional probabilities using the same formalism, so that there is a state |ψ̃⟩ whose expectation values give the same probabilities. The problem is that there is no such state: you cannot find any quantum state that satisfies this constraint and gives you these probabilities; it is just a hypothetical state. Yes, we will see this in a moment, but in this simple case you can actually prove it; I think it is sufficient to consider two different observables. You measure two different observables and you realize that it is impossible to define such a state independently of the observable that you measure. We will see in a moment why this is true. Is it clear? OK. What I am suggesting now is just to rewrite these expectation values in trace form, so I have done nothing here; I have just rewritten the expression, and now I want to call this projector |ψ_0⟩⟨ψ_0| a density operator. You see that, just as everything can be described in terms of the state, everything can be described in terms of the density operator: we can compute all the expectation values using this operator instead of the state. So I really did nothing here.
But now let us see what happens when we perform a measurement. As before, we measure the spin in the z direction, and we want to express the probability of being in the ground state after the second measurement. So let us write again what we wrote before. The probability of being in the ground state is p(GS) = p_↑ ⟨↑|P_GS|↑⟩ + p_↓ ⟨↓|P_GS|↓⟩ = ⟨ψ_0|P_↑|ψ_0⟩ ⟨↑|P_GS|↑⟩ + ⟨ψ_0|P_↓|ψ_0⟩ ⟨↓|P_GS|↓⟩: the probability of being up, which is the expectation value in the initial state of the projector onto the up state, times the probability that from the up state you collapse onto the ground state, plus the analogous term for the down state. We found this before. But now we can rewrite this expression as a trace: p(GS) = Tr[(⟨ψ_0|P_↑|ψ_0⟩ |↑⟩⟨↑| + ⟨ψ_0|P_↓|ψ_0⟩ |↓⟩⟨↓|) P_GS]. Here I just used the cyclic property of the trace, Tr(AB) = Tr(BA), which you can check immediately by writing the expression with indices. You see that this expression is very similar to what we had before, because we have expressed everything in terms of the projector onto the eigenstate onto which the state collapses in the end, the projector onto the ground state. But now the difference is that instead of the projector |ψ_0⟩⟨ψ_0|, we have this new object. So we can define the density operator after the first measurement, that is, after measuring σ^z, to be exactly this term: ρ' = ⟨ψ_0|P_↑|ψ_0⟩ |↑⟩⟨↑| + ⟨ψ_0|P_↓|ψ_0⟩ |↓⟩⟨↓|. We started in the ground state, and after the measurement we end up in a mixed state described by this density operator. This is just algebra, a simple calculation: we found that we can describe the state by replacing the original projector with this new operator. And you can check very easily that this can also be written in the form ρ' = P_↑ |ψ_0⟩⟨ψ_0| P_↑ + P_↓ |ψ_0⟩⟨ψ_0| P_↓. Why is this the case? Because when you write out this expression, you remember that the projector onto spin up is nothing but |↑⟩⟨↑|, and the same for spin down, and so you realize that you can rewrite the expression in this form.
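The same bookkeeping in the density-operator language, as a sketch of mine: it checks that ρ' = P_↑ρP_↑ + P_↓ρP_↓ reproduces the conditional probability through Tr(ρ'P_GS), and that ρ' is no longer a projector:

```python
import numpy as np

sx = np.array([[0, 1.0], [1, 0]])
sz = np.array([[1.0, 0], [0, -1.0]])
H = 3 * sz + 4 * sx
psi0 = np.linalg.eigh(H)[1][:, 0]

rho0 = np.outer(psi0, psi0)            # pure state: rho = |psi0><psi0|
P_up, P_down = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

# Density operator after measuring sigma_z without reading the outcome.
rho1 = P_up @ rho0 @ P_up + P_down @ rho0 @ P_down

P_gs = np.outer(psi0, psi0)
print(np.trace(rho1 @ P_gs))           # 0.68, the conditional result again
print(np.allclose(rho1 @ rho1, rho1))  # False: rho1 is not a projector
```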
OK, all I did was to show you that if we use this new formalism, where we write the expectation values in terms of the density operator defined in this way, then we are also able to capture the situation where you lack some knowledge about the system. But there are some differences between the original density operator and the density operator after the measurement. Indeed, in the first case we started from a usual quantum state, a pure state, so the density operator is just a projector, the projector onto that state, and you know that the eigenvalues of a projector are either equal to 1 or equal to 0. OK? If instead we consider the operator after the measurement, we easily realize that it is not a projector anymore. And it is simple to check: we can compute the square of this density operator, and we find that it is not equal to itself, so it is not a projector. Nevertheless, these two operators share some properties. First of all, the trace of both is equal to 1. Let us consider the two possibilities. We have the density matrix of the original state, ρ_0 = |ψ_0⟩⟨ψ_0|, which you know very well, and we have the density matrix after the measurement of σ^z, which has the expression ρ' = P_↑ ρ_0 P_↑ + P_↓ ρ_0 P_↓. The first density matrix has trace equal to 1, right? Because Tr ρ_0 = Tr(|ψ_0⟩⟨ψ_0|) = ⟨ψ_0|ψ_0⟩, and because the state is normalized, this is equal to 1. Now let us compute the trace of the other object: Tr ρ' = Tr(P_↑ ρ_0 P_↑) + Tr(P_↓ ρ_0 P_↓). Using the cyclic property of the trace, this is equal to Tr(ρ_0 P_↑²) + Tr(ρ_0 P_↓²). But these are projectors, so P_↑² = P_↑ and P_↓² = P_↓, and we get Tr ρ' = Tr[ρ_0 (P_↑ + P_↓)]. But |↑⟩ and |↓⟩ are the eigenstates of an observable, so they form a complete basis of the space, and when you sum all the projectors onto the eigenstates of a generic observable, you find the identity. So this is equal to Tr ρ_0, which is equal to 1. OK? So in this example we have seen that both the density operator as originally defined and the density operator after the measurement satisfy this property: Tr ρ = 1. Then there are also other properties. For example, it is easy to check that the eigenvalues of the density operator always lie between 0 and 1. How can you do this? In this simple case, you can consider the expectation value in a generic state: you take a generic state |ψ̃⟩ and consider ⟨ψ̃|ρ|ψ̃⟩. If you show that this expectation value is always between 0 and 1, then clearly there cannot be eigenstates of ρ with eigenvalue smaller than 0 or larger than 1. And you can check that this is indeed the case; I leave the calculation to you if you want, but it is easy to prove. OK, but this was just an example, so now I state the general result so that you understand the formalism. Sorry, you want me to write it down?
OK, I am saying that the eigenvalues of the density operator always lie between 0 and 1. Let us write the eigenvalues as λ; in both cases, every time you have a density operator, you have these properties. No, there is no ψ there; everything is ρ. Sorry, yes, that one is ρ too. And the interval is closed. OK. So when you consider the eigenvalues of this object, you find that they lie in the interval [0, 1], and you can prove it by considering a generic state: for a generic |ψ̃⟩, you prove that ⟨ψ̃|ρ|ψ̃⟩ is always between 0 and 1. You should try to prove it; it is not difficult. Sorry, a factor squared somewhere? No, you find exactly this: that this object is always between 0 and 1. So, what is the question? You are asking whether the endpoints are included: yes, λ lies in the closed interval [0, 1]. They are included because, for example, in the particular case of a projector the eigenvalues are equal either to 0 or to 1, so you have those cases; but in general they are just between 0 and 1. And the trace of the density operator is equal to 1. These are general properties of the density operator, which you can prove. OK. But apart from our specific example, what I mean is this: let us assume that you do not know the state with certainty, and I tell you that there is a given probability p_i that the system is in the state |ψ_i⟩. You only know this; you start from this. Then how can you describe this kind of system? You can describe it by introducing a density operator of the form ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|: the sum, over all the states, of the probability of being in that particular state times the projector onto the state. This is the claim. And indeed you can prove that if you are interested in the probability of the outcome of a given measurement, the probability that a measurement of the observable O gives the outcome λ can be written in this form: p(λ) = Tr(ρ P_λ), the trace of the density operator times the projector onto the eigenspace corresponding to the eigenvalue λ of the observable O. I am saying: for a given observable O that you measure, this is the probability of the outcome. Since we can compute the probabilities, we know everything about the system, so we can describe our system using this kind of density operator. So we have generalized the notion of pure states. If you have a pure state |ψ⟩, then the corresponding density operator is the projector onto that state, ρ = |ψ⟩⟨ψ|. But we generalize this to other states, which are called mixed states, and which can be written in this other form, ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|. So you define a density operator that describes the mixed state and differs from the pure-state one in that it is not a projector: in the mixed case the eigenvalues lie between 0 and 1, while for a pure state the eigenvalues are exactly 0 or 1.
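A generic sketch of mine of the mixed-state recipe ρ = Σ_i p_i|ψ_i⟩⟨ψ_i| and the Born rule p(λ) = Tr(ρP_λ), together with the two general properties just stated; the helper names are hypothetical:

```python
import numpy as np

def density_operator(probs, states):
    """Mixed state: rho = sum_i p_i |psi_i><psi_i|."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in zip(probs, states))

def outcome_probability(rho, eigvec):
    """Born rule for a non-degenerate eigenvalue: p = Tr(rho P_lambda)."""
    return np.real(np.trace(rho @ np.outer(eigvec, eigvec.conj())))

# Example: spin up with probability 0.2, spin down with probability 0.8.
rho = density_operator([0.2, 0.8],
                       [np.array([1.0, 0.0]), np.array([0.0, 1.0])])

# General properties: unit trace, eigenvalues in [0, 1].
lam = np.linalg.eigvalsh(rho)
print(np.isclose(np.trace(rho), 1.0), np.all((lam >= 0) & (lam <= 1)))

# Probability that measuring sigma_x gives +1 (eigenvector (1, 1)/sqrt 2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(outcome_probability(rho, plus))   # 0.5 for this rho
```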
Yes, and for the pure state the density operator is defined as the projector onto the state. So what I mean is that you can use the same formalism in both cases: to describe pure quantum states, and to describe the more complicated situation where you lack some knowledge of the state. There is a unique formalism that covers both situations. OK. So I introduced the density operator as a way to describe states where you lack some knowledge; in the example, you lacked knowledge of the outcome of the measurement. But there are other ways to lack knowledge. For example, let us assume that we now have a more complicated system than before: instead of just one spin, a system of two spins. For example, let us consider this state: |ψ_0⟩ = -1/3 |↑↑⟩ + 2/3 |↓↑⟩ + 2/3 |↓↓⟩. First of all, what does this notation ↑↑, ↑↓ mean? It means that, in the σ^z basis, the first spin is in the up direction, which is equivalent to the vector (1, 0), and the second spin is also in the up direction, (1, 0), and so on; maybe I can use a different color for that. This is the meaning of the notation. (Yes, that coefficient is a 2; as you see, I have some issues writing. So: 2/3, 2/3, and 1/3. You can check that this state is normalized: 4 plus 4 plus 1 equals 9, so the squared coefficients sum to one, and this is a proper quantum state.) The question is this: let us assume that we can carry out measurements only on the first spin. You have a device, and there are two spins, but as a matter of fact you cannot access the information on the second one. So we can measure only observables on the first spin. What do I mean? Consider observables that act non-trivially only on the first spin. Such an observable can be, for example, the first spin in the z direction, so you can measure σ_1^z, or you can measure σ_1^x, or σ_1^y. But you cannot access the other spin, so you cannot measure σ_2^z, for example, nor σ_2^x or σ_2^y; and you cannot even access observables like σ_1^x σ_2^y, and so on. So you only have information about the first spin; you can only measure these quantities. Because we lack some information, you should expect that we can describe the system using a density matrix. And the idea is this: since, as a matter of fact, we do not care what the state of the second spin is, because we cannot access it, the idea is to describe the reduced system, just the state of the first spin, using a density matrix. Can we do this? That is the question. And the answer is yes; otherwise we would not have introduced the density operator formalism. So, how can we construct this density matrix?
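First, the state itself as a concrete object: a sketch of mine building the two-spin state with tensor products, assuming the convention that the first spin is the left factor in np.kron:

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# |psi0> = -1/3 |up,up> + 2/3 |down,up> + 2/3 |down,down>
psi0 = (-1/3) * np.kron(up, up) + (2/3) * np.kron(down, up) \
     + (2/3) * np.kron(down, down)

print(np.isclose(psi0 @ psi0, 1.0))   # normalized: 1/9 + 4/9 + 4/9 = 1

# An observable on the first spin only: sigma_1^z = sigma_z tensor identity.
sz = np.array([[1.0, 0], [0, -1.0]])
s1z = np.kron(sz, np.eye(2))
print(psi0 @ s1z @ psi0)              # expectation value <sigma_1^z> = -7/9
```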
So we would like to define an operator ρ_1 with the property that every time you compute the expectation value of a generic operator O_1 acting only on the first spin, you find the correct expectation value obtained from the original density matrix: Tr_1(ρ_1 O_1) = Tr_{12}[ρ (O_1 ⊗ I_2)]. Here I am putting a tensor product with the identity. What do I mean by this? Because we are focusing on operators acting non-trivially only on one spin, they act like the identity on the other spin: they do nothing on the other part of the system. For example, σ_1^x really stands for σ^x ⊗ I_2; this is an example of such an operator. Is this clear or not? OK, maybe it is because I omitted something here, my fault. When you consider operators acting on two spins, they can always be written as sums of products of this form: a Pauli matrix or the identity acting on one spin, times a Pauli matrix or the identity acting on the other spin. So when I say that I can measure all the observables on the first spin, I mean that I am considering only the observables that act like the identity on the other part. So I take this kind of observables, and I cannot measure the others. Clearly, our space is the space of two spins, so our operators always act on both spins; but what I mean is that I would like to describe the same system while ignoring the fact that there is a second spin. I want to consider only operators acting on the first spin, and I want to construct the density matrix of the first spin: this is what I would like to find, the density operator that reproduces the correct expectation values. How can I do this? Well, first of all I have to choose a basis for the space of the two spins. When you have two spaces, like two spins, you can build the basis as follows. Choose a basis for the first spin: let us call its states |λ⟩_1, where λ can be either up or down; this is a complete basis for the first spin. Then consider a basis for the second spin, which is essentially the same, but now the up and down states refer to the second spin: |↑⟩_2, |↓⟩_2. This is the basis. Then, if you want a basis of the entire space, the space of spins one and two, you take the tensor product of the two bases. This means you have all these possibilities: up on the first spin and up on the second; up on the first, down on the second; down on the first, up on the second; down on the first, down on the second. And I use, as I did over there, the compact notation |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩: whenever I write |↑↑⟩, I mean the tensor product |↑⟩_1 ⊗ |↑⟩_2. This is a complete basis. So, independently of the particular representation, when we want a basis of the two spins, or of a bigger space, we just have to take the tensor product of the basis elements of the single spaces.
So this means that when you write the trace, you have to sum the expectation values over all the elements of the basis, because we know that the trace of some observable O is Tr O = Σ_i ⟨ψ_i|O|ψ_i⟩, where the |ψ_i⟩ are the normalized elements of a complete basis of the space. This is the definition of the trace. Because we know this, we can apply the definition to our equation and see what happens. On the left-hand side the operators are defined on the first space, so the trace is over the space of the first spin and we have to use the basis of the first spin; on the right-hand side the trace is over both spins, so we have to use the basis of the full space. So what do we find? We have Tr_1(ρ_1 O_1) = Σ_i ⟨i_1|ρ_1 O_1|i_1⟩, where |i_1⟩ runs over a complete basis of the single spin, the first spin, which is given by up and down. Did I mean |ψ_1⟩, |ψ_n⟩? Yes, in this case what I mean is that |i_1⟩ labels a complete basis in the space of the single spin. I just parametrize the basis by an integer i, which can be 0 or 1, or up and down, whatever you want: it is just a label for the elements of the basis, so this i does not mean anything by itself. For example, |0⟩_1 is equivalent to |↑⟩_1, and |1⟩_1 is the same as writing |↓⟩_1. If this is confusing, I can just write up and down; I use labels only because I would like to write something general. So you choose your complete basis of the first spin, which is |0⟩_1, |1⟩_1, and if you had a bigger system you could have more elements; in this case you have just two. And then you write this expression. Any problems with this? So this is the meaning of the expression for the trace over the first spin. Now let us look at the second expression. We have Tr_{12}[ρ (O_1 ⊗ I_2)]: the trace, over both spins, of our density matrix times our observable, which acts non-trivially only on the first spin. This is equal to a sum where we have to consider the basis of both the first and the second spin: Σ_{i,j} ⟨i_1 j_2| ρ (O_1 ⊗ I_2) |i_1 j_2⟩, where i and j run over all the elements of the bases of the first and second spins. This is just the definition of the trace. Now, as you see, this operator acts non-trivially only on the state |i_1⟩. This means that we can move the operator outside of the expectation value taken with respect to the state of the second spin. What I mean is that you can write this in the form Σ_i ⟨i_1| (Σ_j ⟨j_2|ρ|j_2⟩) O_1 |i_1⟩. Sorry, I had forgotten the sum over j: the object in the middle is Σ_j ⟨j_2|ρ|j_2⟩. The labels 1 and 2 mean that a state refers to the first or to the second spin. Yeah, the identity acts only on the second spin.
We are considering the operator that acts like O_1 on the first spin and like the identity on the other spin. OK? And because this operator acts only on the first spin, it does not change the second element of the basis, the second vector. This is why we can write the expression in this way. OK? Problems? Now we see that we have accomplished our goal, no? Because we wanted to find this operator ρ_1 in such a way that the trace of ρ_1 times the observable equals the full expectation value, and this is what we obtained, if we identify ρ_1 with the object Σ_j ⟨j_2|ρ|j_2⟩; this holds for every observable O_1. So what is this object, and what do we call it? We call it a partial trace. It means that the density matrix of the first spin, which we wanted to define, is given by the trace over the second spin of the original density matrix: ρ_1 = Tr_2 ρ. And by this partial trace I mean exactly the expression above: I construct a complete basis of the second space and I compute these expectation values. Before questions: how do you do this in practice? Because the partial trace can be a bit confusing at first, let us see concretely what I mean. The density operator is an operator on the space of two spins, OK? As such, it can be written in terms of the Pauli matrices of the first and second spins. This means you can expand the density operator in a form like ρ = α σ_1^x σ_2^x + β σ_1^x σ_2^y + ..., including terms with the identities I_1 and I_2 on either spin. Now, what do I mean by computing a partial trace? Let us compute, for example, the partial trace over the second spin of σ_1^x σ_2^x. It means that I have to compute the sum over j, inserting a complete basis of the second space: Σ_j ⟨j_2| σ_1^x σ_2^x |j_2⟩. Because σ_1^x acts on a different space, I can move it outside of the expectation value, so this equals σ_1^x Σ_j ⟨j_2| σ_2^x |j_2⟩. Then you realize that this sum is nothing but the trace over the second space: Tr_2(σ_1^x σ_2^x) = σ_1^x Tr_2(σ_2^x). And we know that the trace of a spin operator is always equal to zero. Why is this so? It is because of the algebra. As a matter of fact, we know that the spins satisfy [S_i, S_j] = i ε_{ijk} S_k, as I wrote before. If you take the trace of both sides, you realize that the trace of a commutator is equal to zero, so the trace of the spin is equal to zero. That is correct: what I mean is that we know this even without writing a representation in terms of Pauli matrices. So, for example, the trace over the second spin of σ_1^x σ_2^x is equal to zero. This is what we do in practice. So, which are the observables with non-zero partial trace? Only those where the part of the operator acting on the second spin is proportional to the identity. So you can consider any observable of this form: σ_1^a I_2. Then, when you take the partial trace over the second spin, you have to compute the trace of the identity.
And so you find, in this particular case, twice the operator: Tr_2(σ_1^y I_2) = 2 σ_1^y, say. So this is what I mean by partial trace, in a practical way. Is it clear? What is unclear? You did not get the partial trace. OK, let us see. You mean the second line? Oh, OK, no, the second line is fine. We can see it explicitly, no problem. Let us assume we want to compute the partial trace of σ_1^x I_2, which is another observable. We have to compute again the sum over j: Σ_j ⟨j_2| σ_1^x I_2 |j_2⟩. The operator σ_1^x acts like the identity on the second spin; as before, it acts on a different space, so we can move it outside: the basis vectors of the second spin are completely transparent to this operator, which acts on a different space. So this is equal to σ_1^x Σ_j ⟨j_2| I_2 |j_2⟩. The matrix element of the identity is just the norm of the basis element |j_2⟩, so the sum just counts the number of elements in the basis. In particular, because we have spin one-half here, we have two elements, and so this is equal to 2 σ_1^x. OK. And these are the only operators with a non-zero partial trace over the second spin: the ones with the identity there. All the others, where you put some spin operator on the second factor, always give zero trace. This is the practical side. Is it OK now, what I mean by partial trace? More or less? I hope so. OK. So we discovered that we can describe our subsystem (one spin is a subsystem of the two spins) by means of an effective density operator, which acts only on the first spin, on the subsystem. Here there are no more degrees of freedom related to the rest of the system: we just say we don't know what happens outside, because we can't measure what is outside, so we only care about our subsystem, and we describe our subsystem using a density matrix. This is called the reduced density matrix, because it is reduced to the subsystem. Very interesting. So the general rule is this. Assume that the entire space can be split into two parts, part A and part B. For example, assume that we have many spins, and you say: I can only measure these three spins, not the other two. Then those three spins become my subsystem A, and the rest is what I call B, because I am interested in A. Then I say: I don't care about what is outside, and I can describe my state using the reduced density matrix, defined as the trace over the rest of the system of the original density matrix: ρ_A = Tr_B ρ. You can check that this satisfies all the properties of a density matrix. It is simple to verify that the trace of this object is equal to one: if you take the trace over A of ρ_A, it is the trace over A of the trace over B of ρ, but this is nothing but the trace over the entire space of ρ, which was equal to one originally, so it is also equal to one now. So the reduced density matrix has unit trace. And you can still easily show that it has eigenvalues between zero and one. OK. And now, going back to our original problem, just to do it explicitly so that we understand whether we understood something or not, let us do the calculation in that case. Our state was |ψ_0⟩ = -1/3 |↑↑⟩ + 2/3 |↓↑⟩ + 2/3 |↓↓⟩.
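Before doing the example by hand, here is one way to implement the partial trace in numpy, a sketch of mine assuming the same np.kron ordering as above; it reproduces the two cases just discussed:

```python
import numpy as np

def trace_out_second(rho):
    """Partial trace over the second spin of a two-spin (4x4) operator.

    Reshaping gives indices (i1, i2, j1, j2); contracting i2 with j2
    implements sum_j <j_2| rho |j_2>.
    """
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2)

# Tr_2(sigma_1^x tensor I_2) = sigma^x * Tr(I) = 2 sigma^x
print(trace_out_second(np.kron(sx, I2)).real)   # [[0, 2], [2, 0]]

# Tr_2(sigma_1^x tensor sigma_2^x) = sigma^x * Tr(sigma^x) = 0
print(trace_out_second(np.kron(sx, sx)).real)   # zero matrix
```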
So now the question is: what is the reduced density matrix of the first spin? First of all, I have to construct the density matrix associated with this state. This is a pure state, so the density matrix is the projector onto the state. So we start from ρ = |ψ_0⟩⟨ψ_0| = (-1/3 |↑↑⟩ + 2/3 |↓↑⟩ + 2/3 |↓↓⟩)(-1/3 ⟨↑↑| + 2/3 ⟨↓↑| + 2/3 ⟨↓↓|). Yes, |ψ_0⟩ is a pure state; this is why I can write the density matrix as the projector onto that state. You have problems with the notation? This is not a mixed state; it is a standard pure state. A pure state is just a quantum superposition of different states: each one of these, |↑↑⟩ and so on, is a basis state, and this is a superposition of them, which is itself a pure state, a quantum-mechanical state. You can check that it is normalized. The superposition means that the spins can be found in the configurations up-up, down-up or down-down with given probabilities, but this is a quantum superposition, not a classical mixture: it is described by this quantum state. OK. Because it is a pure state, the density matrix is the projector onto it, which has the explicit form above. And now we want to compute the trace over the second spin. What do we do? Tracing out means that we have to compute ρ_1 = ⟨↑_2|ρ|↑_2⟩ + ⟨↓_2|ρ|↓_2⟩. Now, here we have nine terms to consider. And we know that when we take the scalar product of ⟨↑_2| with a term where the second spin is down, we find zero: they are orthogonal. The only way to get something non-zero is when the second spin is also up. So what does the first piece, ⟨↑_2|ρ|↑_2⟩, give? We get a contribution from |↑↑⟩, because the second spin is up; we get a contribution from |↓↑⟩; and we get zero contribution from |↓↓⟩. The same on the bra side. So in practice, what do we find? For the first term we find (-1/3)(-1/3) = 1/9 times something like ⟨↑_2|↑↑⟩⟨↑↑|↑_2⟩, and when we take the scalar product ⟨↑_2|↑↑⟩, this is equal to one times the ket up of the first spin. So for this particular term we get 1/9 |↑_1⟩⟨↑_1|. Then we have to consider also the cross terms. For example, take the term where the ket is |↑↑⟩ and the bra is ⟨↓↑|: we compute ⟨↑_2|↑↑⟩, which again is one times the ket |↑_1⟩, and on the other side ⟨↓↑|↑_2⟩ gives ⟨↓_1|, so this term is proportional to |↑_1⟩⟨↓_1|. This is what we have to do for all the terms. Is it boring? Yes. In this particular case it is merely boring, but imagine that instead of just two spins I have 10,000 spins, and I am interested only in the first spin. Then it can be really difficult to carry out this calculation.
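For two spins, though, the bookkeeping is easy to automate. A sketch of mine, reusing the reshape trick from above, that computes Tr_2 |ψ_0⟩⟨ψ_0| and checks the result quoted next in the lecture:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi0 = (-1/3) * np.kron(up, up) + (2/3) * np.kron(down, up) \
     + (2/3) * np.kron(down, down)

rho = np.outer(psi0, psi0)        # projector onto the pure two-spin state

# Partial trace over the second spin, as in the sketch above.
rho1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

sx = np.array([[0, 1.0], [1, 0]])
sz = np.array([[1.0, 0], [0, -1.0]])
expected = (np.eye(2) - (4/9) * sx - (7/9) * sz) / 2

print(np.allclose(rho1, expected))   # True
print(rho1)                          # [[1/9, -2/9], [-2/9, 8/9]]
```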
So maybe there is a better way to do this, at least when the subsystem is small with respect to the total system. We can do the full calculation if you want, and I can also write the result, because I wrote it down here. If you do this calculation, and doing it once is useful, you find the final result: ρ_1 = (I - 4/9 σ_1^x - 7/9 σ_1^z)/2. This is the reduced density matrix of the first spin; it is just what you get when you carry out this calculation. But now I show you another way to compute the same thing, which in some situations can be convenient. Not always, but in some cases. In general, when we consider quantum many-body systems, computing the reduced density matrix of a subsystem is a hard problem, and this is why I am showing you different approaches. In some simple cases you may have access not to the entire density matrix but only to some of its properties; for example, you will see that you may have access to some eigenvalues of the density matrix, while obtaining everything is more complicated. So it is in general a big problem. But handling the entire state can be even harder in many cases, and it is often useless, because it contains information you do not need. So we should try to remove the information we do not need. OK, so how can we obtain this result in a different way? (In this case the two ways are equivalent.) The alternative way is to use the fact that the density matrix is completely determined by the expectation values of the observables. Before, I showed you that given the density matrix you can compute all the expectation values; you can actually do the opposite: given all the expectation values, you can reconstruct the density matrix. Let me show you this alternative way, focusing on spin one-half chains, so all you have is spin one-half Pauli matrices, many many spins, and you are interested in the density matrix of some subsystem of the spins. So the situation is this: you have many spins, labelled from 1 to N, say, and you describe your system using some density matrix. You are interested only in a part of your system; we can order the spins in such a way that we start counting from the spins inside our subsystem, so that the subsystem A consists of the spins from 1 to n (small n), and the rest is B. So we describe the entire system by ρ, and we are interested in the reduced density matrix of A. It is simple to prove that this can be written in the form ρ_A = (1/2^n) Σ_{α_1,...,α_n} ⟨σ_1^{α_1} ⋯ σ_n^{α_n}⟩ σ_1^{α_1} ⋯ σ_n^{α_n}: a sum over all the operators that you can construct with the spins of A, which are strings of Pauli matrices, weighted by their expectation values. This is correct, I guess. What is σ^0? I introduced this object at the beginning of the lecture: I defined σ^0 as the identity, and σ^1, σ^2, σ^3 are the Pauli matrices. So here I am summing over all the possible operators that you can construct in the subsystem. Oh, you are perfectly right, thanks: each α runs from zero to three, not to four, because it starts from zero.
So you have σ^0, the identity, and σ^3, which is σ^z. You sum over all the operators that you can construct, and what you see here is that the reduced density matrix is a linear combination of all these operators. You should expect this, because we are considering a complete basis of operators in the space. But we also know the coefficients: the coefficients are the expectation values of these operators in the original density matrix. Why is this the case? Well, you can prove it immediately. Let us assume that you want to compute the expectation value of some observable which is a string of Pauli matrices; let us do it with a particular example: σ_1^x σ_2^y, and let us assume that n = 2, just as an example. Then you plug the expansion inside the trace: Tr_A(ρ_A σ_1^x σ_2^y) = (1/2^n) Σ_O Tr(ρ O) Tr_A(O σ_1^x σ_2^y), where, in this short notation, the sum runs over all the strings O of Pauli matrices. We have to compute this. Now, when you multiply a generic string of Pauli matrices by σ_1^x σ_2^y, what do you find? Generally, you find something which still has some Pauli matrices inside. For example, suppose O is something like σ_1^x σ_2^z; this is a possible operator, no? When we multiply σ_2^z by σ_2^y, we again get a string of Pauli matrices. But when we take the trace of a string containing Pauli matrices, we always find zero, because it is sufficient to have a single Pauli matrix in the string to make the trace vanish, by what I showed you before. Maybe this is not obvious, but if you consider the trace of a product of an operator acting only on spin one times an operator acting only on spin two, this is equal to the product of the traces. So it is enough to have a Pauli matrix somewhere to make the trace of that particular term equal to zero. Hence the only possibility to get a non-zero contribution is that O is exactly σ_1^x σ_2^y, because the square of a Pauli matrix is the identity: (σ^a)² = I always. Yes? Yeah, that expectation value Tr(ρ O) is over the entire space, and there, to be precise, one should also write the other degrees of freedom, an identity over the rest of the space; this is completely equivalent to taking the trace over the subsystem A with ρ_A and dropping that identity. No, no, sorry, sorry: that was a mistake of mine. The outer trace here is over A, and this is ρ_A, not the ρ of the entire system, because otherwise I have to change the normalization. If I write ρ here, then I have to put all the identities on the rest, and then I have to take into account the traces of those identities, so the normalization becomes 2^N (capital N) instead of 2^n. OK, so this is the expression I wanted to use, with ρ_A. Yes, yes, the spins of B start from n+1, and altogether they run from 1 to N; maybe I confused it with the environment for a moment.
You can actually forget about the space B and just consider the subsystem with its given density matrix, OK? So what I mean is that when you consider this kind of trace, because every element of the operator basis is orthogonal to the others with respect to the trace, we are selecting just the one term equal to the observable that we want to compute, and when we select it, we get its expectation value. So why can this be useful? Well, because if your subsystem A is small, for example one spin, and the entire system has, say, ten spins, so the Hilbert space dimension is 2^10, about one thousand, then instead of handling a density matrix with of the order of a million entries, you can just compute three expectation values: ⟨σ_1^x⟩, ⟨σ_1^y⟩ and ⟨σ_1^z⟩. Yes, the expression is correct as written, with ρ_A. If you want to write what you are thinking, you could rewrite it with the full density matrix: ρ = (1/2^N) Σ_{α_1,...,α_N} Tr[ρ σ_1^{α_1} ⋯ σ_N^{α_N}] σ_1^{α_1} ⋯ σ_N^{α_N}, where the strings now cover all N spins. But this is not really deeper. What I mean is: if you want to compute the partial trace directly, you have to take into account the entire state, so for a state like this you have to compute many, many terms. What I am saying here is that you can avoid that, in a sense: instead of computing all those contributions, you can just focus on the expectation values of the observables inside the subsystem, like here. So you have just a bunch of expectation values to compute; for one spin, three observables, and you can construct the density matrix of one spin. Yes, OK: can we also obtain this from the partial trace? Yes, we can. Let us assume that instead of ρ_A we write ρ here, the total density matrix over all N spins, with the normalization 1/2^N; then I will come back to your question of why the expression is correct. Now we want ρ restricted to the subsystem A, the reduced density matrix ρ_A, so we have to trace this expression over B: ρ_A = Tr_B ρ = (1/2^N) Σ_{α_1,...,α_N} Tr[ρ σ_1^{α_1} ⋯ σ_N^{α_N}] σ_1^{α_1} ⋯ σ_n^{α_n} Tr_B[σ_{n+1}^{α_{n+1}} ⋯ σ_N^{α_N}], where we have to compute the trace over B of that part of the string of Pauli matrices. Now, the trace over B is non-zero only if the α's in B are all equal to zero: it is different from zero if and only if α_{n+1} = α_{n+2} = ... = α_N = 0, because it is enough to have one Pauli matrix to make the trace vanish. So this means that we have to keep only the terms with this property, and when we compute the trace of the identities we get the dimension of the space of B, which is 2^{N-n}. So this is exactly equal to (1/2^n) times the sum over α_1, ..., α_n only, with all the other α's equal to zero.
So I do not have to sum over those. You have the trace of ρ σ_1^{α_1} ⋯ σ_n^{α_n}, with all the identities on B, and you obtain the factor 2^{N-n}, the trace of the identities, times (sorry, no trace now) the operator σ_1^{α_1} ⋯ σ_n^{α_n}. You see that this is exactly the expression I wrote before: 2^{N-n} divided by 2^N is 1/2^n, and we get the same. And here we used the fact that the trace of an operator with an identity on B is given by the trace, over A, of the reduced density matrix times the operator acting only on the first space, so you recover the previous expression. This is just to show you that there is consistency: if you assume that the expansion is correct for the total density matrix, then it is also correct for the reduced density matrix. All right, now why is it correct at all? The reason is that if you consider this basis of operators, all the strings of Pauli matrices, they are orthogonal with respect to the trace. Every element of this basis can be represented by its string of α's, which can be (0, 1, 3), (1, 3, ...), whatever. If you take two elements which differ in some α somewhere, the trace of their product is zero. The reason is that when you multiply two Pauli matrices with different indices, you find something proportional to a Pauli matrix, and a Pauli matrix has zero trace. And if you take the trace of an operator times itself, you find one (with the normalization), because the square of a Pauli matrix is the identity. So the trace of two different strings, with different sets of α's, is zero, and the trace of a string with itself is not. Is this a special choice? Yes, I just chose a particular, convenient basis here: a basis with the nice property that all its elements are orthonormal with respect to this kind of trace product. Is it always possible? For spins, yes. In general, I don't know; it is not always possible. It is possible when the local space is finite-dimensional, I think: here we have a spin with a finite number of states. Every time you have this kind of structure, it is fine; in general I don't know whether it is useful, and I don't even know whether it is possible. But at least for spin systems, you can. And this can be convenient, because in our case we could now solve the same problem by just saying: well, I know that ρ_1 is nothing but the identity divided by two, which gives the unit trace of the density matrix, plus all the expectation values, everything divided by two: ρ_1 = (I + ⟨σ_1^x⟩ σ^x + ⟨σ_1^y⟩ σ^y + ⟨σ_1^z⟩ σ^z)/2. So you compute these three expectation values in the state and you find the result. Yeah, the two methods are completely equivalent in this case, but in some cases it can be convenient to use one or the other.
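The reconstruction from expectation values, as a sketch of mine: three expectation values taken in the full two-spin state rebuild the same ρ_1 as the partial trace did above:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi0 = (-1/3) * np.kron(up, up) + (2/3) * np.kron(down, up) \
     + (2/3) * np.kron(down, down)

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma^x
         np.array([[0, -1j], [1j, 0]]),                # sigma^y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma^z

# rho_1 = (I + sum_a <sigma_1^a> sigma^a) / 2, with the expectation
# values <psi0| sigma^a tensor I |psi0> taken in the two-spin state.
rho1 = np.eye(2, dtype=complex)
for s in pauli:
    rho1 += (psi0 @ np.kron(s, np.eye(2)) @ psi0) * s
rho1 /= 2

print(np.round(rho1.real, 4))   # [[1/9, -2/9], [-2/9, 8/9]], as before
```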
You lack the information about the rest of the system when you consider just a part, and for this reason the state will in general not be pure: you have to deal with a mixed state.

Okay. Now I would like to draw a correspondence between this formalism, the density-matrix formalism, and classical statistical mechanics. (Can I erase everything?) So when you consider a classical system, maybe of many particles, what do you have? You define the state as a point in phase space. So you have canonical coordinates q and p: the positions and the momenta of all the particles, for example. Let's assume that q stands for all the coordinates of all the degrees of freedom and p for the conjugate momenta. Then your state is just a point in this plot, corresponding to a particular value of q and p. Now, what happens when you consider the time evolution? This point starts moving in the phase space. Is this clear to you? Perfect. This is what you see in a classical system.

What happens when you instead consider a statistical classical system? Instead of considering just a single point, you describe a kind of fluid in this phase space. So instead of following the motion of a single point, you say: okay, there is some probability that the state is here, a different probability that it is there, or there. And all these points move in time, so the distribution changes. So instead of a state fixed by definite values of q and p, you introduce a distribution function. You say: well, I introduce $\rho_{\mathrm{cl}}$, a classical distribution function, which is a function of my canonical coordinates and maybe also of time, $\rho_{\mathrm{cl}}(q, p, t)$.

If you know everything about your state, that is equivalent to saying that the distribution is a Dirac delta function: all the coordinates are fixed to given values, so this is just a point in phase space, a distribution completely peaked on that point. But in general, in a statistical system, you don't know the positions and the velocities of all the particles. Clearly it's impossible, because you have too many particles. So the only thing you can know is, with some probability, where the system is. And this means that instead of working with the standard Hamiltonian or Lagrangian formalism for the motion of a single point in the phase space, you have to take into account the motion of this fluid, the change of the distribution function in the phase space. This is what happens in classical physics.

And what are the properties of this classical distribution? Well, first of all, it is normalized to one. It is a probability distribution, so the total probability must be equal to one: the integral over the entire phase space of the classical distribution must satisfy $\int \mathrm{d}q\,\mathrm{d}p\;\rho_{\mathrm{cl}}(q, p) = 1$. It describes a probability.
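As a numerical illustration of such a distribution function, here is a sketch under hypothetical assumptions (one particle in one dimension, harmonic potential, units with $m = \omega = k_B = 1$, temperature $T = 0.5$): the Gibbs weight, which we will meet again in a moment, is normalized on a grid of the phase space, and its mean energy, computed with the expectation-value formula discussed next, reproduces equipartition.

```python
import numpy as np

# Classical distribution rho_cl(q, p) ~ exp(-H(q, p) / T) for a 1D harmonic
# oscillator, H = p^2 / (2 m) + m w^2 q^2 / 2, in units m = w = k_B = 1.
m, w, T = 1.0, 1.0, 0.5
q = np.linspace(-10, 10, 801)
p = np.linspace(-10, 10, 801)
Q, P = np.meshgrid(q, p, indexing='ij')
H = P**2 / (2 * m) + 0.5 * m * w**2 * Q**2

rho_cl = np.exp(-H / T)
dq, dp = q[1] - q[0], p[1] - p[0]
rho_cl /= rho_cl.sum() * dq * dp       # enforce: integral over phase space = 1

# <H> = integral of rho_cl * H over dq dp; equipartition gives T/2 + T/2 = T.
mean_E = (rho_cl * H).sum() * dq * dp
print(mean_E)                          # ~ 0.5
```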
Then, if you want to compute the expectation value of some observable: what is an observable in classical physics? It's just a function of your variables q and p. So if you want to compute the mean value of some observable $O(q, p)$, what you have to do is write its average over this distribution,
$$\langle O \rangle = \int \mathrm{d}q\,\mathrm{d}p\;\rho_{\mathrm{cl}}(q, p)\, O(q, p).$$
This is what you do when you have a distribution function on phase space and want a mean value. For example, O could be the energy of the system, something like $H(q, p) = \sum_i \frac{p_i^2}{2m} + V(q)$, a sum over all the particles plus some potential, and you compute this kind of mean value over the distribution.

You see that there seems to be some analogy with the density matrix. Indeed, if you use the correspondence principle and replace these quantities by observables, as we usually do in quantum mechanics, so you take your classical quantities and replace them with the corresponding quantum operators, and you replace the integral over phase space with a trace over the Hilbert space, then you realize that what we are writing here is nothing else than the rule for computing expectation values satisfied by the density matrix, $\langle O \rangle = \mathrm{Tr}(\rho\, O)$. So the correspondence principle in this case amounts to replacing your classical distribution in phase space by the density matrix of the system.

And in this way you can recover all the results known from thermodynamics, from statistical physics. For example, if you consider a system in equilibrium at a given temperature, you know that the classical distribution function is
$$\rho_{\mathrm{cl}}(q, p) \propto e^{-H(q, p)/k_B T}.$$
This is the Gibbs distribution, the canonical distribution, which is what you find in classical statistical physics. Now the same happens here: if you use the correspondence principle, you realize that the density matrix is
$$\rho \propto e^{-H/k_B T},$$
where H is now the Hamiltonian operator. So we can obtain the same results valid in classical physics just by replacing the classical quantities with operators.

In particular, we can also define the entropy of the system. Classically, the entropy is defined, up to some additive constant, as the Shannon entropy of the distribution:
$$S_{\mathrm{cl}} = \text{const} - \int \mathrm{d}q\,\mathrm{d}p\;\rho_{\mathrm{cl}}(q, p)\,\ln \rho_{\mathrm{cl}}(q, p),$$
which is what we have in statistical physics. Now, if you want to generalize this expression to the quantum domain, you again replace the integral over phase space by the trace and the classical distribution by the density matrix. So we find that the entropy is defined as
$$S = -\mathrm{Tr}\,(\rho \ln \rho).$$
This is the definition of the entropy in the quantum case. Okay, now I think it's late enough; maybe we'll finish tomorrow. Any questions about this?
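To close, a minimal sketch of these last two formulas (my own illustration, for a single spin in a field along z with hypothetical strength B = 1, and $k_B = 1$ so that $\beta = 1/T$): the Gibbs state $\rho \propto e^{-\beta H}$ built from the eigendecomposition of H, and its entropy $S = -\mathrm{Tr}(\rho \ln \rho)$, which goes from $\ln 2$ at infinite temperature down to zero at zero temperature.

```python
import numpy as np

def gibbs_state(H, beta):
    """rho = exp(-beta H) / Z, via the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-beta * (evals - evals.min()))   # shift for numerical stability
    w /= w.sum()                                # normalization: Tr(rho) = 1
    return (evecs * w) @ evecs.conj().T

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                      # convention: 0 * ln 0 = 0
    return float(-(lam * np.log(lam)).sum())

# Single spin in a field along z: H = -B sigma_z, hypothetical B = 1.
H = -1.0 * np.array([[1.0, 0.0], [0.0, -1.0]])

for beta in (0.0, 1.0, 10.0):
    print(beta, von_neumann_entropy(gibbs_state(H, beta)))
# beta = 0 gives ln 2 ~ 0.693 (maximally mixed); large beta gives ~ 0 (pure).
```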