OK, let's start our last lecture. This is the last recap; there won't be a recap of this lecture. Last time we talked about the Lieb-Robinson bound in spin-lattice systems. We have seen that if we consider two observables, O_A and O_B, acting non-trivially only on two different subsystems, and we consider the commutator between the time-evolved operator O_A(t) and the operator O_B, then this commutator is exponentially small in the distance between the two subsystems and exponentially growing in time. So if you wait sufficiently long, the commutator can become significant, and the bound becomes completely meaningless, because it exceeds the maximal value that this quantity can assume. There is a nice corollary of this theorem: if you consider the time evolution of an observable defined in a subsystem A and wait a given time t, then you can approximate the time-evolved operator by an operator defined in a larger subsystem S, provided that S contains the original subsystem A and is sufficiently large, growing proportionally to the time. So, once you fix the accuracy of the approximation, you can always truncate O_A(t) to an observable that acts non-trivially only on S, in such a way that the approximation is good. We have seen this, and I want to stress that it is a very powerful result, because it is completely general: it is independent of the details of the Hamiltonian. We only assume that the interactions are local or quasi-local, meaning that they decay sufficiently fast to zero with distance (I am not sure whether this includes Coulomb interactions).
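As a minimal numerical sketch of the bound just described: the commutator norm is controlled by an envelope of the form C·exp((v|t| − d)/ξ), exponentially small in the distance d but growing exponentially in t. The constants C, v (the Lieb-Robinson velocity) and ξ below are placeholders, not values for any specific Hamiltonian.

```python
import math

def lieb_robinson_envelope(t, d, C=1.0, v=2.0, xi=1.0):
    """Generic Lieb-Robinson-type envelope C * exp((v*|t| - d)/xi)
    bounding ||[O_A(t), O_B]|| for regions a distance d apart.
    C, v and xi depend on the Hamiltonian; these are placeholders."""
    return C * math.exp((v * abs(t) - d) / xi)

# Exponentially small in the distance at fixed time...
assert lieb_robinson_envelope(t=1.0, d=100.0) < 1e-40
# ...but growing with time: once v*t is comparable to d the bound
# exceeds the trivial bound 2*||O_A||*||O_B|| and becomes useless.
assert lieb_robinson_envelope(t=100.0, d=100.0) > 2.0
```

The crossover at d ≈ v·t is exactly the "light cone" that makes the corollary work: outside the cone, O_A(t) is well approximated by an operator supported on a region growing linearly in time.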
In any case, the interaction should decay sufficiently fast to zero with the distance between the sites. We also assume that the system has a finite local Hilbert-space dimension: we are considering spin systems, with a spin on each site, so there is a finite number of possible local states. This is important, at least for the original proof of the theorem. But you see, it is very general, so we can choose our favorite Hamiltonian. Then we moved to quench dynamics. We consider the situation where we prepare the system in the ground state of some nice Hamiltonian, like the transverse-field Ising model, and then we let it evolve under a different Hamiltonian: we switch some interaction on or off, or we tilt the magnetic field, or we change a coupling constant, whatever. In the case of the Ising model, a simple situation is when we change the transverse magnetic field. In this particular case, what we find is that the state at time t can be written in the form |ψ(t)⟩ ∝ ∏_{k>0} [cos(Δ_k/2) + i e^{−2iε_k t} sin(Δ_k/2) b†_k b†_{−k}] |0⟩. I guess you have seen this in the tutorial; anyway, it is not difficult to prove that the state can be written in this form. Here the b's are the Bogoliubov fermions of the final Hamiltonian, the fermions that diagonalize H, and |0⟩ is the vacuum of these Bogoliubov fermions. (Here I had forgotten a sign; this is the sign.) At the end of last time we were trying to compute the time evolution of the entanglement entropy using only the structure of the state. We were interpreting the state as a superposition of pairs of particles produced after the quench, and these pairs have the property that if there is a particle with momentum k, there is a partner with momentum −k.
Moreover, we were assuming — because this is what happens, for example, in the Ising model — that the initial state has sufficiently fast decaying correlations. So we can assume that when a particle is produced at a given point, only the particles produced very close to that point are correlated with it. In our approximation, we simply assume that only particles originating from the same point are correlated. So you can imagine all these pairs of particles: we know that only the particle with momentum −k is correlated with the particle with momentum k, because of the pair structure of the state. Now, in principle, these two particles could also originate from different points, because there are some correlations in the initial state, in the vacuum; but these correlations are exponentially small. So what we do is say: we assume the particles are correlated only if they originate from the same point. That is, different pairs are completely uncorrelated — no entanglement between one pair and another — and the only quantum correlations are between a particle with momentum k and a particle with momentum −k originating from the same point. This is the physical picture. Is it OK? Can you see why I am doing this, just looking at the state? From the state I infer immediately that the pairs are uncorrelated, because the density matrix factorizes over the pairs (k, −k). It is completely factorized. So if I compute, for example, the reduced density matrix of a pair (k, −k), I find that it describes a pure state, completely uncorrelated from the rest, and this accounts for the fact that the pairs are completely independent.
Again, the assumption that correlated particles originate from the same point relies on the initial state, the vacuum, having exponentially decaying correlations. This is more or less where we stopped. Now we would like to compute the entanglement entropy between a subsystem and the rest in the Ising model. The idea was this. Here is our chain: this axis represents the position, call it x, and this direction is the time. We consider a subsystem A. You should imagine all the pairs originating at the initial time, everywhere along the chain. The entanglement between A and the rest is supposed to be carried by the pairs such that one particle of the pair is inside A and the other is outside. So a given pair contributes to the entanglement only during the time window in which one member is inside and the other outside: if you consider times larger than that, both particles are outside the subsystem. The idea is then that the entropy is just the sum of the contributions of all these pairs. But let us start from the contribution of a single pair: what is the contribution of a pair of particles to the entanglement? We are using semiclassical arguments here, so there is a physical argument rather than a rigorous mathematical one behind this. The idea is that the entanglement between one part and the rest is essentially given by the entanglement between one particle and its partner, when one is inside and one is outside.
So the guess is that the entropy is just the sum of the entanglement entropies between one particle inside and its partner outside. What, then, is the entanglement of one particle in a pair? Let us focus on a single pair of particles, described by the state |ψ_k(t)⟩ = [cos(Δ_k/2) + i e^{−2iε_k t} sin(Δ_k/2) b†_k b†_{−k}] |0⟩_{k,−k}. The idea is to compute the entanglement between the particle in mode k and the particle in mode −k, because this is essentially what our physical picture suggests. Here |0⟩_{k,−k} is the vacuum of the pair, i.e. the vacuum of the two fermionic modes k and −k, defined by b_k |0⟩_{k,−k} = 0 and b_{−k} |0⟩_{k,−k} = 0. It is not the vacuum of all the particles, just the vacuum of these two modes. So we are considering this simple problem: instead of considering all the pairs, since everything is completely factorized, we can focus on a single pair and ask what the entanglement between one particle and the other is. (We then have to include this contribution only when one particle is inside the subsystem and the other is outside, but that is a separate problem.) First step: I have this state, and in order to compute the entropy I have to find the reduced density matrix ρ_k of the mode k; then, given ρ_k, I compute its entropy.
The entropy is S(ρ_k) = −Tr[ρ_k log ρ_k]. This is what we have to compute. How is ρ_k defined? It is the trace over the rest of the system of the total density matrix, and here the "rest of the system" is the mode −k: we have just the two modes, k and −k. So, by definition, ρ_k = Tr_{−k} |ψ_k(t)⟩⟨ψ_k(t)|. Our system is the mode with momentum k, and the rest is the mode with momentum −k. You can imagine these as two sites of a chain — these are the two parts of our system — and we are tracing over the second part, the momentum −k. This is what I was writing before: the density matrix is completely factorized. When you focus on a single pair, it is described by a pure state, because the full state is the tensor product |ψ(t)⟩ = ⊗_{k>0} |ψ_k(t)⟩ — you can prove this easily. So this means we can focus on a single pair and compute the entanglement between the particle in mode k and the particle in mode −k. How can we compute this? Either we write the density matrix of the entire pair and trace out the degrees of freedom of −k, or we compute the expectation values of all the observables of the mode k. The dimension of the space is 2, so there are not many observables; we can actually compute all the expectation values, which is maybe easier. So, which operators can you form with b_k? Clearly, you have b_k.
You can consider b†_k, you can consider the number operator b†_k b_k, and then, clearly, there is the identity. These are the four independent operators on our subsystem: 1, b_k, b†_k, b†_k b_k. We want the reduced density matrix, and one way is to compute the expectation values of all these observables and then reconstruct the density matrix from them. So let's see. The expectation value of the identity is clearly equal to 1, because the density matrix is normalized. Now the expectation value of b_k: we have to compute ⟨ψ_k(t)| b_k |ψ_k(t)⟩. Let us call A = cos(Δ_k/2) and B = i e^{−2iε_k t} sin(Δ_k/2), so that |ψ_k(t)⟩ = A|0⟩ + B|k, −k⟩, where |k, −k⟩ = b†_k b†_{−k}|0⟩ denotes the state with the two fermions — this is the meaning of the notation. Note that A is real and B is complex. Then ⟨ψ_k(t)| b_k |ψ_k(t)⟩ = (A⟨0| + B*⟨k, −k|) b_k (A|0⟩ + B|k, −k⟩). b_k annihilates the vacuum, so there is no contribution from the A|0⟩ term. When you apply b_k to |k, −k⟩, it removes the fermion with momentum k, and you remain with the fermion with momentum −k. But on the left-hand side, you have either the vacuum or two fermions.
So when you take the scalar product of the two, they are orthogonal, and we find 0: ⟨b_k⟩ = 0. The same for b†_k, because it can be obtained by taking the adjoint: ⟨b†_k⟩ = 0 as well. We are left with the number operator, which will probably differ from 0. Can I erase this? We already have this part. Let's do it without further ado. We have ⟨ψ_k(t)| b†_k b_k |ψ_k(t)⟩ = (A*⟨0| + B*⟨k, −k|) b†_k b_k (A|0⟩ + B|k, −k⟩). Again, applying b_k to the vacuum gives zero. What happens when you apply b†_k b_k to |k, −k⟩? This operator counts the number of fermions with momentum k, and there is one fermion with momentum k, so it gives exactly the state back with eigenvalue 1 — you can check this using the anticommutation relation between b_k and b†_k. Since ⟨0|k, −k⟩ = 0, we finally find that this expectation value is nothing but |B|² = sin²(Δ_k/2) = [1 − cos(Δ_k)]/2. So: ⟨b_k⟩ = 0, ⟨b†_k⟩ = 0, and ⟨b†_k b_k⟩ = [1 − cos(Δ_k)]/2. Now that we know the expectation values, the idea is to write a generic density matrix of the form ρ_k = λ·1 + α b†_k + α* b_k + γ b†_k b_k — it must be Hermitian — since these are the independent operators: we are just expanding the density matrix in this operator space.
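The three expectation values above can be verified directly by building the two-mode Fock space as explicit 4×4 matrices. This is only an illustrative check with sample numbers Delta, eps, t (any values work); the Jordan-Wigner representation of the two modes is a standard construction, not something from the lecture.

```python
import numpy as np

# Two fermionic modes (k and -k) on a 4-dim Fock space via Jordan-Wigner:
# b_k = a ⊗ 1, b_{-k} = Z ⊗ a, with a the single-mode annihilator.
a = np.array([[0, 1], [0, 0]], dtype=complex)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
bk = np.kron(a, I2)          # annihilates the mode k
bmk = np.kron(Z, a)          # annihilates the mode -k (with JW string)

Delta, eps, t = 0.7, 1.3, 2.0          # sample Bogoliubov angle, energy, time
A = np.cos(Delta / 2)
B = 1j * np.exp(-2j * eps * t) * np.sin(Delta / 2)

vac = np.zeros(4, dtype=complex); vac[0] = 1.0
psi = A * vac + B * (bk.conj().T @ bmk.conj().T @ vac)   # A|0> + B|k,-k>

expval = lambda op: psi.conj() @ op @ psi
assert abs(expval(bk)) < 1e-12                       # <b_k> = 0
assert abs(expval(bk.conj().T)) < 1e-12              # <b†_k> = 0
nk = expval(bk.conj().T @ bk).real
assert abs(nk - np.sin(Delta / 2) ** 2) < 1e-12      # <n_k> = sin²(Δ_k/2)
```

The vanishing of ⟨b_k⟩ and ⟨b†_k⟩ is just the orthogonality of the sectors with different fermion number, exactly as argued on the board.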
Then we impose that the expectation values come out as computed, and we determine all the coefficients λ, α and γ. What do we find? Well, α = 0, because ⟨b_k⟩ = ⟨b†_k⟩ = 0. Then we impose that the trace is equal to 1: the trace of the identity is 2, because there are two states (either the fermion is there or it is not), and the trace of the number operator is 1, because it is 0 on the empty state and 1 on the occupied state. So Tr ρ_k = 2λ + γ = 1. Then we impose ⟨b†_k b_k⟩ = [1 − cos(Δ_k)]/2 = Tr[ρ_k b†_k b_k]. What do we find when we compute this trace? Multiplying the identity by the number operator gives the number operator itself, whose trace is 1, so the λ term contributes λ; and since b†_k b_k is a projector — multiplying it by itself gives itself — the γ term also contributes γ times a trace of 1. So [1 − cos(Δ_k)]/2 = λ + γ. We have to solve this very complicated system, and we find ρ_k = [1 + cos(Δ_k)]/2 · 1 − cos(Δ_k) · b†_k b_k. Is that correct? Subtracting the two equations gives λ = [1 + cos(Δ_k)]/2, and then γ = 1 − 2λ = −cos(Δ_k). I think it is correct. So this is the density matrix. Questions? Please ask. — All the states? OK: you have just one fermionic mode. Which are its states? Either you do not have the fermion — you are in the vacuum — or you have the fermion.
You cannot have two fermions, because we are considering a single mode, a given momentum: by the Pauli exclusion principle you have either zero fermions or one fermion, so the dimension of the space of states is two. And now you have to think of all the operators you can construct with these Bogoliubov creation and annihilation operators. Clearly, you cannot create more than one fermion, so you can have at most one b†_k; likewise, you cannot destroy more than one, so at most one b_k. But you can also construct b†_k b_k, which counts the number of fermions. — Yes? — It is, but it is related to this one: b_k b†_k = 1 − b†_k b_k, so these are the independent operators. — Yes? — OK, he is suggesting, if I understand, that instead of this procedure — here we are using the second way of constructing the density matrix that I told you about some days ago — we can simply compute the trace over −k directly. Since we know the complete basis of states for the mode −k (zero fermions or one fermion in −k), ρ_k can be written as ρ_k = ⟨0_{−k}|ψ_k(t)⟩⟨ψ_k(t)|0_{−k}⟩ + ⟨1_{−k}|ψ_k(t)⟩⟨ψ_k(t)|1_{−k}⟩. So you just have to compute the scalar product of |ψ_k(t)⟩ with a state with zero fermions in −k: the vacuum part of the state contributes to the first term (the part with a fermion in −k does not), and the part with a fermion in −k contributes to the second term. It is an alternative way; maybe it is even faster.
It depends whether you are more familiar with partial traces or with expectation values. I did not take this route because here you have to take scalar products where the bra is defined in a smaller space than the ket, and maybe some of you are not familiar with this kind of operation; that is why I preferred to compute the expectation values. But the other way is fast, I agree. So, if there are no mistakes, this is the reduced density matrix corresponding to the fermion with momentum k. Now the entropy is defined as above; how can we compute it? What are we using? Every time you have the trace of a function of a matrix, you can rewrite it as the sum of that function evaluated on the eigenvalues. So S(ρ_k) = Σ_i (−λ_i log λ_i), where the sum runs over the eigenvalues, from 1 to the dimension of the space, which is 2. We have the density matrix; we have to compute its eigenvalues. In this case they are very simple to compute. Why? Because the matrix is already diagonal in our basis: the vacuum of mode k corresponds to the number operator being equal to 0, while the presence of a fermion corresponds to it being equal to 1. So the eigenvalues are: λ_vac = [1 + cos(Δ_k)]/2, corresponding to the vacuum, and λ_1 = [1 − cos(Δ_k)]/2, corresponding to the presence of a fermion with momentum k.
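The partial-trace route suggested a moment ago gives the same eigenvalues in two lines: in the occupation basis |n_k, n_{−k}⟩ the state has amplitude A on |00⟩ and B on |11⟩, and tracing out the mode −k is a matrix product. Delta, eps, t are sample values for illustration.

```python
import numpy as np

Delta, eps, t = 0.7, 1.3, 2.0          # sample Bogoliubov angle, energy, time
A = np.cos(Delta / 2)
B = 1j * np.exp(-2j * eps * t) * np.sin(Delta / 2)

# amplitudes M[n_k, n_mk]: the pair state is A|00> + B|11>
M = np.array([[A, 0], [0, B]], dtype=complex)
rho_k = M @ M.conj().T                  # Tr_{-k} |psi><psi|

# rho_k is already diagonal, with eigenvalues (1 ± cos Δ_k)/2
assert np.allclose(rho_k, np.diag([(1 + np.cos(Delta)) / 2,
                                   (1 - np.cos(Delta)) / 2]))
```

Note that the time dependence drops out of ρ_k entirely: only |B|² = sin²(Δ_k/2) enters, which is why the pair entropy below depends on k but not on t.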
OK, if you have problems computing this kind of trace, or in seeing that the matrix is diagonal, and you are more familiar with spins, you can use a Jordan-Wigner transformation to rewrite this density matrix in terms of spins. Then you can check whether it is diagonal and compute the eigenvalues in the spin language; it is completely equivalent. What I am saying is: we know a complete basis for this space, given by the vacuum of mode k (no fermion) or one fermion with momentum k — this was our original basis — and my claim is that ρ_k is diagonal in this basis, because the number operator has a definite value, 0 or 1, on each basis state. If it is 0, you get the eigenvalue [1 + cos(Δ_k)]/2; if it is 1, [1 − cos(Δ_k)]/2. So we have the two eigenvalues of the reduced density matrix, and the entropy is simply S(ρ_k) = −{[1 + cos(Δ_k)]/2} log{[1 + cos(Δ_k)]/2} − {[1 − cos(Δ_k)]/2} log{[1 − cos(Δ_k)]/2}, which we can call — give me a letter — G(k). Yes, k is the momentum of the fermion; we are considering a single pair (k, −k), and this is the entanglement entropy of the fermion with momentum k with respect to the other, when we split into subsystems. You see, it is a different interpretation of "subsystem": when we discussed subsystems originally, the subsystem was a spatial region, a physical part of the system, and the rest. Here, instead, the subsystem lives in momentum space: we fix the momentum and say that the subsystem is just the particle with that given momentum.
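The pair entropy G(k) just defined is a simple function of the Bogoliubov angle. A minimal implementation, taking Δ_k as input (its dependence on k and on the quench parameters is not specified here):

```python
import numpy as np

def pair_entropy(delta):
    """Entanglement entropy G(k) of one member of a pair, given the
    Bogoliubov angle delta = Delta_k:
      G = -sum_{±} [(1 ± cos Δ)/2] log[(1 ± cos Δ)/2],
    with the convention 0 log 0 = 0."""
    lp = (1 + np.cos(delta)) / 2
    lm = (1 - np.cos(delta)) / 2
    return sum(-l * np.log(l) for l in (lp, lm) if l > 0)

# Δ_k = π/2 gives a maximally entangled pair: eigenvalues 1/2, 1/2, S = log 2
assert abs(pair_entropy(np.pi / 2) - np.log(2)) < 1e-12
# Δ_k = 0: no pair is created, the mode stays in the vacuum, no entanglement
assert pair_entropy(0.0) == 0.0
```

So G(k) interpolates between 0 (modes untouched by the quench) and log 2 (maximally excited modes), and it is this weight that each pair will carry in the counting below.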
And the rest of the system is the particle with the other momentum. It is just a different point of view, but essentially the same construction. OK, fine. So this is the entropy of one particle with respect to the other. Now the idea is to count all the pairs that contribute to the entanglement between the subsystem A and the rest — all the pairs such that one particle is inside A and the other is outside — and to weight each of them with exactly this entropy G(k). In other words, the entropy of A with respect to the rest is the sum, over all pairs with one particle inside and one outside, of the entropy of one particle with respect to its partner. Is it clear? If there are questions, ask: the calculations are simple, but it is important that the general idea is clear. Note that S(ρ_k) is also equal to S(ρ_{−k}), because the state of the pair is pure: there is no entanglement between different pairs, so the reduced density matrix of a pair describes a pure state, and you can treat it independently of the rest. This is indeed important — otherwise, how could we quantify the entanglement between one particle and the other if the two reduced entropies gave different results? OK. So we have this, and let us carry out the calculation; let us write the equations we have to solve. First: what is the equation of motion of a particle — how does it evolve? Suppose I have a particle here, at this position; this axis is x and this is time, and I want to follow the time evolution of the particle. This is a free fermion: it does not interact with anything else.
It does not interact with the other fermions; it just moves at a given velocity. So we only have to compute the velocity of the particle, and then the trajectory is linear. What is the velocity of a particle? We know the dispersion relation ε_k of the Bogoliubov fermions, and the velocity of the particle is the derivative of the dispersion relation with respect to the momentum, v(k) = dε_k/dk. Is this clear? It should be: in classical mechanics, Hamilton's equation gives ẋ = ∂H/∂p. So this is the velocity of the particle with momentum k, and we have to follow the time evolution. Again, we have to count the particles inside and outside. What do we do in practice? First of all, the pair can be created anywhere in our chain, so we have to integrate over all positions x, from −∞ to +∞, where the pair originates. At time t the particle is at x_k(t) = x + v(k)t: the position at the initial time, which I call x, plus vt. What is the position of the other particle of the pair? It is x_{−k}(t) = x + v(−k)t; but in the particular case of the Ising model the dispersion ε_k is even, so v(−k) = −v(k), and x_{−k}(t) = x − v(k)t. This is the equation of motion of the other particle. Then we want one particle to be inside the subsystem, so we insert a characteristic function: θ(x + v(k)t ∈ A). I am using this notation informally — it is not standard mathematical notation; I am writing what I am telling you.
So the particle at time t is in our subsystem A, and the other particle should be outside: we multiply by the characteristic function of x − v(k)t not belonging to A. Then we have to integrate over all the momenta and include the contribution of a single pair, which is G(k). So the expression is S_A(t) ≈ ∫dx ∫dk/2π θ(x + v(k)t ∈ A) θ(x − v(k)t ∉ A) G(k). We have to consider that the pair can be produced everywhere, and the only condition to satisfy is that at time t one particle is inside the subsystem while its mate is outside. The initial state is just this superposition of pairs originating everywhere in the system. — At any point? — Yes, you can have every k at each point, because this is a translation-invariant state: there is no privileged point, each point is equivalent to any other. So physically the idea is that these pairs are produced everywhere; this is the picture. — Yes? — Exactly, because this is the structure of the state. This is exactly the initial state corresponding to a quantum quench in the Ising model: when you change the magnetic field, the initial state is written in this way. I want to stress that if you try to compute this kind of quantity analytically, it is very complicated — I did it ten years ago, and it is not simple — but you obtain exactly the exact solution using just this semiclassical picture. — Ah, this one? OK, sorry: θ(x ∈ A) is equal to 1 if x is in A, and equal to 0 if x is not in A.
Yes, it's there. Now we have to write this explicitly — that was just an informal way of writing it. This is the entropy of the subsystem A at time t, and now we can specify what A is: for example, the spins from 0 to ℓ. Everything is translation invariant, so this choice is arbitrary; we can just say from 0 to ℓ. So what does the first characteristic function mean? It means that x + v(k)t should be between 0 and ℓ. And the other condition means that x − v(k)t should be outside, i.e. either larger than ℓ or smaller than 0. So we have to solve this problem and write this integral; we have to find the correct domain of integration. I give you a few minutes, and then we do it together. (If you are not interested, you can just go outside; it is your choice.) So, we have to compute the integral subject to: the position x + vt inside A, which means between 0 and ℓ; and x − vt larger than ℓ or smaller than 0. From the first condition we have that x should be smaller than ℓ − v(k)t, and also that x should be larger than −v(k)t. Now we impose the other conditions. The condition x − v(k)t < 0 means that x should also be smaller than v(k)t; so, remembering the previous upper bound, x must be smaller than the minimum of ℓ − v(k)t and v(k)t.
From the other branch, x − v(k)t > ℓ, we have that x should be larger than ℓ + v(k)t, and so it must be larger than the maximum of −v(k)t and ℓ + v(k)t. Let me organize it: the first condition (one particle inside) must always hold, AND one of the two branches. In the first branch, x ranges between −v(k)t and the minimum of ℓ − v(k)t and v(k)t; for this interval to be non-empty we need v(k)t to be larger than −v(k)t, i.e. v(k) must be positive, so there is a factor θ(v(k)). Tell me if there is something wrong. In the second branch, x ranges between the maximum of −v(k)t and ℓ + v(k)t, and ℓ − v(k)t; this requires ℓ − v(k)t larger than ℓ + v(k)t, i.e. −v(k) larger than v(k), so the velocity must be negative, giving a factor θ(−v(k)). And then we have the integral in dk with the function G(k), and I think we can simplify this expression. How? Let me start putting in some absolute values, just because I know the result: where θ(v(k)) appears, v(k) equals its absolute value; where θ(−v(k)) appears, v(k) is negative, so v(k) = −|v(k)| and I have to change the sign. The integrand has no dependence on x left, so the x-integral just gives the length of the domain. So this is equal to the integral from −π to π, in dk over 2π, of θ(v(k)) times the length of the first interval, and so on.
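Before doing the algebra, one can check the claimed length of the x-domain numerically on a grid: the set of creation points x such that one particle (at x + vt) is inside A = [0, ℓ] while its partner (at x − vt) is outside has total length min(ℓ, 2|v|t). The values of ℓ, v, t below are arbitrary samples.

```python
import numpy as np

def domain_length(l, v, t, n=2_000_001):
    """Grid estimate of the measure of {x : x+vt in [0,l] and x-vt not in [0,l]}."""
    span = l + abs(v) * t + 1.0          # grid wide enough to contain the domain
    x = np.linspace(-span, span, n)
    dx = x[1] - x[0]
    inside = (x + v * t >= 0) & (x + v * t <= l)
    partner_out = (x - v * t < 0) | (x - v * t > l)
    return np.count_nonzero(inside & partner_out) * dx

# positive and negative velocities, short and long times
for v, t in [(0.3, 1.0), (-0.7, 2.0), (0.9, 50.0)]:
    assert abs(domain_length(10.0, v, t) - min(10.0, 2 * abs(v) * t)) < 1e-3
```

The last case (2|v|t > ℓ) shows the saturation of the counting: once the pair has spread further than the subsystem size, the length of favorable creation points is just ℓ.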
For the θ(v(k)) branch, the upper limit minus the lower limit gives min(L − v(k)t, v(k)t) + v(k)t, times g(k). Then we have the other part, with θ(−v(k)): there the length is (L − v(k)t) − max(−v(k)t, L + v(k)t), times g(k). Now use that the minimum between a and b, plus c, is equal to the minimum between a + c and b + c; so I can bring the +v(k)t inside the minimum, and the first length becomes min(L, 2v(k)t). Then, minus the maximum between a and b is equal to the minimum between −a and −b; so I can bring the minus sign inside and transform the maximum into a minimum, and the second length becomes min(L − v(k)t + v(k)t, L − v(k)t − L − v(k)t) = min(L, −2v(k)t). So if I write everything, this equals the integral in dk over 2π of θ(v(k)) g(k) min(L, 2v(k)t) plus θ(−v(k)) g(k) min(L, −2v(k)t). Oh, look: with the absolute values, these two are exactly the same expression, min(L, 2|v(k)|t). And θ(v(k)) + θ(−v(k)) = 1, because v is either positive or negative. So we find S_A(t) = ∫ from −π to π dk/2π min(L, 2|v(k)|t) g(k), with the function g(k) that we computed before. This is amazing, because we computed something extremely complicated and found this very simple result for the entanglement between a subsystem of length L and the rest in the Ising model. And what is the behavior? Let's plot this as a function of time: this is the entropy of the subsystem A at the time t, and this is the time. At time equal to 0, the minimum between these two numbers is 0.
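Since everything reduces to counting which pairs have exactly one member inside A, the min(L, 2|v(k)|t) formula can be checked numerically against a brute-force scan over pair positions. A minimal sketch, where v(k) and g(k) are toy choices, not the actual post-quench Ising functions:

```python
import numpy as np

# Semi-classical pair counting: a (k, -k) pair created at x contributes g(k)
# to S_A(t) when exactly one member sits inside A = [0, L].  We check by
# brute force over x and k that the domain analysis collapses to
# min(L, 2|v(k)| t).  v(k) and g(k) below are toy choices.
L, t = 10.0, 3.0
v = lambda k: np.sin(k)                 # toy group velocity, |v| <= 1
g = lambda k: 0.5 * (1.0 + np.cos(k))   # toy positive weight

ks = np.linspace(-np.pi, np.pi, 2001)
xs = np.linspace(-t - 1.0, L + t + 1.0, 4001)  # wide enough to cover all pairs
dx = xs[1] - xs[0]
dk = ks[1] - ks[0]

def brute_force():
    total = 0.0
    for k in ks:
        right = xs + v(k) * t                    # right-moving member
        left = xs - v(k) * t                     # left-moving member
        inside = (right >= 0.0) & (right <= L)
        outside = (left < 0.0) | (left > L)
        total += g(k) * np.count_nonzero(inside & outside) * dx
    return total * dk / (2.0 * np.pi)

def closed_form():
    integrand = np.minimum(L, 2.0 * np.abs(v(ks)) * t) * g(ks)
    return np.trapz(integrand, ks) / (2.0 * np.pi)

print(brute_force(), closed_form())   # the two agree to ~1e-2
```

The brute-force count does no domain analysis at all, so the agreement is a direct check that the interval lengths worked out above are right.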
So at t = 0 we find 0. Then what happens when you increase the time? You have to take the minimum between L, the subsystem length, and 2|v(k)|t. Now, remember that the velocity is bounded in the Ising model, and in spin chains in general: there is the Lieb-Robinson bound on the velocity, a maximum velocity v_max that excitations can have in the chain. So as long as the time is small enough that 2 v_max t < L, the minimum is always 2|v(k)|t, for every mode, and the behavior is just linear in the time. What you find is an exactly linear increase, up to the time t* = L / (2 v_max). So up to there our prediction is an exact linear slope. Then what happens when you increase the time further? There will be momenta for which 2|v(k)|t is still smaller than L, and momenta for which the relation is the opposite. So the curve starts bending over. And what happens when the time approaches infinity? You have to compute the minimum between L and the velocity multiplied by an extremely large number, and 2|v(k)|t can be smaller than L only if the velocity is close to 0. Those modes give a very small contribution, because we are integrating over a tiny region; in the end you don't see the contribution of these slow modes anymore, and you only pick up the term L. So at infinite time the entropy approaches L ∫ dk/2π g(k). To see this concretely, assume the time is equal to 1 million and the maximum velocity is equal to 1, and plot the velocity as a function of k.
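The linear-then-saturating shape follows directly from the min inside the integral. A quick numerical sketch with the same kind of toy dispersion (v_max = 1 here by construction, so t* = L/2):

```python
import numpy as np

# Growth of S_A(t) = int dk/2pi min(L, 2|v(k)|t) g(k) for a toy dispersion:
# exactly linear up to t* = L / (2 v_max), then bending over and saturating
# at L * int dk/2pi g(k).  v and g are illustrative choices.
ks = np.linspace(-np.pi, np.pi, 4001)
g = 0.5 * (1.0 + np.cos(ks))       # toy weight, int dk/2pi g(k) = 1/2
absv = np.abs(np.sin(ks))          # toy speed |v(k)|, so v_max = 1

def S(t, L):
    return np.trapz(np.minimum(L, 2.0 * absv * t) * g, ks) / (2.0 * np.pi)

L = 10.0
print(S(2.0, L) / S(1.0, L))       # = 2: exactly linear before t* = L/2
print(S(1e6, L) / L)               # ~ 0.5: saturation value per site
```

Doubling the ratio check confirms the slope is constant in the early regime, and the huge-time value per site reproduces ∫ dk/2π g(k), the extensive plateau.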
Now, the momenta for which, at large times, 2|v(k)|t is still smaller than L are confined to small regions around the zeros of the velocity. As you increase the time, these regions become smaller and smaller, the integral over them becomes smaller and smaller, and in the end they don't contribute anymore. Everywhere else you have the opposite situation, 2|v(k)|t larger than L, and because there is a minimum you always pick L. So this is the qualitative behavior of the entropy curve. And if you do the calculation exactly, you find that this is precisely what you get in the limit L → ∞ of the entropy of A divided by L. Let me explain what I mean: divide this expression by L, and you see it becomes a function of t/L. Now fix the ratio t/L and take the limit of large subsystems, which also means large times; then you obtain analytically this expression. So we obtained this result in a semi-classical way, and it matches the exact result for the asymptotic behavior. Why is this approach semi-classical? Essentially, because our particles have a definite position and momentum: we are following the trajectory of each particle, saying that at a given time the particle is here. Quantum mechanically this is impossible, because of the Heisenberg uncertainty principle. So this is a semi-classical calculation; nevertheless, we obtain the correct result. What is interesting is that, in order to obtain these results, we followed the time evolution along the trajectories of the particles, and a trajectory is defined only for classical particles.
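Splitting the min by which argument wins makes both regimes explicit; in the same notation as above:

```latex
S_A(t) \;=\; 2t \!\!\int\limits_{2|v(k)|t \,<\, L}\!\! \frac{dk}{2\pi}\, |v(k)|\, g(k)
\;+\; L \!\!\int\limits_{2|v(k)|t \,>\, L}\!\! \frac{dk}{2\pi}\, g(k)
\;\xrightarrow[\;t \to \infty\;]{}\; L \int_{-\pi}^{\pi} \frac{dk}{2\pi}\, g(k)
```

As t grows, the first domain shrinks to the zeros of v(k), so only the second, extensive term survives.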
It is not fully classical but semi-classical, because we started from a quantum model: we used the correct dispersion relation, and we checked that the state factorizes into pairs. OK, what do we see from here? One interesting part of this result is that at sufficiently large times the entropy becomes independent of the time: it approaches a stationary value. And this is an example of what I was telling you yesterday, that when you consider the expectation values of observables after a quantum quench, then in the thermodynamic limit the limit of infinite time exists. You don't have quasi-periodic revivals; the limit exists, and all these observables behave in a stationary way. Moreover, as you can see from here, the entropy of the subsystem in this limit, the limit of large times, becomes extensive, proportional to the subsystem length. This should remind you of what happens in statistical physics at finite temperature. So just from this picture there is some hope that we can describe the system at large times in some statistical way, for example by introducing some statistical ensemble, as we have done in the past. But it's too late for any detailed calculation of this kind, so let me just give you the general picture, and, if you're interested, very briefly, in a couple of minutes, how you can see in the Ising model that the limit of infinite time indeed exists. I told you that in order to compute expectation values of observables you can rely on Wick's theorem: you just need to compute the expectation values of the Majorana fermions. So you compute the expectation values of the Majorana fermions at the time t.
I wrote the relation between the Majorana fermions and the Bogoliubov fermions before, so what you have to compute are correlations of this kind. In the end you find relations like, for example, ⟨a_{2l−1}(t) a_{2n−1}(t)⟩ = δ_{ln} + (1/N) Σ_k e^{−i(l−n)k} sin Δ_k sin(ε_k t); you obtain an expression like this when you complete the calculation. Then, when you take the thermodynamic limit, this sum again becomes an integral, so you have δ_{ln} + ∫ dk/2π e^{−i(l−n)k} sin Δ_k sin(ε_k t). Now you see that the integrand is an oscillatory function of t, so you can use the Riemann-Lebesgue lemma to prove that this integral approaches 0 in the limit of large times. And you find the same for the other correlations: you always find terms that survive the infinite-time limit and terms that oscillate and cancel. In this way you prove that the limit indeed exists. Note the order of limits: first one takes the thermodynamic limit, and then the limit t → ∞, which is what makes the oscillating term vanish. OK, this is just to show that there is relaxation in this model. What is the general picture? All the expectation values relax to some values that we don't know yet, and then the question becomes: how can we characterize these stationary expectation values? The picture is the following. In order to characterize the behavior of the state at very large times, you don't need to know everything, all the expectation values at all times. You just need to know the expectation values of the conservation laws, the quantities that are conserved: if in your system there is a quantity that is conserved, it means that its expectation value is the same at any time.
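The Riemann-Lebesgue mechanism is easy to see numerically: the integral of a smooth weight against a rapidly oscillating factor sin(ε(k)t) decays as t grows. Here ε(k) and f(k) are illustrative stand-ins for the post-quench dispersion and the sin Δ_k factor in the correlator:

```python
import numpy as np

# Riemann-Lebesgue in action: an integral of a smooth function against the
# oscillating factor sin(eps(k) t) goes to zero at large t.  eps(k) and f(k)
# are illustrative stand-ins, not the actual Ising quench functions.
eps = lambda k: np.sqrt(1.25 + np.cos(k))    # smooth, positive "dispersion"
f = lambda k: np.sin(k) ** 2                 # smooth weight

def osc_integral(t, n=200001):
    ks = np.linspace(-np.pi, np.pi, n)       # fine grid to resolve oscillations
    return np.trapz(f(ks) * np.sin(eps(ks) * t), ks) / (2.0 * np.pi)

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, osc_integral(t))                # the magnitude shrinks with t
```

At t of order 1 the integral is of order one; by t ~ 1000 the phases have wound around many times and the contributions cancel, which is exactly the relaxation argument in the text.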
So, let's assume that you find a quantity Q that commutes with the Hamiltonian. Now consider the time evolution of its expectation value, ⟨ψ_0| e^{iHt} Q e^{−iHt} |ψ_0⟩. Because Q commutes with H, this is equal to ⟨ψ_0| Q |ψ_0⟩. So, with no effort, you know the expectation value of this quantity at any time: it is just given by its expectation value at the initial time. And this gives you a constraint on your dynamics, because whatever final description you find at infinite time, it should be compatible with all these constraints. Then let's imagine that we are in a generic situation: the Hamiltonian is completely generic, with no symmetries. If there are no symmetries, you don't expect the existence of such conserved quantities. You can just say that the energy is conserved, and that is true, because we consider an isolated system: H is conserved, and the derivative with respect to time of the expectation value of H is equal to 0. But we can't find all these other quantities. So the idea is that, when we consider the limit of infinite time, we are somehow losing information about our system, for example the information that would allow us to go back in time: we are replacing a complicated curve with just a number, the asymptotic value, so we are not able to go back anymore. The picture is that the time evolution is making us lose all the information about the system except what is fixed by the constraints on the dynamics, the conservation laws. In the case of a generic system the constraint is the energy, and the quantity that measures the information about the system is the entropy.
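A two-qubit toy check of this statement (the Hamiltonian and initial state are illustrative choices, not the lecture's model): a charge Q commuting with H keeps a constant expectation value under the evolution, while a non-conserved observable oscillates.

```python
import numpy as np

# A charge Q commuting with H has a time-independent expectation value.
# Two qubits, H = XX + YY (a hopping term), Q = total Z; all toy choices.
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
I2 = np.eye(2)

H = np.kron(X, X) + np.kron(Y, Y)
Q = np.kron(Z, I2) + np.kron(I2, Z)          # total magnetization, [H, Q] = 0
assert np.allclose(H @ Q, Q @ H)

evals, evecs = np.linalg.eigh(H)
def U(t):                                    # e^{-iHt} via spectral decomposition
    return evecs @ np.diag(np.exp(-1.0j * evals * t)) @ evecs.conj().T

psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0                                # the product state |01>
for t in (0.0, 0.3, 1.7):
    psi = U(t) @ psi0
    q = (psi.conj() @ Q @ psi).real          # conserved: stays at 0
    z1 = (psi.conj() @ np.kron(Z, I2) @ psi).real  # oscillates as cos(4t)
    print(t, round(q, 10), round(z1, 6))
```

The value of ⟨Q⟩ at any t equals its value at t = 0, which is exactly the constraint the stationary description must respect.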
So the conjecture, and it was a conjecture, is that you can describe the stationary values at infinite time after this kind of quantum quench by replacing the state. Here we are considering the time-evolved state |ψ(t)⟩, and I'm saying that, because I'm interested in the limit of large times, I can replace the state by a density matrix ρ. And how do I define the density matrix? The density matrix is the operator that maximizes the entropy S = −tr(ρ ln ρ), subject to all the constraints. In particular, the energy should be conserved: the trace of ρH should be equal to the expectation value of the Hamiltonian in the initial state, and in the generic case this is the only constraint. But this is nothing but the thermal state, the Gibbs, or Boltzmann, distribution, because this is exactly how you obtain the canonical distribution: you maximize the entropy and you fix the energy. When you do this, for example with a variational approach, you find that ρ is equal to some constant times e^{−βH}, where β is a parameter, generically the inverse temperature, which is fixed by the energy condition. And this is what we call thermalization in closed systems. We are in an isolated system, we focus on a part of the system, because we are in the thermodynamic limit, and we are saying: I can replace my extremely complicated time-evolving state by a simpler density matrix, which is essentially the density matrix of the canonical ensemble. This holds in the generic case, when there are no symmetries in the Hamiltonian. Let's assume now that you have a symmetry, like a rotational symmetry for a spin chain. For example, what happens if the sum over l of σ^z_l is conserved, or the particle-number operator?
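How β gets fixed in practice can be sketched numerically: ⟨H⟩ in the Gibbs state is strictly decreasing in β, so the constraint tr(ρH) = E₀ has a unique solution, found here by bisection for a random toy Hamiltonian (H and ψ₀ are arbitrary illustrative choices):

```python
import numpy as np

# Fixing the Lagrange multiplier: with the single constraint
# tr(rho H) = E0 = <psi0|H|psi0>, maximizing the entropy gives
# rho = e^{-beta H} / Z, with beta the unique root of energy(beta) = E0.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2.0                        # random symmetric toy "Hamiltonian"
evals = np.linalg.eigvalsh(H)

psi0 = np.ones(6) / np.sqrt(6.0)           # some initial pure state
E0 = psi0 @ H @ psi0                       # conserved energy

def energy(beta):
    w = np.exp(-beta * (evals - evals.min()))   # shifted for stability
    return (w @ evals) / w.sum()                # tr(rho H) in the Gibbs state

lo, hi = -50.0, 50.0                       # energy(lo) ~ E_max, energy(hi) ~ E_min
for _ in range(200):                       # bisection on a decreasing function
    mid = 0.5 * (lo + hi)
    if energy(mid) > E0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
print(beta, energy(beta), E0)              # energy(beta) matches E0
```

Monotonicity holds because d⟨H⟩/dβ = −Var(H) < 0, which is why the energy constraint pins down a single inverse temperature.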
If you have the number of particles conserved, then you have to do the same as in your course on statistical physics: you have to impose all the other constraints here, and what you find is, for example, the grand canonical ensemble. So if the particle number is conserved, you can expect an expression of the grand canonical form, and you can compute with the corresponding partition function. Do you know how to obtain this or not? Generally, when you want to maximize a function given some constraints, you introduce Lagrange multipliers; I don't want to enter into details, but this is what you find. What happens if you now have more constraints? You see immediately that you have to include here all the conserved quantities of the Hamiltonian. And there are cases where you have more conserved quantities than the energy, the particle number, the angular momentum, or the momentum. If you read Landau and Lifshitz, on the first pages they say that the integrals of motion are just seven: the energy, the three components of momentum, and the three components of angular momentum. But this is for generic systems. If you consider a very special system like the Ising chain, you can have many more conservation laws. As a matter of fact, in the so-called integrable systems, like Ising, you have infinitely many conservation laws. It means that here you have to place infinitely many operators, and instead of considering just a thermal ensemble, you have to work with something of the form exp(−Σ_j λ_j Q_j), with the sum over j running over infinitely many charges. When you have infinitely many operators here, this is called the generalized Gibbs ensemble. As long as you have just a finite number of conservation laws, it's called Gibbs; when instead you have to deal with infinitely many operators, it's called the generalized Gibbs ensemble.
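For free modes the GGE construction decouples: taking the conserved charges to be the mode occupations n_k, each Lagrange multiplier is fixed independently by its own constraint. A minimal free-fermion sketch, with illustrative occupation values:

```python
import numpy as np

# In a free-fermion GGE the charges can be taken to be the mode occupations
# n_k, and the multipliers decouple mode by mode:
# <n_k>_GGE = 1 / (e^{lambda_k} + 1), hence lambda_k = log((1 - n_k) / n_k).
# The occupations below are illustrative post-quench values.
n = np.array([0.1, 0.25, 0.5, 0.8])     # conserved <n_k> set by the initial state
lam = np.log((1.0 - n) / n)             # one Lagrange multiplier per charge
recovered = 1.0 / (np.exp(lam) + 1.0)   # occupations predicted by the GGE
print(recovered)                        # reproduces the input occupations
```

Each λ_k plays the role of a mode-dependent inverse temperature; a thermal state would force all occupations onto a single Fermi-Dirac curve, which the quench data generally do not satisfy.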
And I'm happy that the last words I said are "generalized Gibbs ensemble". So, are there questions? This was a conjecture. The idea is that when you consider the expectation values of observables and expand in the eigenbasis of the Hamiltonian, you get all these oscillating phases, and you expect a dephasing mechanism in which, in the end, the phases cancel, like in the Ising case. This means that you are losing all information about the system except the diagonal part, which is fixed already at time 0. If you want something more, I can tell you that, because you are interested in the limit of infinite time, instead of considering that limit you could consider the time average, although here there are subtleties with the order of limits. If you consider the time average, then you find that you can describe everything with the diagonal ensemble. And then there is the ETH, the eigenstate thermalization hypothesis, if you want to go further.
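The dephasing statement can be tested directly: for a Hamiltonian without degeneracies, the long-time average of ⟨ψ(t)|O|ψ(t)⟩ equals the diagonal-ensemble value Σ_n |c_n|² ⟨n|O|n⟩. A small random-matrix sketch (H, O, and ψ₀ are arbitrary toy choices):

```python
import numpy as np

# Diagonal ensemble vs time average: the off-diagonal phases
# e^{-i(E_n - E_m)t} average to zero, leaving sum_n |c_n|^2 <n|O|n>.
rng = np.random.default_rng(1)
d = 8
A = rng.normal(size=(d, d)); H = (A + A.T) / 2.0   # toy Hamiltonian
B = rng.normal(size=(d, d)); O = (B + B.T) / 2.0   # toy observable
evals, V = np.linalg.eigh(H)

psi0 = rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)
c = V.T @ psi0                             # overlaps with the energy eigenbasis
O_eig = V.T @ O @ V                        # observable in the eigenbasis

diag_ens = float(np.sum(np.abs(c) ** 2 * np.diag(O_eig)))

ts = np.linspace(0.0, 5000.0, 20001)
vals = []
for t in ts:
    psi = V @ (np.exp(-1.0j * evals * t) * c)   # |psi(t)> mode by mode
    vals.append(float((psi.conj() @ O @ psi).real))
time_avg = float(np.mean(vals))
print(diag_ens, time_avg)                  # agree up to slow 1/T corrections
```

The off-diagonal terms average as sin(ΔE·T)/(ΔE·T), so the residual discrepancy shrinks as the averaging window T grows, which is the dephasing mechanism described above.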