So, let's start. Can you hear me? Is it working? Good, now it's better. So, this is a brief recap of the Ising chain. This is the Hamiltonian of the model, and we have seen that we can map the spins into Majorana fermions by means of a Jordan-Wigner transformation. After the transformation the Hamiltonian looks like this: you have projectors onto two different sectors, depending on the parity of the number of down spins, and in each sector the Hamiltonian is equivalent to a quadratic Hamiltonian. Then we diagonalized the model by defining linear combinations of these Majorana fermions that satisfy a particular condition on the commutator between the Hamiltonian and the fermions. Using this definition, we ended up with this Hamiltonian: there are the two sectors, and in each sector the Hamiltonian is non-interacting, just a sum of number operators, one for each mode of the fermions. These are the decoupled fermions, and we found the dispersion relation of the modes. From this we discussed the properties of the ground state of the model. We have seen that if h, the magnetic field, is larger than 1 in these units, then the spectrum is gapped and the ground state is the ground state of H₊, which is one of the two quadratic Hamiltonians. This means that the symmetry is preserved and the state is paramagnetic: the stronger the magnetic field, the more the spins are aligned in the z direction; this is the definition of paramagnetic. If instead h is smaller than 1, then the state is ferromagnetic, the spin-flip symmetry is broken, and the ground state is no longer the ground state of H₊ or H₋ alone, but a linear combination of the two; indeed, the two ground states become degenerate in the thermodynamic limit.
And the particular linear combination of the two is selected by an arbitrarily small magnetic field in the longitudinal direction, which is x. So much for the Ising model. Then, yesterday, we investigated the dynamics. Can I erase this? No? This can be useful; this is not. Okay. In particular, we studied the harmonic chain, and we considered the commutator between x_j(t) and p_n, or the opposite ordering, I don't remember which. And we found that in the limit N → ∞ this commutator approaches a number, which depends on the distance between j and n and on the time; in particular, it is proportional to a Bessel function, J_{j−n}(2ωt). We described the properties of this function: when the time is smaller than the distance, in the appropriate units, the function is very, very small, and so the commutator is approximately equal to zero. We interpreted this phenomenon as follows: the effect of a measurement at time 0 on a subsequent measurement at time t is visible only if the new measurement is done after a particular time, which can be identified with the distance divided by the maximum velocity of the excitations of the model, in this case the phonons. What I mean is that as long as your time is smaller than the distance divided by the maximum velocity of the excitations, you are unable to see the effect of the first measurement. If instead the time is sufficiently large, you start seeing something: indeed, this special function becomes non-zero, and instead of being extremely small it just decays as t^{−1/2}, so its effects are visible. What happens when you plot this function? We have something like this, no?
As a function of ωt: for small times it is very, very close to zero; if you plot it, you don't even realize it is different from zero. Then, around ωt ≈ |j − n|, it becomes non-zero and starts oscillating, something like this. So, if the time is not sufficiently large, the two measurements are completely independent: you don't see the effect of one measurement on the outcome of the other. Okay. Then, today, first of all, I'll show you that exactly the same result applies to the transverse-field Ising chain. But remember the meaning of this: we proved that if this commutator is equal to zero, then we can measure this observable at time t without seeing any effect of the measurement of the other observable at the initial time. You measure p at the initial time, here we measure x, and we don't see any effect of the first measurement. So let's do the same for this model; essentially the calculations are the same. You remember that we wrote the position x and the momentum p as linear combinations of the ladder operators a and a†; that is what we did. And then we realized that, when you consider the time evolution of these operators in the Heisenberg picture, the ladder operator just evolves with a phase, e^{−iε_k t}. You find the same in the Ising model: if you compute the time evolution of the Bogoliubov fermions that diagonalize the model, that is, if you compute e^{iH_± t} d†_{k,±} e^{−iH_± t}, you find that this is equal to e^{iε_{k,±} t} d†_{k,±}. The plus or minus means that we are in one sector or the other.
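To make the harmonic-chain light cone from the recap concrete, here is a minimal numerical sketch. It assumes the commutator is proportional to the Bessel function of the first kind J_{j−n}(2ωt), as in the discussion above; the prefactor and the exact index convention are inessential for the qualitative behavior:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n(x)

def commutator_envelope(distance, t, omega=1.0):
    """Magnitude of the harmonic-chain commutator, up to a constant,
    modeled as |J_distance(2 * omega * t)|."""
    return abs(jv(distance, 2.0 * omega * t))

# Outside the light cone (distance >> 2 * omega * t): essentially zero.
outside = commutator_envelope(distance=40, t=10.0)
# Inside the light cone (distance < 2 * omega * t): O(t^{-1/2}) oscillations.
inside = commutator_envelope(distance=10, t=10.0)
print(f"outside the cone: {outside:.2e}, inside the cone: {inside:.2e}")
```

Outside the cone the value is astronomically small; inside, it oscillates with an envelope decaying like t^{−1/2}, exactly the behavior sketched on the board.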
So the momenta in the plus sector, k₊, are those such that e^{iNk₊} = −1, and in the minus sector e^{iNk₋} = +1. Analogously, we have that b_{k,±}(t) in the Heisenberg picture obeys the conjugate relation, b_{k,±}(t) = e^{−iε_{k,±}t} b_{k,±}. So, in order to get the time evolution of the Majorana operators, we just plug these expressions in, and we find a_{2l−1}(t) = (1/√N) Σ_{k,s} e^{−ilk} e^{iθ_{k,s}/2} (e^{iε_{k,s}t} d†_{k,s} + e^{−iε_{k,s}t} b_{−k,s})/√2, and the same for a_{2l}. Here I am using that ε_{k,s} = ε_{−k,s}, which is a property of the dispersion relation related to the reflection symmetry k → −k of the momenta. Okay. So we have this, but the observables in the Ising model are the spins, not really the Majorana fermions. So, for example, we would like to compute the commutator between σ^z at a given site l at time t and σ^z at a site n at time zero, and see what we find. Now, σ^z, you remember, is the simplest operator in this respect: it can be written as the product of just two Majorana fermions. This is why I am considering this operator, just to give you an idea of what you should expect in general. So, expressing σ^z in terms of the Majorana fermions, we have to compute the commutator [i a_{2l}(t) a_{2l−1}(t), i a_{2n} a_{2n−1}]. So we should compute a commutator between a product of two operators and a product of two operators, a commutator of the form [AB, CD]. And the idea is to play with ABCD minus CDAB in order to introduce some anticommutators.
Because we know that the anticommutators between the a's are simple: they are just numbers. So in the end, you can follow the steps yourselves, and you find that this commutator can be written in this form: in each term you always have two operators, which in our case means two Majorana fermions, times anticommutators between Majorana fermions. As we will see in a moment, when we compute the anticommutator between a Majorana fermion at time t and one at time zero, these anticommutators are just numbers, so they commute with everything, and you can organize the expression in this form. Now, in our case the operators A, B, C, D are single Majorana fermions. What is the norm of a Majorana fermion? It is equal to one. This is clear because the square of a Majorana fermion is the identity, a² = 1, and the norm can be defined through the maximum eigenvalue in absolute value, so it must be equal to one. So we have this condition in our particular case. Then I use some properties of the norm: the norm of a sum of terms is always smaller than or equal to the sum of the norms, and using that the norm of each of these operators is equal to one, you find this kind of upper bound for the norm. By the way, what does it mean to have the norm of an operator smaller than a given value? It means that the expectation value of that operator in any state is smaller, in absolute value, than that value. Okay, so you can prove this inequality: the norm of A + B is always smaller than or equal to the norm of A plus the norm of B, and I am also using a property of the norm of a product AB.
Namely, the norm of AB is smaller than or equal to the norm of A times the norm of B, for this particular norm, the operator norm. So we have this upper bound, and the idea is to apply it to our situation. Okay. You see that the ingredients are the anticommutators between the various operators: in particular, you always have an anticommutator between an operator on the left of the commutator and an operator on the right. In our case this means that we should consider the anticommutators between these terms, one operator at time t and one at time zero. Let's compute them. So, can I erase this? What do you find if you compute these anticommutators? You find that the anticommutator between a_{2l−1}(t) and a_{2n−1} is equal to (1/N) Σ_k e^{i(n−l)k} cos(ε_k t), and if you take the limit N → ∞ it becomes the integral ∫_{−π}^{π} dk/2π e^{i(n−l)k} cos(ε_k t). The other anticommutator, between a_{2l}(t) and a_{2n}, is equal to (1/N) Σ_k e^{−i(n−l)k} cos(ε_k t), which for N → ∞ gives ∫_{−π}^{π} dk/2π e^{−i(n−l)k} cos(ε_k t); as a matter of fact, these two integrals are identical anyway. Then you have the anticommutator of a_{2l−1}(t) with a_{2n}, which is equal to (1/N) Σ_k e^{−i(l−n)k} e^{iθ_k} sin(ε_k t), and in the limit N → ∞ this approaches ∫_{−π}^{π} dk/2π e^{−i(l−n)k} e^{iθ_k} sin(ε_k t). And finally the anticommutator of a_{2l}(t) with a_{2n−1}, which can clearly be obtained from the previous one; I just write the result in the limit N → ∞, which is −∫_{−π}^{π} dk/2π e^{i(l−n)k} e^{iθ_k} sin(ε_k t). Okay.
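As a small numerical check of the N → ∞ limit above, one can compare the finite-N mode sum with a very dense sum standing in for the Brillouin-zone integral. The dispersion ε_k = 2√(1 + h² − 2h cos k) and the antiperiodic quantization of the momenta used here are common conventions for the transverse-field Ising chain, assumed for illustration:

```python
import numpy as np

def eps(k, h):
    """Assumed TFIM-like dispersion relation (common convention)."""
    return 2.0 * np.sqrt(1.0 + h * h - 2.0 * h * np.cos(k))

def anticomm_sum(d, t, h, N):
    """(1/N) sum over the N allowed momenta of e^{i d k} cos(eps_k t),
    with antiperiodic momenta k = 2*pi*(m + 1/2)/N in (-pi, pi)."""
    k = 2.0 * np.pi * (np.arange(N) + 0.5) / N - np.pi
    return np.mean(np.exp(1j * d * k) * np.cos(eps(k, h) * t))

d, t, h = 5, 2.0, 0.5
finite = anticomm_sum(d, t, h, N=400)
limit = anticomm_sum(d, t, h, N=40000)  # proxy for the N -> infinity integral
print(abs(finite - limit))              # already tiny at moderate N
```

Since the integrand is a smooth periodic function of k, the finite-N sum converges to the integral extremely fast, so even a few hundred sites reproduce the thermodynamic-limit anticommutator.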
So, this is what you find if you compute these anticommutators; it is rather simple, you just have to use the anticommutation relations of the operators, so you can obtain these relations by yourself. What you see is that there is a common structure in these integrals: you always have this phase, which depends on the distance between the operators, and then a cosine or a sine, an oscillatory behavior with a frequency given by the dispersion relation. Okay. So, what can we say about these integrals? In general, they can be written as ∫ dk/2π e^{i s₁(l−n)k + i s₂ ε_k t} g(k), where g is some periodic function of k. Do you see that the integrals have this structure? If you just expand the sine and the cosine in exponentials, you realize that s₁ and s₂ are just signs, s₁, s₂ = ±1, so the integrals are linear combinations of these ones. Okay. Now, what do we know about these integrals? Are you able to compute the asymptotic expansion, how they behave in the limit of large t or large distances? The basic result is that in the limit of large time you can expect them to go to zero. This is correct, because it is an integral of something which oscillates very rapidly, so by the Riemann-Lebesgue lemma the integral approaches zero. But we can say something more: we can easily compute how they approach zero, and there are different cases. For example, let's assume that the absolute value of l − n, which is the distance, is larger than max_k ε′(k) t; let's assume we are in this regime. What is the consequence? The consequence is that we can rewrite the integral in the following way, because the distance term always dominates ε′(k) t.
Yes? Well, you see, in the results you always have this phase, the same phase, and either a sine or a cosine; when you write the sine and the cosine in terms of exponentials, you find exactly this, e^{i s₂ ε_k t} with the two possible signs. So this is the general form of these integrals, and I am saying that instead of computing the behavior of each integral separately, we can just consider this generic integral and see what we can say about it. What we have is that g is 2π-periodic, and everything here is 2π-periodic: the exponential, because l − n is an integer, is unchanged if you add 2π to k, and ε_k is periodic as well. So we have this kind of integral. What I suggest, if this condition holds, is to rewrite the integrand using the derivative with respect to k of the phase e^{i s₁(l−n)k + i s₂ ε_k t}: when you take the derivative, you get the factor i s₁(l−n) + i s₂ ε′(k) t, so you divide by this term, and you see that the result is exactly equal to the original integrand; I didn't do anything so far. Then I suggest integrating this expression by parts. Why can I do this? Because the denominator is always different from zero; I am using the condition now. The denominator is always different from zero, so I can divide by this function, and I can write the integral in this way. Now I integrate by parts. The boundary term is this function evaluated at the endpoints, but these are periodic functions, so this contribution is equal to zero. So, finally, I only have the other term of the integration by parts, which is the integral of the same phase; I don't write it again, it's the same.
And then I have the derivative of the function g(k), divided by i s₁(l−n) + i s₂ ε′(k) t. So the result of the integration by parts is that I started with the function g, and now I have this other function, which is suppressed by the time and by the distance. I can repeat this process: I integrate by parts again, moving the derivative onto the other part of the integrand, and every time I repeat the integration by parts I gain a factor 1/t, or 1 over the distance. This means that the expression goes to zero faster than any power law in the time and in the distance. We proved this; and, as a matter of fact, one can also prove that it is exponentially small in the time, using this condition. What happens instead if you are in the other regime? You cannot apply this argument anymore, and what you find is that the integral still goes to zero, in accord with what I was saying, but now it goes to zero as a power law in the time. Okay, we don't compute it; it is not difficult. You can use what is called the stationary phase approximation, which essentially means that you Taylor expand the phase around the stationary point, stop at second order in the expansion, and then compute the resulting Gaussian integrals. What you find is that the integral goes to zero like t^{−1/2} whenever the function g is different from zero at the stationary point; otherwise it is just a different power law, which is not important here. What I mean is that in the first regime you have exponential decay. So what we find is the following, depending on how |l − n| compares with max_k ε′(k) t.
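The two regimes can be seen numerically on the generic integral ∫ dk/2π e^{i(l−n)k} cos(ε_k t) discussed above. Again I assume the TFIM-like dispersion ε_k = 2√(1 + h² − 2h cos k); the values of h, the distance d, and the times are illustrative choices of mine:

```python
import numpy as np

def eps(k, h):
    """Assumed TFIM-like dispersion relation (common convention)."""
    return 2.0 * np.sqrt(1.0 + h * h - 2.0 * h * np.cos(k))

def I(d, t, h, M=100001):
    """Integral dk/2pi of e^{i d k} cos(eps_k t) over (-pi, pi), midpoint rule."""
    k = np.linspace(-np.pi, np.pi, M, endpoint=False) + np.pi / M
    return np.mean(np.exp(1j * d * k) * np.cos(eps(k, h) * t))

h = 0.5
kgrid = np.linspace(-np.pi, np.pi, 100001)
vmax = np.max(np.gradient(eps(kgrid, h), kgrid))  # maximal group velocity

d = 20
t_out = 0.5 * d / vmax                 # outside the cone: d > vmax * t
t_in = 2.0 * d / vmax                  # inside the cone:  d < vmax * t

outside = abs(I(d, t_out, h))
# inside the cone the integral oscillates in t, so look at an envelope:
inside = max(abs(I(d, t, h)) for t in np.linspace(t_in, 1.5 * t_in, 20))
print(f"outside: {outside:.2e}, inside: {inside:.2e}")
```

Outside the cone the integral is suppressed faster than any power (in fact exponentially), while inside the cone it only decays like a power law, so the two regimes differ by many orders of magnitude.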
If |l − n| is larger than max_k ε′(k) t, then the anticommutators that we computed are exponentially small: bounded by something like e^{−t/τ}, with τ finite. If instead |l − n| is smaller than or equal to max_k ε′(k) t, then the anticommutators behave like 1/t^{1/2} or 1/t^{3/2}, depending on the case; in any event, power laws. Now, you remember that the norm of the commutator [σ^z_l(t), σ^z_n] was bounded by the absolute values of the anticommutators of the Majorana fermions, the four terms I showed you before. And now we see that these anticommutators are either exponentially small in the time or not, depending on the regime. So, also in this case, when the distance between the operators is larger than the maximum velocity of the particles times the time, the commutator is exponentially small in the time. We have been able to bound this commutator, and we found essentially the same behavior as before: you need to wait some time for the information to reach the point n, namely the time needed by the fastest particle to reach n starting from l. You can picture it like this: this is the position l, this is the position n at time t, and you consider all the particles that are moving, among them the fastest one. If by time t the fastest particle is able to reach the point, then this commutator becomes substantially different from zero. If instead it was unable to reach the point, then the commutator is exponentially small, exponentially small in the time minus the distance, something like this.
So, to state it clearly: the bound is exponentially small when the distance is larger than the maximal velocity times the time. So you see, we considered two different models, one mapped to bosons, the other to fermions, and we found similar results: it looks like the spreading of information is bounded. Indeed, these are two instances of a general theorem, known as the Lieb-Robinson bound, which is a theorem about the norm of the commutator of two observables at different positions and different times. Can I just erase everything? Ah, the t^{−3/2}: you get that whenever this function g has a zero at the stationary point but its first derivative is not zero; although I am not completely sure, depending on the function there may be other cases where it is t^{−3/2}. Anyway. Other questions? So let's now see the general theorem. Let's assume that we are on a lattice. A spin chain again? Actually, we don't need to consider chains: just consider a general graph, a general lattice, with an interaction between nearest neighbors. So you choose your kind of graph; it is going to be something like this, for example. What does it mean? The interaction lives on the bonds. So if this is the site i, and this is the site j, then there is an interaction h_{ij} between these two spins. Okay? Like in the Ising model, where we had σ^x_l σ^x_{l+1}: this could be an h_{ij}. So, for example, h_{ij} = σ^x_i σ^x_j.
So we consider a Hamiltonian of the form of a sum over nearest-neighbor sites of this graph, H = Σ_{⟨i,j⟩} h_{ij}. Okay, then we select two subsystems. Let me increase the number of points, just to be sure. Let's call this one B, and let's call this one A. So you consider two subsystems which are disjoint: a site either is in A, or in B, or outside both. Okay, this is the situation. Then you consider an observable in A and an observable in B. For example, if both i and j are in A, then the operator I wrote before, σ^x_i σ^x_j, can be considered an observable in A. And if we call these other sites k and l, another observable could be σ^z_k, which belongs to B, or, for example, σ^z_k σ^x_l, which also belongs to B. So you just consider two arbitrary observables, one in A and one in B, and we call them O_A and O_B. Then we consider the operator O_A at time t in the Heisenberg picture, that is, the time evolution of O_A. And the theorem is the following: the norm of the commutator between O_A(t) and O_B is always smaller than or equal to a constant C, times the minimum between the number of sites in A and the number of sites in B, times the norms of the two operators, times an exponential: ‖[O_A(t), O_B]‖ ≤ C min(|A|, |B|) ‖O_A‖ ‖O_B‖ e^{−(d − vt)/ξ}. So what is d? d is the distance between A and B, that is, the number of edges in the shortest path between A and B: you consider the shortest path, you count the number of edges, and this is the distance between the two sets. Then there are the remaining parameters: the constant C, the velocity v, and ξ.
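The content of the bound can be seen directly in a small system by exact diagonalization. A sketch for the transverse-field Ising chain (the chain length, field, and time here are illustrative choices of mine, not fixed by the discussion above):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and an embedding helper
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

n, h, t = 8, 1.0, 0.5
# TFIM with open boundaries: H = -sum X_i X_{i+1} - h sum Z_i
H = -sum(site_op(X, i, n) @ site_op(X, i + 1, n) for i in range(n - 1))
H = H - h * sum(site_op(Z, i, n) for i in range(n))

U = expm(-1j * H * t)
A_t = U @ site_op(Z, 0, n) @ U.conj().T   # sigma^z_0 in the Heisenberg picture

def comm_norm(dist):
    """Spectral norm of [sigma^z_0(t), sigma^z_dist]."""
    B = site_op(Z, dist, n)
    return np.linalg.norm(A_t @ B - B @ A_t, 2)

near, far = comm_norm(1), comm_norm(6)
print(f"||[Z_0(t), Z_1]|| = {near:.3f},  ||[Z_0(t), Z_6]|| = {far:.2e}")
```

At this short time the commutator with a nearby spin is O(1), while the commutator with a spin well outside the light cone is suppressed by many orders of magnitude, as the bound predicts.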
These parameters are constants, and they depend only on a few things. They depend on the maximum over i, j of the norm of h_{ij}; actually, the theorem is so general that you can even allow some time dependence in the Hamiltonian, in which case you take the maximum over time of ‖h_{ij}‖ as well. And they depend on the maximal degree of the graph, which is the number of links: given a point, you count the number of links from that point, and then you take the maximum over the points. Anyway, this is a constant of the lattice: you choose your lattice, and in a chain this number is just equal to 2. So the constants depend on this parameter and on the Hamiltonian density. And you see that this is a useful, exponentially small bound when the distance is larger than vt, like in all the cases that we considered; otherwise it becomes a trivial bound, because in the opposite regime the exponential explodes with the time. Is this specific to the Heisenberg picture? Well, clearly it holds more generally, because the norm is invariant under unitary transformations: you can insert e^{−iHt̃} and e^{iHt̃} around the commutator, and since this is just a unitary transformation, it does not change the eigenvalues of the operator, so it is completely irrelevant. And when I act with this time evolution, I shift both times, so the bound holds for every pair of times: instead of t you have the difference of the times, just like you have the difference of the positions. This is called the Lieb-Robinson bound, and this v is the Lieb-Robinson velocity. In the simple cases that we considered, this velocity is just given by the maximal velocity of the excitations; otherwise, it is simply a parameter determined by the lattice and the Hamiltonian.
What is really important is that there is no reference to the state here: this is a bound for operators, so it holds for any state that you consider. This is why it is an extremely powerful result. And ξ here plays the role of a correlation length; it depends on the properties of the system, in particular on the Hamiltonian, but it is something finite. Is v time-dependent? No, it is independent of time: v is just a parameter. Given the graph and the Hamiltonian, you can compute a v such that this inequality holds. Okay. Now, a few physical consequences of this. One that I like is the following. For the sake of simplicity, let us consider a spin chain, so this graph: we have this chain, and these are our spins. Now we have an observable O_A in A, and we consider its time evolution. What happens is that the support of this observable spreads. And what you can prove is the following: you fix a time t, and you define a new subsystem, which we call S, such that the distance between A and the boundary of S is equal to, let's call it, L. Then you can approximate the time-evolved operator O_A(t) by a static observable with support only on S. So you can actually define an operator on S such that the norm of the difference between the two operators is exponentially small: it is smaller than or equal to a constant, times the size of the subsystem A, times the exponential e^{−(L − vt)/ξ}. And what is this operator?
This operator can be defined as the trace over S̄, the rest of the system, of O_A(t), divided by the trace over S̄ of the identity on S̄. This is just a formal definition of what I mean: I take the time-evolved operator and I remove the tails; I trace over the tails of the operator. In this way I find an operator that acts non-trivially only on S. Okay, but the important fact is that I can find this operator, acting like the identity outside S, that approximates my time-evolved operator arbitrarily well: I can increase S in such a way that the approximation becomes extremely accurate. This is the consequence: when we consider the time evolution of operators, the evolution results in a growth of the support of the operator, and the growth is linear in time. Okay. What does this expression mean exactly? Let's imagine expanding the operator O_A(t) in a basis of Pauli matrices, for example: O_A(t) will be given by a sum of strings of Pauli matrices, with coefficients that depend on the time, and these strings can be arbitrarily long. Now, some of these Pauli matrices act inside S and some of them do not. What I mean with this definition is to remove by hand all the terms where some Pauli matrix acts outside S: every time a term of this kind appears in the sum, you remove it, so you are truncating the sum. And what I am telling you is that if you truncate the sum in this way, which is a very rough way to approximate the operator, then you can prove that if the distance L is sufficiently large with respect to vt, this is a good approximation. Yes? Sorry: S̄ is the complementary region, so the entire system is S plus S̄.
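The truncation by partial trace can be checked in the same small-chain setting. A sketch (again with illustrative sizes chosen by me, the same assumed TFIM Hamiltonian, and O_A = σ^z on the first site):

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

n, h, t = 6, 1.0, 0.3
H = -sum(site_op(X, i, n) @ site_op(X, i + 1, n) for i in range(n - 1))
H = H - h * sum(site_op(Z, i, n) for i in range(n))
U = expm(-1j * H * t)
O_t = U @ site_op(Z, 0, n) @ U.conj().T     # O_A(t), with A = site 0

def truncate(O, n_S):
    """Tr_{S_bar}[O] / Tr_{S_bar}[1], tensored with the identity outside S,
    where S = first n_S sites and S_bar = the remaining sites."""
    dS, dR = 2 ** n_S, 2 ** (n - n_S)
    O4 = O.reshape(dS, dR, dS, dR)
    O_S = np.trace(O4, axis1=1, axis2=3) / dR   # partial trace over S_bar
    return np.kron(O_S, np.eye(dR))

err4 = np.linalg.norm(O_t - truncate(O_t, 4), 2)
err5 = np.linalg.norm(O_t - truncate(O_t, 5), 2)
print(f"truncation error, S = 4 sites: {err4:.2e};  S = 5 sites: {err5:.2e}")
```

The error is already small at this short time and shrinks further as S is enlarged, which is exactly the statement that the tails of the spreading operator carry exponentially little weight.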
Why does this hold? You can prove it using the Lieb-Robinson bound. The proof is not difficult, but you need to integrate over the Haar measure, so I won't show you how to do it; it is simple, and as a matter of fact it was shown rather recently, just about ten years ago, whereas the Lieb-Robinson bound itself is from the late 60s or early 70s; this result is more recent. The intuition is what I told you: you consider this operator, and it spreads. When you compute the time evolution, you have to compute a series of nested commutators with the Hamiltonian, and every time you commute with the Hamiltonian, because it is nearest-neighbor, you increase the size of the operator. This gives you a bound on the effective size of the operator, and the growth is linear in time. Questions? Where is the approximation arbitrarily good? When you increase L, because you increase the size of S; at fixed L, the approximation becomes poor when vt starts approaching L, so if you want to stay very accurate, you have to keep increasing S. And why doesn't the bound vanish when the exponent is equal to zero? Well, this is just an upper bound: when the exponential is of order one, the bound becomes trivial, and that's okay. Okay. So, shall we start with the new topic? I'll tell you some important things that will be very useful for the exam. Now, finally, at the end of the course, we will start talking about dynamics, actual dynamics in a quantum many-body system. And because there is no time, we will only investigate quantum quenches. But first of all, I just want to tell you something about peculiar kinds of time evolution in quantum mechanics, not only quantum quenches.
If you have studied some dynamics in quantum mechanics, you know that there are two exemplary cases. One is when you have your state ψ and a Hamiltonian which depends on the time through a sudden step: for example, before time zero your Hamiltonian is H₀, and then it becomes H₁. So what happens to the state? If this change is sudden, abrupt, the state does not change: if you consider the state ψ at time t = −ε and the state ψ at time t = +ε, where ε is very small, the two states are equal in the limit ε → 0, when you have this sudden step. This is what we now call quench dynamics: we quench a parameter of the Hamiltonian, we change it suddenly. This is a quantum quench. Generally, in practice, one considers step functions, but in experiments you always have some smooth function of the parameters; as long as the time over which the Hamiltonian changes is short in comparison with the inverse of the typical energies, you can assume that it is just a step function. In other words, you can always assume that if the change is sufficiently rapid, the state doesn't vary much during the time step. Not the state, sorry: it is the Hamiltonian that changes. Ah, the continuity: this you can prove using the time evolution in quantum mechanics, because before the quench the time evolution operator is just e^{−iH₀t}, while after the quench, which happens, say, at time t₀, the time evolution operator becomes e^{−iH₁(t−t₀)} e^{−iH₀t₀}.
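The continuity of the state across a sudden quench is easy to verify for a two-level system; the choices H₀ = σ^z and H₁ = σ^x below are mine, purely for illustration:

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H0, H1 = Z, X                 # Hamiltonian before / after the quench
t0 = 1.0                      # quench time

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state at t = 0

def psi(t):
    """State at time t under the step protocol H(t) = H0 for t <= t0, H1 after."""
    if t <= t0:
        return expm(-1j * H0 * t) @ psi0
    return expm(-1j * H1 * (t - t0)) @ expm(-1j * H0 * t0) @ psi0

# The state is continuous across the quench: the fidelity tends to 1 as eps -> 0.
for eps in (1e-1, 1e-2, 1e-3):
    f = abs(np.vdot(psi(t0 - eps), psi(t0 + eps)))
    print(f"eps = {eps:.0e}: fidelity = {f:.8f}")
```

The overlap deviates from 1 only at order ε², which is the precise sense in which the state "does not change" at a sudden quench even though the Hamiltonian jumps.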
So, at t0 plus. Anyway, you can write down the time evolution operator and you realize that the state doesn't change. Okay, this is one possibility, and we will investigate this kind of dynamics. But I just want to tell you that there is another remarkable kind of time evolution, which is the adiabatic time evolution, the opposite situation: you have a Hamiltonian which depends on time, but now it varies very, very slowly with time. So what happens? Generally, let's assume that you start from the ground state of the Hamiltonian, and then you change the Hamiltonian very, very slowly. For example, consider our Ising model. We start from h very large, infinity; so, for example, we could start from the state with all the spins aligned in the up direction, which is the ground state. We start from that state and then we decrease the value of h very, very slowly. Then what happens? You remain in the ground state of the model, in the limit of extremely slow change of the parameters. This is true as long as you don't cross a critical point. As long as there is a gap between the ground state and the first excited state, you can prove this theorem also in the case of a quantum many-body system. And this was actually done only about six months ago, because the first proof of this theorem is very old, but it didn't apply to quantum many-body systems; recently it was proved also in this domain. So every time that you have a gap, you have what is called the adiabatic theorem, and you remain in the ground state of the Hamiltonian. So the state changes, but it remains the ground state of the instantaneous Hamiltonian. If there is a critical point, a phase transition, and you cross it, then the theorem doesn't hold anymore: you don't have a gap, and you can produce defects in your state.
It won't be the ground state of the model after you cross the critical point. The critical point, in this context, is just where the gap between the ground state and the first excited state closes. What happens is that if you consider the velocity at which you change the parameter, you should relate this velocity to the gap somehow; you have an inequality between the two. If the velocity is sufficiently small with respect to the gap, then you can use the adiabatic theorem; in the opposite condition you can't, and maybe you can use the other theorem, the sudden one, in the opposite situation. Okay. For this one, well, I could give you some names. If you're interested in this kind of dynamics, there are interesting results. For example, you could consider the Landau-Zener problem and the Landau-Zener formula. This is an exact expression for the time evolution of a two-level system when the parameter depends linearly on time. So you consider a Hamiltonian of the form H(t) = Delta sigma^z + omega t sigma^x, and then you can play with this parameter omega; you can make omega very, very small in such a way as to use the adiabatic theorem. And you can solve this problem exactly. It was solved, I don't remember exactly when, but it's a matter of almost 100 years ago. So you can find the exact evolution; this is not trivial, so if you try, you probably won't succeed, but you can find the papers and understand them. This was done by Landau and Zener, and also by Majorana; Majorana's is from 1932, I think. And I like Majorana's proof of this problem. Then I give you also another keyword, and the other keyword is Kibble-Zurek. Let me check.
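The Landau-Zener problem just mentioned can be checked numerically. A sketch, using the lecturer's convention H(t) = Delta sigma^z + omega t sigma^x; in this convention the diabatic levels are the sigma^x eigenstates with slope omega, and the standard Landau-Zener formula for the nonadiabatic transition probability reads P = exp(-pi Delta^2 / omega). All parameter values are illustrative assumptions.

```python
# Numerical check of the Landau-Zener formula (a sketch; parameter values
# are illustrative). With H(t) = Delta*sz + omega*t*sx, the probability of
# a nonadiabatic transition is P = exp(-pi * Delta**2 / omega).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Delta, omega, T, steps = 0.5, 1.0, 25.0, 40000
dt = 2 * T / steps

# start in the ground state of H(-T)
psi = np.linalg.eigh(Delta * sz - omega * T * sx)[1][:, 0]

t = -T
for _ in range(steps):
    Hm = Delta * sz + omega * (t + dt / 2) * sx   # midpoint Hamiltonian
    E = np.sqrt(Delta**2 + (omega * (t + dt / 2))**2)
    # exact 2x2 propagator for traceless H: exp(-iH dt) = cos(E dt) - i sin(E dt) H/E
    U = np.cos(E * dt) * np.eye(2) - 1j * np.sin(E * dt) * Hm / E
    psi = U @ psi
    t += dt

# probability of ending in the *excited* adiabatic state of H(+T)
excited = np.linalg.eigh(Delta * sz + omega * T * sx)[1][:, 1]
P_num = abs(np.vdot(excited, psi))**2
P_LZ = np.exp(-np.pi * Delta**2 / omega)
print(P_num, P_LZ)
```

Making omega small sends P to zero, which is the adiabatic theorem at work; for the values above the sweep is moderately fast and a finite fraction of the population jumps.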
The Kibble-Zurek mechanism, which describes the time evolution across a critical point, when you can't apply the adiabatic theorem anymore. So if you're interested in this kind of adiabatic time evolution, these are two phenomena worth checking, and also this exact solution, which is extremely interesting actually, and useful to understand this mechanism. This was actually in my original plan, to talk about this too, but okay, we are late, so I won't do it. Okay, that's it. Let's consider now our quantum quench dynamics. So what do we do? This kind of study started with von Neumann, in something like 1929. But then for a long period it wasn't studied anymore, and the reason was that people thought it was just an academic problem. The setup is this: you consider some Hamiltonian H0, and you imagine preparing your system in the ground state of this Hamiltonian; you lower the temperature, so you put it in the ground state of the Hamiltonian. And then you say: well, now I change some parameter of the Hamiltonian, like a magnetic field. I can tilt a magnetic field, or I can change some parameter; if you want, you add some interaction, H0 plus some coupling g times V. And then you follow the time evolution with the new Hamiltonian. So you consider the state at time t, which is equal to e^{-i H t} applied to psi_0. You see, this is a simple setup, something extremely simple. But when you start considering the Hamiltonian of a quantum many-body system, you can imagine that this becomes a very complicated problem. As I showed you in the previous lectures, when you expand this kind of time evolution in terms of the eigenstates of the Hamiltonian, you find that you have to sum over all the states of the Hilbert space.
So in practice it becomes an extremely complicated problem to solve the time evolution. But nevertheless, what happens when a problem is complicated? If you have a reason to solve it, then you do it; if you don't find any reason, then you don't. And so for many years people just didn't care about this time evolution, because there wasn't any reason. But then, after 2004 or something like that, there was big progress in experiments with cold atoms, and it became possible to investigate this kind of dynamics, even considering a system of 30, 40 spins, so a genuine quantum many-body system. In these experiments you have full control of your Hamiltonian parameters, and you can change these parameters very, very quickly, so you can investigate this kind of quench dynamics. And theoreticians were very happy about this, because now we could study this problem again, and there was a reason to study it. So why is this interesting? It is interesting, I would say, for the unusual behavior that you find, which you wouldn't expect. Indeed, we have seen yesterday that when you consider time evolution in quantum mechanics, you always have this kind of quasi-periodicity. Now, what happens, as a matter of fact, is that in all the systems one generally considers, when you choose physically relevant states and Hamiltonians with sufficiently local interactions, you find that, if the system is sufficiently large, all the observables, all the expectation values, approach stationary values. So they are not quasi-periodic. For example, take the Ising model and say: I start from the ground state of the Ising model with a given magnetic field, for example infinity, so all spins up. I start with this ground state, and then I consider time evolution with the Hamiltonian of the Ising model with h equal to two, in the paramagnetic phase.
Then you follow the time evolution of the expectation value of sigma^z as a function of time. At the initial time, because all the spins are up, this is equal to one. Then what you find, and this is something that you can compute, is some oscillations, and then the expectation value becomes stationary, independent of time. And this is not just a property of sigma^z, of this observable; it's a general property. All the observables with a physical meaning, the observables that you can write in terms of Pauli matrices in some finite region, share this common behavior. And for a long time our focus, I say 'we' because I was working on this kind of dynamics in the past years, was on how to describe this stationary behavior. Because in simple systems like the Ising model we can compute the time evolution of observables, but in general we can't, because the models are too complicated. So the question was: is there some simple way to describe these expectation values? And the answer was, apparently, yes: you don't need to follow the entire time evolution of all the observables, which is an extremely complicated problem, but there is some prescription to obtain the expectation values at large times. Okay, we will discuss this, I think, tomorrow, and in particular we will compute the time evolution of sigma^z, I guess, in the Ising model. But today I would like just to introduce quantum quenches in the Ising model and then to study a different problem which doesn't need calculations, one that can be solved using some semi-classical arguments. So, first of all, how can we solve this kind of problem in the Ising model, our test model for these lectures?
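The relaxation just described can be seen in a small chain by brute force. A sketch, assuming the quench h0 = infinity to h = 2 mentioned above; the system size, boundary conditions (open), and times are illustrative choices, and a finite chain shows revivals at later times.

```python
# Exact-diagonalization sketch of relaxation after a quench in the
# transverse-field Ising chain: start from all spins up (the h0 = infinity
# ground state) and evolve with h = 2. N and the times are illustrative.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(site, op, N):
    """Embed a single-site operator at `site` in an N-spin chain."""
    mats = [I2] * N
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

N, h = 10, 2.0
H = sum(-op_at(l, sx, N) @ op_at(l + 1, sx, N) for l in range(N - 1))
H += sum(-h * op_at(l, sz, N) for l in range(N))
E, V = np.linalg.eigh(H)

psi0 = np.zeros(2**N, dtype=complex)
psi0[0] = 1.0                      # |up up ... up>: sz = +1 on every site
Sz_mid = op_at(N // 2, sz, N)      # magnetization at the central site

times = np.linspace(0.0, 4.0, 41)
c = V.conj().T @ psi0
m_t = [np.real(np.vdot(V @ (np.exp(-1j * E * t) * c),
                       Sz_mid @ (V @ (np.exp(-1j * E * t) * c))))
       for t in times]
# m_t starts at 1, oscillates, and settles near a stationary value
# (up to finite-size revivals).
print(m_t[0], m_t[-1])
```

The magnetization drops from one and fluctuates around a value below one instead of returning, which is the stationary behavior the lecture points to.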
So, what is the procedure? We have H0, our initial Hamiltonian, which is minus J times the sum over l of sigma_l^x sigma_{l+1}^x plus h0 sigma_l^z. We consider a Hamiltonian of this form with some magnetic field; this is the Hamiltonian that describes the time evolution before the quench, the pre-quench Hamiltonian, and we prepare the state in the ground state of this Hamiltonian. For the sake of simplicity, we consider the paramagnetic phase. Paramagnetic phase means that we are in the ground state of one of the two sectors, H_plus, because I don't want to consider complications. So just assume that both h0 and h are larger than one: paramagnetic phase. So this is our H0, and H is given by the same expression with h instead of h0. We know the ground state of H0: it is the vacuum of the Bogoliubov fermions associated with it, in the Neveu-Schwarz sector. Anyway, it's the vacuum of these Bogoliubov fermions. So we know that the initial state, psi_0, is the vacuum; the vacuum of the plus sector if you want, but this is irrelevant. And it is defined as the state such that every b_k^0 applied to the vacuum gives zero. Then we want to start the time evolution, and we know from an hour ago that the time evolution is simple for the Bogoliubov fermions of the Hamiltonian H, not H0. Let me call these b^0 the Bogoliubov fermions of H0; I put a 0 here. But now we consider the time evolution generated by H, so we should consider the Bogoliubov fermions of the Hamiltonian, which evolve easily in time: we know that b_k at time t is equal to e^{i epsilon_k t} b_k. These are the variables in which the time evolution is simple. But our state is defined in terms of the other variables, so we need to find a way to write the initial state in terms of these variables. How do we do this?
What's the procedure? We know the formulas that write the Bogoliubov fermions in terms of the Majorana fermions. Remember, the Majorana fermions are defined independently of the magnetic field: they are just built from the Jordan-Wigner fermions. Given a spin chain, you can define those fermions. So what I mean is that you can write b_k as a function of the Majorana fermions, and we know the relations, both for the Bogoliubov fermions of the final, post-quench Hamiltonian and for the Bogoliubov fermions of the pre-quench Hamiltonian. You have two different relations. Both of these relations can be inverted, and we wrote the inverse relations, so from these we can express the Majorana fermions as functions of the Bogoliubov fermions, b and b^0. So how can you express the state in terms of the other fermions? Well, you take the b^0, our Bogoliubov fermions of the initial Hamiltonian, you express them in terms of the Majorana fermions, and then you express the Majorana fermions in terms of the post-quench Bogoliubov fermions, and you are done. Why is this allowed? Because the Majorana fermions are just defined through products over j of sigma^z times sigma^plus, so they are completely independent of the magnetic field: they have been defined for any spin chain, without specifying the model. So this is the procedure. What I'm saying is that you have this relation at time equal to zero: using this procedure you find b^0_k and its dagger as functions of the b_q and b^dagger_q. Then you say: I'm interested in the time evolution of this operator, so I have to consider the time evolution of these. But the time evolution of these is trivial.
So you just have to plug the phases in here, and then you have the time evolution of the original Bogoliubov fermions. Okay. But here we are talking about Bogoliubov fermions, and we need to write the state. What is the state in terms of the Bogoliubov fermions? What does 'vacuum' mean? Vacuum means that there is something which is annihilated by all the destruction operators. So, instead of the vacuum, let's consider the density matrix of the vacuum, |0><0|. If we just have a single fermion, can we write this operator in terms of the fermion? What is this projector? For a single fermion, this is equal to b b^dagger. Why am I claiming this? Can you see it? This should be an operator such that every time you multiply it by the fermion b on the left, you get zero, and this satisfies that condition; and every time you multiply it by b^dagger on the right, you should get zero, and this satisfies that condition too. And it should be a projector: if I multiply it by itself, I should find it again. Let's do it: b b^dagger b b^dagger; I use the anticommutation relations on the middle pair, b^dagger b = 1 - b b^dagger, so this is b b^dagger minus b squared times b^dagger squared. But these are fermions, so b squared is equal to zero and b b^dagger survives. So this is the vacuum density matrix for a single fermion. Now you can convince yourself, and maybe during the tutorials you will discuss this, that the vacuum, if you have many fermionic modes k, is written simply as the product over all the fermions of these projectors. Maybe not really surprising. So we wrote the initial state. Now we have to consider the time evolution of the state, because we want the state at time t. So the density matrix at time t is equal to e^{-i H t} |0><0| e^{i H t}. Well, this is the vacuum of the b^0.
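The single-fermion projector argument above can be verified directly with 2x2 matrices (an illustration only; the basis ordering is an assumption of the sketch).

```python
# Matrix check that rho = b b^dagger is the vacuum projector for a single
# fermion: b rho = 0, rho b^dagger = 0, and rho is idempotent.
import numpy as np

b = np.array([[0, 1], [0, 0]])   # annihilation operator: b|1> = |0>, b|0> = 0
bd = b.T                         # creation operator b^dagger
rho = b @ bd                     # claimed vacuum projector |0><0|

print(rho)
```

In this basis rho comes out as diag(1, 0), which is exactly |0><0|, and the three defining properties hold identically.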
This is the vacuum state of the pre-quench Hamiltonian, and I have this. So what do I have to do? I have to write the b^0 in terms of the b. So I write e^{-i H t}, then the density matrix at time zero as a function of b^dagger and b, then e^{i H t}, and this is equal to rho_0 evaluated at e^{-i epsilon_k t} b^dagger_k and e^{i epsilon_k t} b_k. And then we have the expression for the density matrix at time t. This is what we have to do in the Ising model. So it's not very complicated, just two steps, and we get the time evolution of the state. And given the density matrix, we can compute the expectation values of all the observables. This is what we are going to do tomorrow: completing these calculations, computing the expectation values of some simple operators, and then discussing the infinite-time limit. But now I just want to use a partial result from here. In particular, I claim now, can I erase this? Yeah. Okay, it's 1:15, and we started at half past 11 this time. My claim is that when you write psi_0 in terms of the Bogoliubov fermions of the final Hamiltonian, you obtain this: it is equal to the product over k of [cos((theta_k - theta^0_k)/2) + i sin((theta_k - theta^0_k)/2) b^dagger_k b^dagger_{-k}] applied to the vacuum. Okay, I'm not sure about the sign, but I'm claiming this, where the angle is theta_k. Do you remember the definition? It was e^{i theta_k} = (h - e^{i k}) divided by the square root of 1 + h^2 - 2 h cos k, and the same for e^{i theta^0_k}, but with h0 instead of h. Remember, we defined this theta_k when we solved the Ising model. So these are just the definitions, and then, I think you will see this in a tutorial, you can derive that the initial state takes this form, where this is the vacuum of the fermions. Yes.
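The claimed form of the initial state can be made concrete numerically. A sketch using the lecture's definition of theta_k; the pre- and post-quench fields are illustrative assumptions, and the identification of the pair occupation as n_k = sin^2((theta_k - theta^0_k)/2) follows directly from the amplitudes in the product above (up to the sign the lecturer is unsure about, which does not affect n_k).

```python
# Sketch of the quench amplitudes in the Ising chain, using
# e^{i theta_k} = (h - e^{ik}) / sqrt(1 + h^2 - 2 h cos k).
# For each k the state has amplitude cos(d_k/2) on the vacuum and
# (up to a sign) i*sin(d_k/2) on the pair (k, -k), d_k = theta_k - theta^0_k,
# so the pair occupation is n_k = sin^2(d_k / 2).
import numpy as np

def theta(k, h):
    return np.angle((h - np.exp(1j * k)) / np.sqrt(1 + h**2 - 2 * h * np.cos(k)))

h0, h = 4.0, 2.0                       # pre- and post-quench fields (both paramagnetic)
k = np.linspace(1e-3, np.pi - 1e-3, 200)
d = theta(k, h) - theta(k, h0)
n_k = np.sin(d / 2)**2                 # occupation of each post-quench pair mode

# each (vacuum + pair) sector is normalized: cos^2 + sin^2 = 1
norm = np.cos(d / 2)**2 + np.sin(d / 2)**2
print(n_k.max(), norm.max())
```

When h0 = h the difference angle vanishes and n_k = 0 for every k: no quench, no excited pairs, consistent with the picture of the state as a gas of pairs produced by the quench.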
So these are the Bogoliubov fermions of the post-quench Hamiltonian H; this is the vacuum of H, and these are the Bogoliubov fermions of H. So physically, what does this mean? What is the state? You see that the state is factorized: for each momentum you can describe it as a superposition of different sectors. For a given k, you can have either the vacuum, which is this term, or a pair of particles, k and minus k. So this describes two possibilities: you fix k, and then either there is no particle or there is a pair of particles, and you can do this for all the k's. In other words, the state consists of all these particles. You should imagine that the initial state is just a bunch of moving particles with different energies, with different velocities, and the only property that you can infer from here is that these particles come in pairs: every time you have a particle with momentum k, you have another particle with momentum minus k, and because the dispersion relation is symmetric, they have opposite velocities. So this is our initial state; this is the physical picture of the initial state. You should imagine all these particles in the chain, moving, and they are originating at every point of the chain, because there is translation invariance. So imagine all these particles moving everywhere, with different velocities, and organized in pairs. This is our initial state. And now, okay, there is not much time, but I would like to use this kind of picture to compute the time evolution of an observable. It's kind of surprising, because we will find the correct result without doing any calculation, just thinking of the state in this way, as this mixture of particles moving around. And we will compute the time evolution of the entanglement entropy of a subsystem. Okay, this is the goal. So how do we do this?
First of all, the idea is the following, essentially. You have your chain, and if you consider the time evolution here, what you find is just that these operators evolve with phases: here you just get e^{2 i epsilon_k t}, and then this is the state at time t. So you fix the time. At a given time, what do you have? You have all these particles that are moving. Maybe I can draw this: this is the time axis, this is our chain. The initial state is just this superposition of pairs of particles that start moving with different velocities at time zero. Now you follow the time evolution of these particles, you consider the distribution of particles at time t, and you focus on a particular subsystem. This is our subsystem, A; I always call it A. And I'm interested in the entanglement between this subsystem and the rest. You see that all these pairs of particles are completely independent of one another; indeed, the state is completely factorized here. In particular, this means that a particle with momentum k is completely uncorrelated with a particle with momentum k prime: there are no quantum correlations between them if k prime is different from minus k. The only correlated particles are the ones with the same momentum in absolute value; only the pair is correlated. This particle here is completely uncorrelated with respect to that particle there. So you must have opposite momenta to have a correlation. Moreover, if we assume that the initial state has sufficiently fast decaying correlations, then the two members of a pair should originate at close-by points, and in this picture we can just assume that they originate at the same point.
So we are saying that the only particles that are correlated in the system are the ones that are produced at the same point, with momenta k and minus k. These are the only correlations. And we are inferring this just by looking at the structure of the state: we see that different k's are completely uncorrelated here, and that there is correlation between k and minus k. Then we use the fact that the initial state, which is this one, has exponentially decaying correlations, so we can assume that the correlations are nonzero only if the points are very close to one another; and in particular, in this picture, we assume they are nonzero only if the particles originate at exactly the same point. This is the picture. So, using just this, how can we quantify the entanglement? You have to count the number of pairs such that one particle is inside our subsystem A and the other is outside, because we want to describe correlations between this part and the rest. The correlations will receive contributions from all the pairs that connect the inside of the subsystem with the rest. Is this picture clear or not? If it's not, ask questions. Yes. Okay, the vacuum: we are in the paramagnetic phase, or anyway in the gapped, non-critical case. In a non-critical system, correlations decay exponentially. This means that if you have two points at a distance sufficiently larger than the correlation length, the correlation is practically zero: they are independent. Now what I'm saying is: let's assume this correlation length is finite, and I didn't tell you the scale of this axis.
So maybe the correlation length is so small here that you cannot resolve the fact that the two particles of a pair can be at a distance of the order of the correlation length. So I'm just saying: let's assume the correlation length is zero. If you want, this is actually exact if h0 is equal to infinity, because then you have all spins up and the correlation length is exactly zero. Why do I do all this? Because I would like to study the time evolution of the entanglement between the subsystem and the rest after a quantum quench. This is the idea: we want to quantify the quantum correlations between the subsystem and the rest, and to study their time evolution when you quench a parameter of the Hamiltonian. This is what I would like to do, and I would like to use this kind of picture. So we see that different pairs are uncorrelated, that the particles with opposite momenta are instead correlated, and we are using the properties of the initial state when we say that these two particles should come from the same point, assuming that the correlation length is zero. So the task is to count the number of pairs such that one particle is inside and one is outside, and we will say, we will see actually, that the entropy is proportional to this number, in a sense: it should be weighted in some way that we don't know yet, we'll see later. But the general behavior is given exactly by this picture. Can you repeat? Yes. So there are three possibilities. You can have pairs correlated here that didn't reach the subsystem, so they don't contribute to the entanglement between this and that. Then you have pairs where both particles are inside the subsystem, so they don't contribute to the entanglement between this and the rest either. The only ones that contribute are when you have one particle inside and one outside, which can be this situation or this one. That's right: the purple one contributes. The purple is invisible. The red one contributes.
The others don't, at least at this time. Okay, so this is the first thing that we'll do tomorrow. We'll see the consequences of this picture: we will compute this time evolution, assuming that the entanglement is proportional to the number of shared pairs, weighted by some production rate for the pairs of particles distributed in this way. And then we will study the time evolution at large times. Okay.
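The counting described above can be sketched in a few lines. This is an illustration, not the calculation promised for tomorrow: the weight per pair is taken constant (the lecture leaves it unspecified), the dispersion epsilon_k = 2*sqrt(1 + h^2 - 2h cos k) and the parameter values are assumptions. Each point x emits a pair with velocities plus and minus v(k); a pair contributes when exactly one member sits inside A = [0, L), and for separation 2*v*t the measure of such emission points is min(2*v*t, L) per momentum (up to an overall constant).

```python
# Pair-counting sketch of entanglement growth after a quench: S(t) is
# proportional to the measure of pairs with exactly one member inside A.
# Constant pair weight and parameter values are illustrative assumptions.
import numpy as np

h, L = 2.0, 50.0
k = np.linspace(-np.pi, np.pi, 2001)[:-1]
eps = 2 * np.sqrt(1 + h**2 - 2 * h * np.cos(k))
v = np.abs(np.gradient(eps, k))        # group velocity |d eps / d k|
vmax = v.max()

def S(t, f=1.0):
    # per momentum, emission points with exactly one pair member in A
    # have measure min(2 v t, L); f is the (unknown) weight per pair
    return np.mean(f * np.minimum(2 * v * t, L))

times = np.linspace(0.0, 40.0, 81)
St = np.array([S(t) for t in times])
# linear growth for t < L / (2 vmax), then saturation at a value
# proportional to L
print(vmax, St[0], St[-1])
```

This reproduces the qualitative result the picture predicts: the entropy grows linearly while pairs are still straddling the boundary, and saturates once the fastest pairs (moving at vmax, here 2 in these units) have swept through the subsystem.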