Good morning, I think we can start. This is just a recap of yesterday — of the first part of yesterday, let's say. We were studying the transverse field Ising chain; this is the Hamiltonian of the model. We impose periodic boundary conditions, which means that you identify site N plus one with site one. We saw that this operator Π^z, the spin-flip parity, commutes with the Hamiltonian, and we discussed the consequences of that. Then we defined the Jordan-Wigner transformation, which allows us to map the spin operators into spinless fermions. Just because it is convenient, we also defined linear combinations of these spinless fermions that satisfy the algebra of Majorana fermions: the anticommutator {a_m, a_n} is equal to 2 times the Kronecker delta δ_{mn}. And notice that these Majorana fermions are Hermitian. Once we have this fermionic structure, we can construct a Fock space — the space spanned by states with an arbitrary number of fermions — and this Fock space is completely equivalent to the space of all the spin configurations, as we showed the day before yesterday. We defined the vacuum as the state with all spins down, and then we showed that when you apply c† you raise a spin, so we have a complete equivalence between the two representations. If we rewrite the Hamiltonian in terms of these fermions, we obtain this structure: two projectors, one onto the subspace corresponding to the eigenvalue plus one of Π^z and one onto the eigenvalue minus one, and inside each sector the Hamiltonian is equivalent to a quadratic Hamiltonian, H_+ or H_−. They are quadratic in the Majorana fermions, and therefore also quadratic in the original spinless fermions, because the two sets are linearly related.
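Since the statement that Π^z commutes with the Hamiltonian underlies everything that follows, here is a minimal brute-force check on a small chain. The sign convention H = −Σ σ^x σ^x − h Σ σ^z is an assumption on my part (the commutation holds for any choice of signs):

```python
# Brute-force check, on a small periodic chain, that the parity operator
# Pi^z = prod_j sigma^z_j commutes with the transverse-field Ising
# Hamiltonian. The convention H = -sum sx.sx - h sum sz is an assumption;
# the commutation does not depend on it.

I2 = [[1.0, 0.0], [0.0, 1.0]]
SX = [[0.0, 1.0], [1.0, 0.0]]
SZ = [[1.0, 0.0], [0.0, -1.0]]

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(p)]
            for i in range(n)]

def op_at(single, site, N):
    """Embed a one-site operator at `site` (0-based) in an N-site chain."""
    M = [[1.0]]
    for j in range(N):
        M = kron(M, single if j == site else I2)
    return M

def tfim(N, h):
    """H = -sum_j sx_j sx_{j+1} - h sum_j sz_j with periodic boundaries."""
    dim = 2 ** N
    H = [[0.0] * dim for _ in range(dim)]
    for j in range(N):
        xx = matmul(op_at(SX, j, N), op_at(SX, (j + 1) % N, N))
        z = op_at(SZ, j, N)
        for r in range(dim):
            for c in range(dim):
                H[r][c] -= xx[r][c] + h * z[r][c]
    return H

def parity(N):
    """Pi^z = product of sigma^z over all sites."""
    P = op_at(SZ, 0, N)
    for j in range(1, N):
        P = matmul(P, op_at(SZ, j, N))
    return P
```

For N = 3 one finds that H Π^z − Π^z H vanishes identically and that (Π^z)² is the identity, so the Hilbert space splits exactly into the two parity sectors discussed above.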
They differ only through this matrix H, which has a block structure. This is why, when I define the indices, I use two different labels, l and j: l gives the position of the two-by-two block and runs from one to capital N, while j runs inside the block and takes the values zero or one. This is just a way to parametrize the elements of a block matrix. These matrices have a block-circulant or block-anti-circulant structure, which means they can be written as Fourier series, where the sum now runs over particular discrete momenta satisfying a quantization condition. The quantization condition depends on whether the matrix elements are invariant under a shift by one block or antiperiodic under that shift, and this is tied to the operator Π^z. So you have two different quantization conditions — and note, there was a typo yesterday: the quantization condition for k_− has a plus sign here, not a minus sign. The sector characterized by the periodic quantization condition is generally called the Ramond sector — it is just terminology — and the other one is generally called the Neveu-Schwarz sector. You can check against the explicit matrix that the two-by-two block is this one.
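The two quantization conditions can be made concrete in a few lines. This is a sketch: the sector labels and the choice of the branch 0 ≤ k < 2π are conventions:

```python
import math

def momenta(N, sector):
    """Quantized momenta for a chain of N sites.
    'R'  (Ramond, periodic modes):       k = 2*pi*m / N,        m = 0..N-1
    'NS' (Neveu-Schwarz, antiperiodic):  k = 2*pi*(m + 1/2)/N,  m = 0..N-1
    """
    shift = {"R": 0.0, "NS": 0.5}[sector]
    return [2 * math.pi * (m + shift) / N for m in range(N)]
```

For example, for N = 4 the Ramond set contains k = 0 and k = π, while the Neveu-Schwarz set contains neither; the Ramond modes satisfy e^{ikN} = 1 and the Neveu-Schwarz modes e^{ikN} = −1.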
The fact that here you see only phases up to the first harmonic — you don't see higher frequencies like e^{2ik} and so on — is related to the fact that the interaction is between nearest-neighbour spins. You can expect that if, instead of the Ising model, you consider models with next-to-nearest-neighbour interactions and so on, you should find terms with higher frequencies like e^{2ik} for next-to-nearest neighbours. So this is our two-by-two matrix, which can be written in this form, using the basis of the Pauli matrices: it is just a linear combination of σ_x and σ_y. It can also be written in a fancier form, as some function of k times σ_y multiplied by the exponential of a Pauli matrix. Do you know how to see this? Are you familiar with the algebra of the Pauli matrices? Maybe not — we will see it in a moment. But first I would like to emphasize again why this approach is so powerful. We started, as I told you yesterday, with a spin Hamiltonian with 2^N degrees of freedom. Because of the particular interactions of this model, when you write it in terms of the Majorana fermions you can distinguish two sectors, and inside a given sector the Hamiltonian is completely characterized by an antisymmetric matrix of size 2N by 2N. But this matrix is not arbitrary: it has a particular structure related to the fact that the model is translationally invariant, and because of that it can be written in this Fourier form. So all the information about the model is contained in just a two-by-two matrix. This is extremely powerful, and it is also extremely exceptional: it depends on this particular model.
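To see concretely that a single two-by-two matrix carries the whole spectrum, here is a sketch with one possible convention for its entries, E(k) = sin k · σ_x + (h − cos k) · σ_y. The exact phases on the board may differ; the eigenvalues ±ε(k) do not:

```python
import cmath, math

def epsilon(k, h):
    """Single-particle dispersion of the transverse-field Ising chain."""
    return math.sqrt(1 + h * h - 2 * h * math.cos(k))

def E_matrix(k, h):
    """E(k) = a*sigma_x + b*sigma_y with a = sin k, b = h - cos k
    (an assumed convention): only first harmonics appear, reflecting the
    nearest-neighbour interaction."""
    a, b = math.sin(k), h - math.cos(k)
    return [[0.0, a - 1j * b], [a + 1j * b, 0.0]]

def eigvals_traceless_2x2(M):
    """Eigenvalues of a traceless 2x2 matrix are +/- sqrt(-det M)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam = cmath.sqrt(-det)
    return -lam, lam
```

Since a² + b² = sin²k + (h − cos k)² = 1 + h² − 2h cos k, the two eigenvalues are exactly ∓ε(k).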
So, you can find a class of models with these properties, but it is an exceptional class. In other words, if you put your favourite interaction here, in general you cannot do anything like this — you cannot solve the model this way. As a matter of fact, you could discuss these kinds of relations in the tutorials. This is just another way to write the same matrix. Now, as for the chain of harmonic oscillators, the trick to diagonalize the model is to find operators that are linear combinations of our fermions — in this case, Majorana fermions — and solve the equation [H_±, b†_k] = ε(k) b†_k. Why? Because if you are able to find these operators, then, as I showed you yesterday, you can construct any eigenstate starting from a given eigenstate of the model by applying b†_k or b_k: if you apply b†_k you increase the energy by ε(k), if you apply b_k you decrease it by ε(k). So if you are able to find these operators, you are able to diagonalize the Hamiltonian. We write them as linear combinations b†_k = Σ_l (B_k)_l a_l, with coefficients that we called capital B, the sum running over all the Majorana components. Then we plug this expression into the commutator with H_±, and using the algebra of the Majorana fermions one can show that the requirement is equivalent to a matrix equation: writing b†_k as B_k · a, where a is the vector of the Majorana fermions, we have to solve H_± B_k = ε(k) B_k.
Then, using this structure: because H_± is a block-circulant (or block-anti-circulant) matrix, the eigenvectors are essentially known. We have an ansatz: the element 2l − 1 + j of the vector B_k is proportional to e^{ikl} times a small two-component vector v_j(k), where, as before, l goes from 1 to N and j can be 0 or 1. We used this ansatz for the eigenvectors, and we checked that if you plug it in, these are indeed eigenvectors of the matrix H_±, provided that v satisfies an equation of its own: the two-by-two matrix E(k) applied to the small vector v(k) must equal ε(k) v(k). I repeat, this is amazing, because we have reduced a very complex problem to finding the eigenvalues and eigenvectors of a two-by-two matrix. Amazing. So, what about those eigenvalues and eigenvectors? It is convenient to write the matrix in a particular form, which we will see in a moment. To recap the chain: from v(k) you get B_k, from B_k you get b†_k, and from ε(k) you get the spectrum — this is for H_+ and H_−. They have this nice structure, but then, a posteriori, we must be able to identify the correct subset of eigenvalues and eigenvectors that are actually also eigenvalues and eigenvectors of the quantum Ising model. By the way, we won't do that calculation exactly; I already told you more or less the structure: depending on whether h is smaller or larger than one, you find that, for h larger than one, the ground state of the model is in the Neveu-Schwarz sector.
Actually the ground state is always in this sector, but in the thermodynamic limit for h larger than one it simply remains there, while if h is smaller than one the ground state becomes a quantum superposition of the ground states of the two sectors. This is essentially the only place where the sectors play a role; there are no other effects in this problem. Okay, so what about the solution of this two-by-two equation? If you want, you can solve it by your favourite method; I choose mine, which is to write E in this way: E = −ε(k) σ_y e^{iθ_k σ_z}, for a suitable angle θ_k. One can then notice that, because σ_z anticommutes with σ_y, you can write this as −ε(k) e^{−iθ_k σ_z/2} σ_y e^{iθ_k σ_z/2}. In other words, I am saying that E is nothing but some function of k times a spin operator pointing in a direction given by this angle: we are rotating σ_y by the angle θ_k. This means that the eigenvalues of E, divided by −ε(k), are plus or minus one — the same eigenvalues as the Pauli matrices. So what about the eigenvectors? Well, we know the eigenvectors of σ_y: they are (1, i)/√2, with eigenvalue plus one, and (1, −i)/√2, with eigenvalue minus one. We can use this to write the eigenvectors of E: we find that applying E to e^{−iθ_k σ_z/2} times an eigenvector of σ_y gives back the same vector, times the corresponding eigenvalue of the rotated σ_y.
This is equal to ∓ε(k) times e^{−iθ_k σ_z/2} applied to that eigenvector of σ_y: the eigenvector with eigenvalue plus one comes with one sign, and if you consider the other eigenvector you find the opposite sign. So the eigenvectors of this two-by-two matrix are just given by the matrix e^{−iθ_k σ_z/2} applied to the eigenvectors of σ_y — a compact way to write them. I don't want to grind through the two-by-two problem explicitly; you can. (Question: is this a matrix? — No, this is a matrix applied to an eigenvector, so it is a vector.) If you want to see what it means explicitly: because σ_z is diagonal, with entries one and minus one, e^{−iθ_k σ_z/2} is nothing but the diagonal matrix with entries e^{−iθ_k/2} and e^{iθ_k/2}. Applied to the vector (1, ±i)/√2 it gives (1/√2)(e^{−iθ_k/2}, ±i e^{iθ_k/2}). This is just a trick, a simple way to write the eigenvalues and eigenvectors; you can compute them this way. Okay, so now we need the eigenvalues and eigenvectors of this matrix E such that ε(k) is positive — non-negative — because, when we looked for the solution, we wanted the energy to increase when applying a creation operator. So here we have to pick the right eigenvector. Which one is it? The matrix was defined with ε(k) positive: as I defined it, ε(k) is the square root of 1 + h² − 2h cos k, so it is non-negative.
In order for the eigenvalue to be positive — given the minus sign in E = −ε(k) σ_y e^{iθ_k σ_z} and the fact that ε(k) is a non-negative function — we need to consider the eigenvector of σ_y with negative eigenvalue. That is the right eigenvector to use to construct the operator b†. So what we find is that v(k) is equal to e^{−iθ_k σ_z/2} applied to (1, −i)/√2, that is, explicitly, v(k) = (1/√2)(e^{−iθ_k/2}, −i e^{iθ_k/2}) — up to an overall phase, which is pure convention. So we know v(k). And notice that I am writing v(k) independently of whether we are considering one sector or the other, because the structure of the result is identical; we then just have to plug the correct momenta k into this expression. Generally, when you deal with exponentials of matrices, you have to expand them. When you expand this kind of exponential of a Pauli matrix, you use the fact that the square of a Pauli matrix is the identity: the exponential becomes e^{iασ_z} = cos α + i σ_z sin α. The even part of the series is proportional to the identity, and the odd part is proportional to σ_z.
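Here is a numerical check of this rotation trick, in the convention E(k) = sin k · σ_x + (h − cos k) · σ_y used as an assumption earlier. The Bogoliubov angle is then fixed by cos θ_k = (cos k − h)/ε and sin θ_k = sin k/ε, and the rotated σ_y eigenvector with eigenvalue minus one is indeed the positive-energy eigenvector:

```python
import cmath, math

def epsilon(k, h):
    return math.sqrt(1 + h * h - 2 * h * math.cos(k))

def theta(k, h):
    """Bogoliubov angle: cos(theta) = (cos k - h)/eps, sin(theta) = sin k/eps
    (a convention consistent with the assumed form of E(k))."""
    return math.atan2(math.sin(k), math.cos(k) - h)

def v_plus(k, h):
    """v(k) = exp(-i theta/2 sigma_z) (1, -i)/sqrt(2): candidate eigenvector
    of E(k) with eigenvalue +eps(k), up to an overall phase."""
    th = theta(k, h)
    s = 1 / math.sqrt(2)
    return [s * cmath.exp(-1j * th / 2), -1j * s * cmath.exp(1j * th / 2)]
```

Applying E(k) to v_plus(k, h) returns ε(k) times the same vector, which is exactly the statement that the rotated σ_y eigenvector diagonalizes the two-by-two problem.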
So when you move this exponential to the other side of σ_y, you only change the part proportional to σ_z — you flip its sign, which corresponds to flipping the sign of θ_k. You can always move the operator through σ_y at the price of that sign change. Okay. So this is our eigenvector. From v(k) we get B_k, and from B_k we get b†_k. Let me write the final result: b†_k = (1/√N) Σ_l e^{ilk} [e^{−iθ_k/2} a_{2l−1} − i e^{iθ_k/2} a_{2l}]/√2 — check the phases, it is always possible that there is a typo. The prefactor is just a normalization: all the equations here are invariant under multiplication by a constant, and the normalization I chose is such that {b_k, b†_q} = δ_{kq}, the standard anticommutation relation. And the excitation energies are exactly equal to the function I wrote before, ε(k) = sqrt(1 + h² − 2h cos k): this is what you find when you plug b and b† back in and work it out. (Question: is the σ_y here related to the interaction in the y direction? — No, it is not for that reason. This σ_y is related to the anticommutation relations more than anything else: it appears because σ_y is the antisymmetric Pauli matrix. It has nothing to do with the interaction.) Okay. Finally, we can invert this relation; it is easy, you just have to invert the Fourier series.
You find that a_{2l−1} = (1/√N) Σ_k e^{−ilk} e^{iθ_k/2} (b†_k + b_{−k})/√2 and a_{2l} = (1/√N) Σ_k e^{−ilk} i e^{−iθ_k/2} (b†_k − b_{−k})/√2. This is the inverse relation for the Majoranas. Okay. What happens if you now come back to our Hamiltonian and write it in terms of these operators? I think you won't be surprised to find that H_± is written as the sum over the momenta of the particular sector of ε(k) (b†_k b_k − 1/2) — the constant term depends on conventions. In the end we have been able to find fermions such that the Hamiltonian is completely decoupled, mode by mode. And indeed you can check, given this Hamiltonian, that the commutator of H_± with b†_k equals ε(k) b†_k: it is an immediate calculation using just the algebra {b_k, b†_q} = δ_{kq}. So this is our dispersion relation — let's see how it looks. For example, let's assume h larger than one. What is the shape of the dispersion? For k = 0 it is ε(0) = sqrt(1 + h² − 2h) = |h − 1| — the absolute value of h minus one, which for h larger than one is h − 1. For k = π it is ε(π) = h + 1. So we have h − 1 at the centre and h + 1 at the edge, and the curve is symmetric under the transformation k → −k. This is our dispersion relation for h larger than one, more or less. Then let's consider the case h smaller than one: for k = 0 we get ε(0) = 1 − h, and for k = π again h + 1. The two dispersion relations are very similar, and there is a reason why.
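These band-edge values are immediate from the formula for ε(k); here is a quick numerical check. The last remark in the comment, ε_h(k) = h · ε_{1/h}(k), is my compact rephrasing of why the dispersions at h and 1/h look alike — an aside, not something derived on the board:

```python
import math

def epsilon(k, h):
    """Dispersion of the transverse-field Ising chain."""
    return math.sqrt(1 + h * h - 2 * h * math.cos(k))

def gap(h):
    """Minimum of eps(k), reached at k = 0: eps(0) = sqrt((1-h)^2) = |h - 1|."""
    return epsilon(0.0, h)

# Band edges: eps(0) = |h - 1| (the gap), eps(pi) = h + 1; eps is even in k.
# Aside (my rephrasing of the duality): eps_h(k) = h * eps_{1/h}(k), so the
# dispersions at h and 1/h differ only by an overall scale.
```

So the spectrum is gapped for h ≠ 1, with gap |h − 1| closing only at h = 1, and the band top sits at ε(π) = h + 1 in both phases.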
They are so similar because there exists a duality transformation that maps h smaller than one into h larger than one. If you want, instead of these matrices you can define other operators with the same algebraic properties — they are called disorder operators — whose effect is to invert h: if you start from h, in the new variables the coupling becomes 1/h. I don't want to show this in detail; I am just telling you that there is a reason why these two dispersion relations look so similar to each other. Okay, so we have considered h larger than one and h smaller than one. There is one case that we did not treat, and it is the case h equal to one. What happens if h is equal to one? If you replace h by 1 here, you find ε(k) = sqrt(2(1 − cos k)), and this is equal to 2|sin(k/2)|. It is funny that we find the same dispersion relation as for the chain of coupled harmonic oscillators that we studied before. So what does the plot of the dispersion look like? For k = 0 it is equal to 0; for k = π, sin(k/2) is equal to one, so it becomes equal to 2. So there is a big difference between the dispersion relations for h different from one and for h equal to one: here there are excitations with zero energy. (Question: h is the magnetic field? — Yes.) Moreover, you can see that this dispersion relation is not analytic at k = 0 — by analytic I mean smooth, with continuous derivatives; you see that the first derivative is discontinuous there. So there is a pathology in the dispersion, and indeed this corresponds to the quantum phase transition, the point where there is a transition from the paramagnetic to the ferromagnetic phase. As I told you before, you can find a mapping between one and the
other, but you have to introduce some non-local operators — it is a non-local mapping, via the disorder operators. The two phases are physically different, but mathematically there is a map. Okay. I don't know if you are familiar with quantum phase transitions. You have certainly heard about classical phase transitions, and you know that when you are close to the critical temperature, correlation functions generally decay as power laws — if you have a continuous, second-order phase transition — and the physical quantities satisfy scaling relations and so on. In the quantum case you have a very similar situation: again this is a second-order phase transition, and the correlation length diverges, as in the classical case. A divergent correlation length means that your correlation functions no longer decay exponentially but as power laws. So if you compute correlations here — meaning, for example, the expectation value of an operator like this at two points separated by n — in the limit of large n it decays exponentially, like e^{−n/ξ} with some correlation length ξ, and ξ is finite for h different from one. But when h is equal to one, for the same object ξ diverges, the power-law corrections take over, and you find power-law decay — and the same is true for other correlations. (Question: what is the order parameter of this model?) — Good question. You should find an operator whose expectation value is non-zero in the ferromagnetic phase. Magnetization in which direction? In the x direction, because we know
that for h larger than one the ground state is symmetric under the spin flip — the Hamiltonian commutes with Π^z — while in the ferromagnetic phase you have to consider linear combinations of the two ground states, so the ground state is no longer symmetric under this transformation. And we know that for h equal to zero the ground state is simply all spins pointing in the x direction, and if you compute the expectation value of σ^x you find either plus one or minus one. For h larger than one, instead, it is equal to zero by symmetry. So indeed you are right. Here is the phase diagram of the model, with the transition at h equal to one. Now let's consider the expectation value of σ^x. What do we find? We saw, the first day we started talking about this, that for h equal to zero the expectation value of σ^x is one or minus one, while for h larger than one, because the ground state is symmetric, it is equal to zero. This represents the fact that the symmetry can be broken in one of the two directions, positive or negative: this is our order parameter. Computing the order parameter is, I would say, one of the goals in this model, and it is essentially the only case, if you consider ground state properties, where you really have to take into account that the model has two sectors — you cannot just ignore them. This is the complication: as I told you yesterday, the ground state of the model for h smaller than one is built from the two ground states in the two sectors, something like the ground state in one sector plus or minus the ground state in the other, over √2. This is our ground state of the Ising model for h smaller than one. So now let's assume we want to
compute the expectation value of σ^x_l. What do you have to do? You have to compute the expectation value in that state: (⟨GS_R| ± ⟨GS_NS|)/√2, then the operator σ^x at some position l, then (|GS_R⟩ ± |GS_NS⟩)/√2. This is the quantity we would like to compute. Now, this operator is odd under the symmetry: if you conjugate it, Π^z σ^x Π^z = −σ^x. It is odd under the symmetry — and this is exactly why it is a good order parameter. So when you expand the expression, you find that the contributions from within a single sector are equal to zero, and the only non-zero contributions come from the overlaps between states in one sector and states in the other: in the end this is equal to ±⟨GS_NS| σ^x_l |GS_R⟩. You can imagine that this is more complicated. We have seen that everything is relatively simple when you remain in the same sector: you can apply a Wick theorem, as for bosons, and easily compute correlations. These, instead, are the kind of operators that are very hard to compute, because you have to take into account the presence of the two sectors. But now we want to use some physics to rewrite this expectation value in a simpler way — because, as a matter of fact, so far you have no idea how to compute this kind of overlap, right? It would be very hard even for me to compute it without using some trick. So how can we do it? We use a physical argument. Let's say we are interested in the expectation value of one operator sitting at a site l. Physically, you should expect, from what is called the cluster
decomposition property, that if I compute the expectation value of this operator times the same operator very far away, the result should factorize when the distance becomes sufficiently large. What I mean is that, physically, you should expect ⟨GS| σ^x_l σ^x_{l+n} |GS⟩, in the limit n going to infinity, to approach (⟨GS| σ^x_l |GS⟩)², using cluster decomposition. And now we are happy. Why? Because this operator is even under the transformation: Π^z σ^x_l σ^x_{l+n} Π^z = +σ^x_l σ^x_{l+n}. This means that this expectation value can be computed within a given sector, Ramond or Neveu-Schwarz, and I can tell you that every time you are in this situation, in the thermodynamic limit there is no distinction between the two sectors: you always find the same result. So, as a matter of fact, this expectation value is equal to the expectation value of σ^x_l σ^x_{l+n} in the ground state of, for example, the Ramond sector — or the Neveu-Schwarz sector, it is completely irrelevant. You can forget about the complication that there are two sectors, and you can even forget about the fact that the Hamiltonian of this model is not a quadratic Hamiltonian: when you consider an operator with this property, you can compute its expectation value in the ground state of one of the quadratic sector Hamiltonians. Why can you do this — why do the cross terms drop out? Let's do the same exercise with σ^x_l σ^x_{l+n}: this operator commutes with Π^z. So if you now consider its matrix element between one sector and the other — because the two states differ by the eigenvalue of Π^z — between the two
sectors the matrix element vanishes: I can insert Π^z and act with it on the left, obtaining plus one, or on the right, obtaining minus one, so the matrix element is equal to minus itself and is therefore zero. What survives in the state (|GS_NS⟩ ± |GS_R⟩)/√2 is then the average of the expectation values in the two sectors — and I am telling you that these give exactly the same result in the thermodynamic limit. So all the complications of the Ising model with its two sectors really only matter for understanding the ground state physics and the order parameter; apart from that, the sectors are irrelevant for ground state properties — otherwise I would have skipped them completely. Questions? (Student: is this a special property of quadratic Hamiltonians?) No — cluster decomposition is a general property. But notice that the fact that the model can be paramagnetic or ferromagnetic is related to the sectors, while the two dispersion relations are very similar to each other, so there is no way to read the phase off the spectrum alone. Okay. I made a big claim, saying that it is easy to compute this kind of correlations. Why is that the case? You may be wondering why it is so simple to compute spin-spin correlations here. (And to be clear, we are talking about ordinary correlation functions, not quantum-information measures of correlation.) The reason is the Wick theorem. You remember the Wick theorem for bosons, which allowed us to rewrite the expectation values of products of linear combinations of bosons in terms of expectation values of pairs of bosons. There is something similar for fermions — actually something even more powerful, because when you start playing with these expectation values, you realize that you can usually recast them in terms of determinants, and a lot of information is known about determinants, starting with their asymptotic behaviour. So we
will see that the Wick theorem is extremely useful in this problem and allows us to obtain truly exact results, also in the thermodynamic limit, also for correlation functions of operators far away from each other. Okay. So we mapped the Ising Hamiltonian onto the quadratic Hamiltonians H_±. The model is defined on a chain, and the observables are the spins, so in general we would like to compute expectation values of observables like this one, σ^z_l, the magnetization in the transverse direction; or the correlation ⟨σ^z_l σ^z_{l+n}⟩ between σ^z operators at different positions; or what is called the connected correlation, where we subtract the product of the one-point functions; and, as we discussed before, we could be interested in observables like ⟨σ^x_l σ^x_{l+n}⟩. Now, we have seen that we can always somehow reduce the computation of such an expectation value to an expectation value in the ground state of one of the two quadratic Hamiltonians, Ramond or Neveu-Schwarz. So let's just assume that we are already in the ground state of the Hamiltonian of one of the two sectors, and let's see how to compute these expectation values. Let's start with the first one. If we are in one of the two sectors — these are the ground states of the non-interacting Hamiltonians H_± — then ⟨GS_±| σ^x_l |GS_±⟩ is equal to zero by symmetry. I showed you before how to see this: you insert Π^z on both sides; the operator is odd under this transformation, so you pick up a minus sign, while applying Π^z to the state gives either plus or minus one. You find that the expectation value equals minus itself — that is to say, it is equal to zero.
So, this one vanishes; what about the other observables? We will see later how to compute the x-x correlation; let's start with sigma^z. The first step is to rewrite the spin operator in terms of the Majorana fermions, and then finally in terms of the Bogoliubov fermions. We remember the transformation: the expectation value of sigma^z_l is the expectation value of i a_{2l} a_{2l-1}. Now, I expressed before the Majorana fermions a_{2l} and a_{2l-1} in terms of the Bogoliubov fermions; I did it when I inverted the relation. So, in order to compute this expectation value, you rewrite the a's in terms of b dagger and b using that expression. Why do you do that? Because in the end you find some linear combination: an operator of the form "sum over k of b dagger_k times something, plus b_k times something", multiplied by a similar sum over q. As for the chain of harmonic oscillators, when we apply the annihilation operator to the vacuum we find zero, so those terms go to zero, and likewise the creation operator acting towards the left annihilates the state. So you are left with the expectation value of something like b_k b dagger_q. You use the anti-commutation relations: b_k b dagger_q equals the anti-commutator of b_k and b dagger_q minus b dagger_q b_k, where the minus sign comes from the anti-commutation relations. The anti-commutator is equal to delta_{kq}, times the corresponding coefficient, and the expectation value of b dagger_q b_k is equal to zero because b_k annihilates the vacuum. So what I mean is: every time we have just two Majorana fermions, what we have to do is write the Majorana fermions in terms of the b dagger_k and b_k, using the relations I gave you, and then the computation is simple, exactly as it was for the chain of harmonic oscillators. Okay, and well, what about
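The only algebraic input in these manipulations is the Majorana anti-commutation relation {a_m, a_n} = 2 delta_{mn}. A small self-contained check, under the conventions I assume here (Jordan-Wigner string c_l = (prod_{j<l} sigma^z_j) sigma^-_l, and Majoranas a_{2l-1} = c_l + c_l^dagger, a_{2l} = i(c_l - c_l^dagger); the lecture's sign conventions may differ):

```python
# Verify the Majorana algebra {a_m, a_n} = 2 delta_{mn} on a small chain,
# building the operators explicitly as Pauli-matrix tensor products.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
sm = (sx - 1j * sy) / 2                     # sigma^- (lowering operator)

def kron_chain(ops):
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

N = 4                                        # illustrative chain length
# Jordan-Wigner fermions: c_l = (prod_{j<l} sigma^z_j) sigma^-_l
c = [kron_chain([sz] * l + [sm] + [I2] * (N - 1 - l)) for l in range(N)]

a = []
for l in range(N):
    a.append(c[l] + c[l].conj().T)           # a_{2l-1} in the lecture's numbering
    a.append(1j * (c[l] - c[l].conj().T))    # a_{2l}

for m in range(2 * N):
    for n in range(2 * N):
        anti = a[m] @ a[n] + a[n] @ a[m]
        target = 2 * np.eye(2**N) if m == n else 0 * anti
        assert np.allclose(anti, target)
print("Majorana algebra verified")
```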
operators like sigma^z_l sigma^z_{l+n}? We express again each sigma^z in terms of the Majorana fermions; now we have four Majorana fermions, and hence four b's. Then we again have to move all the annihilation operators to the right and the creation operators to the left, this is always the procedure, and we obtain the result. But it starts being complicated: with four operators it starts being a mess. And what about sigma^x_l sigma^x_{l+n}? Let's write it in terms of the fermions. Do you remember that sigma^x_l was equal to the product over j from 1 to l minus 1 of sigma^z_j, times the Majorana fermion a_{2l-1}? Similarly, sigma^x_{l+n} is the product over j from 1 to l plus n minus 1 of sigma^z_j, times a_{2(l+n)-1}. Now I bring the two strings together: the sigma^z_j with j from 1 to l minus 1 appear in both strings and square to the identity, so what is left is the product over j from l to l plus n minus 1. Expressing each sigma^z_j as i a_{2j} a_{2j-1}, I find a_{2l-1} times the product over j from l to l plus n minus 1 of i a_{2j} a_{2j-1}, times a_{2(l+n)-1}. So you see there is a difference between this operator and the z-z one. They look very similar: both are two-point functions, just lying along different directions, z versus x. But when you write them in terms of the fermions, you find that the z-z correlation is an expectation value of four fermions, while the x-x correlation is an expectation value of a larger number of fermions, a number that depends on the distance. So while in the first case, if I give you twenty minutes, you can compute the expectation value by going back to the variables b dagger_k, b_k, if I ask you to compute the x-x correlation for n equal to 20, I think you are not able to do it.
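The reduction of the x-x correlator to a string of 2n Majorana operators can be checked as an operator identity on a small chain. Under the conventions assumed in the previous snippet (c_l = (prod_{j<l} sigma^z_j) sigma^-_l, a_{2l-1} = c_l + c_l^dagger, a_{2l} = i(c_l - c_l^dagger); with a different Jordan-Wigner convention the signs may differ), the product of strings collapses to sigma^x_l sigma^x_{l+n} = prod (−i a_{2j} a_{2j+1}) over the n bonds between the two sites:

```python
# Operator-identity check: sigma^x_l sigma^x_{l+n} equals a string of
# 2n Majorana operators (0-indexed sites below).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
sm = (sx - 1j * sy) / 2

def kron_chain(ops):
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

N = 4
c = [kron_chain([sz] * l + [sm] + [I2] * (N - 1 - l)) for l in range(N)]
a = []
for l in range(N):
    a.append(c[l] + c[l].conj().T)           # a_{2l-1}
    a.append(1j * (c[l] - c[l].conj().T))    # a_{2l}

def pauli_x(l):
    return kron_chain([I2] * l + [sx] + [I2] * (N - 1 - l))

l, n = 0, 3
lhs = pauli_x(l) @ pauli_x(l + n)
rhs = np.eye(2**N, dtype=complex)
for t in range(l, l + n):                    # one factor per bond
    rhs = rhs @ (-1j * a[2 * t + 1] @ a[2 * t + 2])
print(np.allclose(lhs, rhs))                 # True: 2n Majoranas, n = distance
```

Note how the number of Majorana factors on the right grows linearly with the distance n, which is exactly why the direct computation becomes hopeless.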
Well, maybe it depends on you, but I doubt that you can do it in a direct way. And yet this is just a physical correlation function, and we would like to compute it: as we are able to compute the two-point function in the z direction, we would like to compute the two-point function in the x direction. Moreover, we know that in order to detect symmetry breaking in the ground state of this model, we need this correlation in the limit n goes to infinity, and there the number of Majorana fermions involved grows without bound. So how do we do this? Luckily, the Hamiltonian is quadratic, what is also called non-interacting. Non-interacting means, again, that you can express all the expectation values of an arbitrarily long string of fermions in terms of the expectation values of pairs of them; this is again called Wick's theorem. But the Wick theorem we need now is slightly different from the bosonic case, because we have to take into account the different commutation relations of fermions: now they anti-commute. So what is the theorem? Suppose you have operators A_j that are linear combinations of fermions, say of c_n and c dagger_n, where the c's satisfy the canonical algebra: the anti-commutator of c_n and c dagger_l is delta_{nl}, and the anti-commutator of c_n and c_l is zero. If you are interested in the expectation value on the vacuum of the product of the A_j, with j from 1 to 2w, this is equal to the sum over all pairings (i_1, j_1), ..., (i_w, j_w), with i_p smaller than j_p in each pair, of the product of the pair expectation values, each term weighted by minus one to the power of the permutation P that takes the sequence 1, 2, ..., 2w to the sequence i_1, j_1, ..., i_w, j_w. Okay, this is the theorem. Now let's just understand what it means, because it is easier to see how to apply it than to parse what I wrote there. Do you remember what we did with the bosons? We had some expression like the expectation value of A_1 A_2 A_3 A_4
A_5 and so on, and the theorem basically said that you can write this as the expectation value of the pair A_1 A_2 times the expectation value of the rest, plus A_1 A_3 times the rest, plus A_1 A_4 times the rest, and so on. This was the bosonic version of Wick's theorem. Now, what changes? Well, fortunately it is very easy to take into account the anti-symmetry of the operators, because we just have to alternate the signs in this expansion: the first term comes with a plus, the second with a minus, the next with a plus, and so on. And then the idea is that you keep applying this result recursively to the remaining factors, and you find the full expansion. Written this way, you just say: okay, fine, it's like the bosonic case. But how can we deal with a large number of operators? It still looks like a terrible expression. It is not so terrible, because one can show that the square of the expectation value of this string of operators can be written as a determinant. It is the determinant of the anti-symmetric matrix whose upper triangular part is built from the pair expectation values: the entry (1,2) is the expectation of A_1 A_2, the entry (1,3) is A_1 A_3, the entry (1,4) is A_1 A_4, then (2,3), (2,4), and so on; the lower triangular part is minus the same, so that the matrix is anti-symmetric. You take the determinant of this matrix, and it is equal to the square of the expectation value of the string. If you are also interested in the sign, the correct mathematical quantity to compute is called the Pfaffian, which is defined only for anti-symmetric matrices; but okay, this is not so important, because generically it is enough to compute the square of a correlation, so the sign does not matter.
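The alternating-sign pairing just described can be written directly as a recursion, and the relation to the determinant checked numerically. A minimal sketch (the recursive function below implements the textbook expansion of the Pfaffian along its first row; the random matrix stands in for a matrix of pair contractions):

```python
# Fermionic Wick expansion as a Pfaffian: expand along the first row
# with alternating signs, pairing index 0 with each later index.
import numpy as np

def pfaffian(M):
    """Recursive Pfaffian of an even-dimensional antisymmetric matrix."""
    n = M.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        # Pair index 0 with index j, sign (-1)^(j-1), recurse on the rest.
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * M[0, j] * pfaffian(M[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A - A.T                          # antisymmetric "contraction matrix"
print(np.isclose(pfaffian(M) ** 2, np.linalg.det(M)))   # True: Pf^2 = det
```

The final line is exactly the statement from the lecture: the determinant of the antisymmetric contraction matrix equals the square of the Wick-expanded expectation value, while the Pfaffian itself also carries the sign.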
But now, what is nice here is that we are recasting everything in terms of the determinant of a matrix. And what generally happens is that, when you compute this kind of expectation value, there is some symmetry in the matrix: it is a structured matrix. Structured matrices are studied by mathematicians, and so in some simple cases we are able to compute the limiting value of the determinant when the number of operators is large and approaches infinity. In particular, we could apply this Wick theorem to the computation of the expectation value of sigma^x_1 sigma^x_{1+n}, the order parameter correlation. And what do we find? We find that the matrix is very particular: it is a very well known matrix, a Toeplitz matrix (or, to be precise, a block Toeplitz matrix, but that is not important now). What is important is that the limit of the determinant as n goes to infinity is known for this kind of structured matrix, and you realize that this limit is zero for h larger than one. You apply the same mathematical result in the case h smaller than one, and you find something which is different from zero. So this is how, mathematically, you prove that the symmetry is broken: we found two operators that in the limit of infinite distance do not decouple, the correlation remains finite. This means that the expectation value of the single operator should be different from zero; but this operator was odd under the symmetry. So in this way you prove that the symmetry is broken in the ground state. Okay, this was just to tell you how you can compute correlations in the Ising chain; we don't have time to compute more of them here, but we will apply these results in some simple cases when we consider dynamics in the Ising chain. We have a few minutes left, and I want to spend these few minutes going back to something that we considered in the first lecture.
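Before moving on, the symmetry-breaking criterion just stated can be checked on a small chain. A hedged numerical sketch, assuming again H = -sum sigma^x sigma^x - h sum sigma^z with periodic boundaries; the chain length N = 10 and distance n = 5 are my own choices, and the known thermodynamic-limit plateau (1 - h^2)^(1/4) for h < 1 is only approached approximately at this size:

```python
# For h < 1 the correlator <sigma^x_1 sigma^x_{1+n}> saturates at a
# finite value (long-range order); for h > 1 it decays with distance.
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(op, j, N):
    mats = [op if k == j else I2 for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xx_correlator(h, n, N=10):
    """Ground-state <sigma^x_0 sigma^x_n> by exact diagonalization."""
    H = np.zeros((2**N, 2**N))
    for j in range(N):
        H -= site_op(sx, j, N) @ site_op(sx, (j + 1) % N, N)
        H -= h * site_op(sz, j, N)
    gs = np.linalg.eigh(H)[1][:, 0]
    return gs @ site_op(sx, 0, N) @ site_op(sx, n, N) @ gs

c_ord = xx_correlator(0.5, 5)   # ordered phase: close to (1 - 0.25)**0.25
c_dis = xx_correlator(2.0, 5)   # disordered phase: small, decaying value
print(c_ord, c_dis)
```

This is of course the brute-force check; the point of the Toeplitz-determinant machinery is that it gives the n to infinity limit exactly, which no finite diagonalization can.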
So, about the entropy and the reduced density matrices: I want something to be clear. We now know how to define a subsystem in terms of spins: we just say that our subsystem consists of the spins in a given region, and so on. But now we have fermions, and we could instead define our subsystem as the set of fermions with particular momenta, or at particular positions, in the fermionic space. What do I mean by this? We could, for example, consider a space spanned by fermions that can occupy two different states, state one and state two. So let's assume we have just two fermionic modes, c_1 and c_2. What is the Fock space for these fermions? We can start from the vacuum, defined by c_1 applied to the vacuum equal to zero and c_2 applied to the vacuum equal to zero. Then we have the states c dagger_1 applied to the vacuum, c dagger_2 applied to the vacuum, and c dagger_1 c dagger_2 applied to the vacuum. So this is a four-dimensional space for two fermions. Okay, so I could ask you, for example: given a state psi, which is the reduced density matrix corresponding to mode one? What do we have to do? There are, for example, two ways, and I already discussed both. One is to compute the expectation value of all the operators that you can construct with the fermionic operators of mode one, which are c_1, c dagger_1, c dagger_1 c_1, and the identity: all the operators that act non-trivially only on mode one. So, given psi, if I want the reduced density matrix corresponding to mode one, I can write it, using the fact that the density matrix has trace equal to one, as a linear
combination of these operators: the density matrix is an operator in this space, so in general it is a multiple of the identity, fixed by the normalization of the trace, plus lambda_1 c_1 plus lambda_2 c dagger_1 plus lambda_3 c dagger_1 c_1. The notation may be a bit confusing, but the way to fix the coefficients is just to compute the expectation values of the various operators in the state. For example, the expectation value of c_1 for a given density matrix is the trace of rho times c_1, so you have to compute quantities like the trace of c_1, the trace of c_1 squared, the trace of c dagger_1 c_1 c_1, and so on. Okay, but how can you compute these traces? First of all, here we have c_1 squared: how much is it? It is zero, because you cannot have two fermions in the same state; it comes from the algebra of the fermions. So you realize there is no contribution from that term. Similarly, c dagger_1 squared is equal to zero. What about terms involving c_1 c dagger_1? We can use the anti-commutation relations: c_1 c dagger_1 is equal to the anti-commutator of c_1 and c dagger_1 minus c dagger_1 c_1, and the anti-commutator is equal to one. So in the end the traces we still need are of operators like c dagger_1 c_1. How do we compute such a trace? Generally, you have to write the operator in some basis; our basis is the Fock basis above. In principle you should compute the matrix elements of these operators in this basis, and then you compute the trace by summing the diagonal. But the result is easy, and we can guess it, because we remember there has been a transformation, the inverse Jordan-Wigner transformation if you want: c dagger_1 corresponds to sigma plus, so it can be represented by the matrix sigma plus, which is sigma x plus i sigma y over two, and
the operator c_1 c dagger_1, which is one minus the number operator, is equal to one if there is no fermion and equal to zero if there is a fermion; so it can be represented by the projector on the empty state, and its trace is easy to read off. On the other hand, the trace of c_1 itself, because it is written in terms of the Pauli matrices sigma x and sigma y, which are traceless, and not in terms of the number operator, is equal to zero. So finally, if you compute everything, you fix the coefficients, for example lambda_2, one by one. Anyway, what I mean is that you can apply the same ideas that we discussed for the spin case. When we talk about fermions, you have to construct your basis of fermionic states; then the easiest way is to represent your operators on this basis, to find a representation in terms of Pauli matrices, for example. Then you can easily compute all the traces, and you have constructed the density matrix. Analogously, you can trace over the other degrees of freedom, and the calculation is essentially the same: you must be able to write a complete basis of the environment, of the rest of the system, and then you compute the partial trace. Maybe this is not clear now, but I hope that it will be, and you will see this also in the tutorials, so that you will be able to compute the density matrices also in the Fock space, also when we are dealing with fermions and not just spins. Okay.
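The two-mode example can be made completely explicit. A toy sketch, with a hypothetical state chosen by me for illustration: in the Fock basis {|00>, |10>, |01>, |11>} (occupation numbers n1 n2), take the entangled state (|00> + |11>)/sqrt(2) and reduce to mode 1. Since this is an eigenstate of fermion parity and we only sum terms diagonal in n2, the mode-wise partial trace below involves no extra fermionic signs.

```python
# Reduced density matrix of one fermionic mode out of two, computed as
# a partial trace in the Fock basis, plus its von Neumann entropy.
import numpy as np

index = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}   # basis ordering

psi = np.zeros(4)
psi[index[(0, 0)]] = 1 / np.sqrt(2)
psi[index[(1, 1)]] = 1 / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over mode 2: sum the blocks diagonal in n2.
rho1 = np.zeros((2, 2))
for n1 in (0, 1):
    for m1 in (0, 1):
        for n2 in (0, 1):
            rho1[n1, m1] += rho[index[(n1, n2)], index[(m1, n2)]]

# Von Neumann entropy of mode 1: maximal for this maximally entangled state.
p = np.linalg.eigvalsh(rho1)
S = -sum(x * np.log(x) for x in p if x > 1e-12)
print(rho1)    # identity/2: mode 1 is maximally mixed
print(S)       # ln 2, about 0.693
```

Equivalently, rho1 here is fixed by the expectation values of the mode-1 operators discussed above: the occupation <n_1> = 1/2 gives the diagonal, and <c_1> = 0 (forced by parity) gives the vanishing off-diagonal.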