Good morning. Today I'm going to talk about gauge fields and entanglement entropy, but first I would like to make a very short comment regarding a question from yesterday's discussion session. Remember, yesterday we discussed two different methods to calculate entanglement entropy. In one of them, we get an expression for the entanglement entropy in terms of correlators. In the other, we get a result for the entanglement entropy in terms of a partition function on a single plane, but for a field that picks up particular phases when it crosses the cut. So the question yesterday was: as both methods can be applied to free fields, there must be a way to connect the two results. And the answer is yes, but to connect them you need to finish the calculation; yesterday I gave you only a partial result. One way is, of course, to carry each calculation to the end and see that the answers match. But that is not a proof that there's really a relation between the two methods. The other way is, for this special case of fields with these phases at the boundary in the plane, to do the explicit calculation of the partition function, plug the result into the formula for the entanglement entropy, and see that you obtain the expression of the entanglement entropy in terms of correlators. I'm not going to give you all the details, but at least the results of the intermediate steps, so you can then complete the derivation. The idea is that these phases for the field can be mimicked by introducing a gauge field. You couple your field to an external gauge field that is pure gauge everywhere except at the boundary. This is like attaching a vortex line at the boundary, so the field acquires these phases when crossing the boundary. The action you have is, in the case of fermions, the one of a free fermion plus some interaction term.
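To make the construction concrete, here is a schematic version in my own notation (the exact coefficients are on the slides): the replica phase across the cut is traded for an external gauge field with a fixed vortex flux,

```latex
Z[\lambda] \;=\; \int \mathcal{D}\bar\psi\,\mathcal{D}\psi\;
  e^{-\int d^2x\, \bar\psi\,[\gamma^\mu(\partial_\mu + i A_\mu) + m]\,\psi},
\qquad
\oint_{\mathcal{C}} A_\mu\, dx^\mu = \arg\lambda,
```

for any small curve C crossing the cut once: A is pure gauge away from the cut, and the vortex line attached to the boundary reproduces the phase lambda picked up by the field.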
Then you have to calculate the partition function for this action in Euclidean space. There's a method to calculate this partition function: you introduce an auxiliary field. This is something people use, for example, when calculating the Casimir effect; when you have to introduce a constraint in your theory, you can do it just by adding auxiliary fields and then integrating over them in order to realize the constraint. In the case of fermions, these auxiliary fields are going to be Grassmann variables, the usual stuff. Also, when you want to apply a gauge fixing, you do the same: you introduce auxiliary fields. So this is the result you get for the partition function; in a moment I'll tell you what lambda is. And this C is the square root we were talking about yesterday. So this is the result for the partition function; the lambdas are just the phases. I don't remember the coefficients, perhaps I have written them here; well, no, I don't have them here, but they are roots of unity, different for scalars than for fermions. Once you have this, there's still an intermediate step, because what you have is an expression for the traces of the powers of the density matrix, and then you have to find the analytic continuation in order to take the n going to 1 limit. The expression for the entropy of the region is this one; this is for scalars, and for fermions what you get is the same, but with a minus one for fermions. So then you plug this expression for the partition function into this expression here; yes, the lambdas are the phases in this expression here. The point is, and that's why we should discuss the analytic continuation, that you have to transform the sum. Do you remember, yesterday we saw that at the end what you get is a sum over these phases: there was a 1 over 1 minus n, and the sum over k goes from 0 to n minus 1.
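For completeness, the standard identity behind doing such sums by contour integration (written here in generic notation, not the slide's): since the lambda_k are n-th roots of unity,

```latex
\frac{1}{1-n}\sum_{k=0}^{n-1} f(\lambda_k)
\;=\;
\frac{1}{1-n}\oint_{\mathcal C}\frac{d\lambda}{2\pi i}\,
\frac{n\,\lambda^{n-1}}{\lambda^{n}-1}\, f(\lambda),
\qquad \lambda_k = e^{2\pi i k/n},
```

because the kernel has unit-residue poles exactly at the roots of unity. Deforming the contour produces an expression analytic in n, which is what allows taking the n going to 1 limit.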
These sums can be done with a contour integral. This is the way you build the analytic continuation, and that's why at the end what you get is an integration over lambda. So if you plug this formula here, what you get is the expression of the entanglement entropy in terms of the correlator, the same expression we get from the real-time approach. This is the way you can connect both methods. I believe there's one missing formula, because this is for fermions, so you have to plug this formula here; you can do the same thing for scalars. So this is the end of the connection between the two methods, and let me start now discussing gauge fields. [Question from the audience.] There's no volume, only the region; yes, the point is that these correlators here are restricted to your region. Yes, it's just the analytic continuation. Note that I'm not saying how many intervals I have here; this is a general expression. The point is, because I'm not solving to the end, that's why I was saying one way is to solve to the end: you define your region, you find the entanglement entropy with one method and with the other one, and you compare the results. That is not what I'm doing here. I'm just taking the general expression for the partition function; I'm not saying I have one, two, or three intervals. No, the point is that for fermions I know how to calculate this partition function. What happens is that with this gauge field you introduce, these vortex lines that mimic the phases, the fermionic current can be bosonized, and the interaction term takes a very simple expression. What you get is two-point functions of vertex operators in a new scalar bosonic theory, and you know how to calculate correlators of vertex operators. This can be done for multi-component sets. For the scalar field, you don't know what you get. There are two things.
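As a check on the endpoint of this connection, the correlator expression for free fermions, S = -tr[C ln C + (1-C) ln(1-C)], with C the two-point function restricted to the region, is easy to evaluate numerically. A minimal sketch with a toy discretization of my own (a critical hopping chain, not the model on the slides):

```python
import numpy as np

# Toy free-fermion chain (my own lattice choice): H = -(1/2) sum c_i^dag c_{i+1} + h.c.
L = 200
H = -0.5 * (np.eye(L, k=1) + np.eye(L, k=-1))
evals, evecs = np.linalg.eigh(H)
occ = evecs[:, evals < 0]          # fill the negative-energy modes (half filling)
C = occ @ occ.T.conj()             # ground-state correlator <c_i^dag c_j>

# Restrict the correlator to the region and diagonalize
region = slice(50, 100)
nu = np.linalg.eigvalsh(C[region, region])
nu = np.clip(nu, 1e-12, 1 - 1e-12)

# Entanglement entropy in terms of correlators
S = -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))
```

The eigenvalues nu of the restricted correlator play the role of occupation probabilities; for this critical chain S grows like (1/3) ln of the interval size, as expected for a c = 1 theory.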
For fermions, if you add a mass, instead of ending in a free bosonic theory, you end in a sine-Gordon theory. So you have to evaluate two-point correlators of vertex operators, but in a sine-Gordon theory. What results is a differential equation, of Painlevé type V, that you can solve, but only for one interval. If you have a multi-component region, then you don't know how to solve the problem. And for the scalar it's even worse, in the sense that even in the massless case you don't have this tool of bosonization, so you don't know how to solve the problem of a multi-component region. For any conformal field theory you know one part of the answer, but there's still some part that depends on the cross ratio that you don't know. Only for a free fermion do you know the complete answer. This is the point. OK, so let's see what happens with gauge fields. The idea is that when we started studying the relation between gauge fields and entanglement entropy, we found that all the results available in the literature were kind of puzzling. First, in the context of black holes, you can find the early result of Kabat. He found a negative contribution to the entanglement entropy, known as the contact term. Then you can find an exact calculation of the logarithmic coefficient for spherical regions done by Dowker, and he again found something which is difficult to understand: the logarithmic coefficient was not given by the Euler anomaly, as expected for spherical regions. We have also done the calculation for a Maxwell field, and we also find a logarithmic coefficient that does not correspond to the Euler anomaly. And in lattice calculations, people have found problems in defining the bipartition of the global Hilbert space. The solution they propose is to extend the lattice, and what they get at the end is that the entropy has two contributions.
One classical contribution coming from the boundary of the region, plus the usual quantum contribution. And they say that the problem in properly finding this bipartition of the Hilbert space is that the excitations in gauge fields are not point-like, but are given by loops. So the idea in studying gauge fields is to try to understand what makes gauge fields so special regarding entanglement entropy. We decided to focus on lattice gauge fields, abelian gauge fields, and we are going to follow an algebraic approach. The references are these here. The last one is the calculation we have done with Leonardo in the cylinder; I believe it is the only such calculation we have in the cylinder for gauge fields. We had previously done the calculations in the cylinder for scalars and fermions. And again, for the cylinder, we know that the logarithmic coefficient has to be given by the anomaly. But what we found with Leonardo is that, again, in the cylinder the result is not the anomaly. If we have time, I would like to show you the results. OK, let's skip the outline, and also the definitions and properties of entanglement entropy. But let me just introduce two measures of information that are going to be useful in what I'm going to talk about: the relative entropy and the mutual information. We know the entanglement entropy is divergent; it has a non-universal character in general, and all the terms in the expansion are non-universal. But this can be solved if you introduce different quantities, such as the mutual information and the relative entropy. In the relative entropy, you have only one region, but two states; this is the definition. And the mutual information is a special case of the relative entropy; this is the definition, and you need two regions. By construction, these two quantities are well-defined and finite in the continuum: the boundary terms get subtracted.
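Since these two quantities do most of the work later, here is a minimal finite-dimensional sketch of both definitions (a toy two-qubit example of my own; the continuum statements are the divergence-subtracted versions):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -tr(rho ln rho)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def relative_entropy(rho, sigma):
    """S(rho||sigma) = tr rho (ln rho - ln sigma): one region, two states."""
    def mlog(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log(np.clip(w, 1e-12, None))) @ v.conj().T
    return float(np.trace(rho @ (mlog(rho) - mlog(sigma))).real)

def mutual_information(rho_ab, dA, dB):
    """I(A:B) = S(A) + S(B) - S(AB): two regions, one state."""
    r = rho_ab.reshape(dA, dB, dA, dB)
    rho_a = np.einsum('ibjb->ij', r)   # partial trace over B
    rho_b = np.einsum('aiaj->ij', r)   # partial trace over A
    return vn_entropy(rho_a) + vn_entropy(rho_b) - vn_entropy(rho_ab)

# Bell pair: I(A:B) = 2 ln 2, and it equals the relative entropy
# between rho_AB and the product of its marginals.
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
I_ab = mutual_information(rho, 2, 2)
```

The last identity, I(A:B) = S(rho_AB || rho_A x rho_B), is the sense in which the mutual information is a special case of the relative entropy.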
So these are two good quantities in the continuum that we are going to use. Let's start defining lattice gauge theories and how to build a gauge invariant operator algebra. We are used to putting scalars and fermions on the lattice: what you do is attach your operators to each site of the lattice. For gauge fields it's a little bit different. The elements of the group, instead of being attached to the sites of the lattice, are attached to the links. You also have the gauge transformation variables, these G variables; this is how you write the gauge transformation law. And in the space of wave functionals, which are functionals of assignments of group elements to all the links, you are interested in distinguishing the ones associated to the physical states. These are the ones which are gauge invariant. With these ingredients, you can define the algebra of physical operators. The natural thing to do is to associate an operator L to these G variables; this operator acts in this way. And then you can complete the algebra associating an operator also to these elements U; this is a kind of coordinate operator. The idea is that with these two operators we have two algebras, each commuting within itself, for abelian groups at least, but which do not commute with each other. So together they are a generating set for the algebra. But we are still in the unphysical space, and the reason is that you can show that the U operators are not gauge invariant; the L operators are OK, but the U operators are not. The gauge invariant version of these operators is given by the Wilson loops: what you have to do is to close a path acting with the U operators. So these are going to be the generators.
The L operators and the W operators are going to be the generators of the algebra, and the complete algebra is going to be just the tensor product over the algebras of each link. What is interesting here is that constraint equations naturally appear. These are a consequence of the gauge invariance you are demanding. For example, this constraint here on the L operators is a kind of Gauss law: it constrains the flux of the electric field. And this constraint here constrains the flux of the magnetic field. For continuous groups, you can parametrize the U's and the L operators using this vector field A, and rewrite the constraints in a more standard way, just to see that this constraint here is related to the Gauss law; it's the more general way to write the Gauss law. And this one tells you the magnetic flux has to be conserved. This is just what we know. But let's see what happens if we proceed naively: we have a two-dimensional lattice and we say, OK, we are interested in a square region. Which is the natural way to assign a local algebra, to assign operators, to this region? The first thing you can think of is: I'm going to choose all the possible links and all the possible Wilson loops within the square, as we usually do with scalars, where you draw a region and then every site inside your region has attached a phi and a pi operator. So you do the same for gauge fields. But in this case, what happens due to the constraints is that once you have these three links within your region, you automatically have also this other link here. This is due to the constraint, because the link operators meeting at the same site are not independent. So even if in the beginning you didn't consider this link here, it is already within your algebra. The same happens with these two.
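The constraint structure can already be seen in the smallest abelian example, Z2, with one qubit per link. In this toy sketch of my own (not the general construction on the slides), X on a link plays the role of the L operator, Z the role of U, and the Gauss-law operator at a site is a product of X's on the links meeting there:

```python
import numpy as np

# One plaquette with 4 links, a qubit per link (Z2 gauge toy model)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def on_links(ops):
    """Tensor product of one single-qubit operator per link."""
    m = ops[0]
    for o in ops[1:]:
        m = np.kron(m, o)
    return m

W = on_links([Z, Z, Z, Z])      # Wilson loop: product of U-type operators around the plaquette
G = on_links([X, X, I2, I2])    # Gauss-law (electric) operator at the corner shared by links 0 and 1
                                # (external links dropped in this 4-link toy)
U0 = on_links([Z, I2, I2, I2])  # a single open U operator on link 0

# The closed loop is gauge invariant: it commutes with the Gauss-law operator...
commutes = np.allclose(G @ W, W @ G)
# ...while a single link operator is not: it anticommutes with G.
anticommutes = np.allclose(G @ U0, -U0 @ G)
```

The two Z's shared between G and W each flip a sign, so the signs cancel for the closed loop; a single Z flips one sign and fails to be gauge invariant, which is why the path must be closed.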
OK, the product of these two links is related to the product of these two links outside. So this tells you that in the case of gauge invariant local algebras, you have to take the constraints into account in order to do the calculations, because there are some links, or also Wilson loops, that you are not considering but that are inside your region due to the constraints. To define this idea better, we have to introduce new concepts; this can be said in a more elegant way. The idea is that these local algebras can have a center. This is a set of operators that lives not only in your algebra, but also in the commutant of the algebra; the commutant of the algebra, of course, is the set of operators that commutes with all the operators in the algebra. Let's start with the scalars we are familiar with. If you consider scalars on a lattice, you are going to attach a phi and a conjugate momentum pi operator to each site of the lattice, and you are going to say that your local algebra is generated by all the phi and pi operators within your region. This is the natural way to define the local algebra associated to your region for a scalar field; it is one way to choose a local algebra associated to a region. But there's also a mathematical way to define the local algebra associated to a region, and this is using the double commutant that I defined here. The idea is that, given a set of operators and its commutant, the local algebra associated to this set of operators is given by the double commutant: you calculate the commutant, and then again the commutant of the commutant. Of course, these two definitions coincide; they give you exactly the same result for the scalar. So this prescription for choosing the local algebra is exactly the same as saying that the local algebra is given by the double commutant.
In this case, what you are assuming is that the local algebra associated to the complementary region is given by the commutant. And here the only operator shared by the algebra and the commutant of the algebra is the identity operator. So you can interpret this local algebra as a factor in a tensor product, and then it's natural to define your global Hilbert space as a tensor product. Is it clear? In this case, as the only thing shared by the two algebras is the identity, you can introduce a bipartition of the global Hilbert space in this way. But this is not the most general case. In general, what happens is that in the intersection there's not only the identity operator, but a whole set of operators. This is called the center of the algebra. In this case, let me skip this part, but even if it's not possible to define a bipartition of the Hilbert space as a tensor product, you can still calculate a von Neumann entropy associated to a density matrix. This is the way you have to follow: you first calculate the density matrix for this local algebra, and then you calculate the entropy associated to this local density matrix. What you get is this expression here: the entropy has two pieces. H here is the Shannon entropy of a classical probability distribution, the probability distribution for the elements of the center. And you also have the standard quantum contribution. This is something I mentioned in the beginning: people trying to do lattice calculations with gauge fields introduced these extended lattices, as I told you, and they found exactly these two contributions to the entropy. So we can interpret, yes, let me finish, we can interpret that what happens in their calculation is that they have a center in the way they assign local algebras to regions. Yes, please.
The right way to do the calculation is to first calculate a local density matrix associated to a local algebra; and this local algebra has to be defined as the double commutant of the set of operators. This is the right way to do things. Yes, the point is that we didn't notice before that there was something special in this assignment of local algebras to regions, because in the case of scalars and fermions both prescriptions are the same. I mean, if you use as a prescription that you take all the operators attached to sites inside the region, this prescription is exactly equivalent to taking the double commutant. But for gauge fields it's not the same: as I showed you, if you take a square and you take everything that is inside, what you get includes something which is outside. And this is due to the constraints; the link operators meeting at the same site are not independent. Yes, you always have a Hilbert space, even if you are talking about algebras. You don't need to connect both languages, because they are already connected. The only subtlety is that in the case where you have a trivial center, you can express the global Hilbert space as a bipartition using a tensor product; otherwise you cannot. But still, even when you have a center, it's possible to do partial traces. You are right that it's not the standard situation: you say, OK, now I don't have a tensor product structure in the Hilbert space, so what is the definition of the partial trace in this language? Well, it's possible to restrict your global density matrix to the local algebra A, let's say. We can spend some time with this. The idea is that you take a basis that diagonalizes all the elements in the center.
In this basis, the density matrix takes a block-diagonal form. And once you have the algebra generated by A and its commutant, which is what I wrote here, in block-diagonal form, you can write the density matrix associated to it, and then you take a partial trace on each block. And you know the answer is correct, because this reduced density matrix gives you the correct expectation values of the operators within the algebra. OK, so once we are here, we have to rethink some properties of the entanglement entropy, because now we are not talking about entanglement entropy associated to regions; now we are talking about entanglement entropy associated to algebras. So we have to rewrite the properties in terms of algebras. One of the properties that I'm going to use is this one here. Remember, we were saying that the entropy of a region, if the global state is pure, is the same as the entropy of the complementary region. The way to say it in terms of algebras is that the entropy associated to an algebra and the entropy associated to the commutant of the algebra are the same. Also, something important: just pay attention to this property here of the mutual information. The mutual information satisfies this inequality, which means that it is increasing with inclusion of algebras: if you add operators to your algebra, the mutual information gets bigger. I still have 50 minutes, OK? So let's see how ambiguities appear for the entanglement entropy due to this freedom we have in the assignment of local algebras to regions. Suppose again that you are interested in a square region in a two-dimensional lattice. As we said, the first try is to take everything into account: I will put all the links and all the Wilson loops.
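The resulting formula, S = H({p_c}) + sum_c p_c S(rho_c), can be checked on a toy block-diagonal state (the numbers here are made up for illustration):

```python
import numpy as np

def entropy(p):
    """Shannon / von Neumann entropy of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# In the basis diagonalizing the center, rho is block diagonal:
# rho = sum_c p_c (sector projector) rho_c, with sector probabilities p_c.
p = np.array([0.25, 0.75])                           # classical probabilities
blocks = [np.diag([0.5, 0.5]), np.diag([0.9, 0.1])]  # normalized sector states rho_c

# Classical (Shannon) piece plus weighted quantum pieces
S_split = entropy(p) + sum(pc * entropy(np.linalg.eigvalsh(b))
                           for pc, b in zip(p, blocks))

# Same number as the von Neumann entropy of the full block-diagonal matrix
rho = np.zeros((4, 4))
rho[:2, :2] = p[0] * blocks[0]
rho[2:, 2:] = p[1] * blocks[1]
S_full = entropy(np.linalg.eigvalsh(rho))
```

This is exactly the split the extended-lattice calculations find: a classical boundary term from the center plus the usual quantum contribution.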
And what happens in this case is that, due to the constraints, you have a center, and the center corresponds to all these links here. The commutant of the algebra is this shaded region here, and you can check that these link operators that live in the center belong both to the algebra and to the commutant of the algebra. Now suppose you say: OK, I want to get rid of the center, so I'm going to start erasing link operators on the boundary. In the other extreme you arrive at this situation here, which we call the magnetic center. If you erase all the links on the boundary, you still have a center, but now the center is the Wilson loop that lives here on the boundary. The commutant of the algebra is again this shaded region, and this Wilson loop belongs both to the algebra and to the commutant of the algebra. So in both cases you have a center. In between, of course, you have an infinite number of ways to assign the local algebra to the square: you can erase one link, two, three; you can choose. And somewhere in the middle there's a way to choose the algebra such that the center is only the identity operator, so you have a trivial center. Let me show you this here. I'm not going to give you the proof, but the idea is that if you start playing with assignments of Wilson loops and links, you find that the way to get rid of the center is the following: the set of links on the boundary whose link operators do not belong to the algebra must form a maximal tree. This is an example. But again, the solution is not unique: there are many maximal trees you can draw on the boundary, so you have many choices of algebras without a center, or rather with a trivial center. Still, we have ambiguities. But we will see that all these possible maximal trees that give you no center have exactly the same entropy.
The problem comes when you choose an algebra with a center; these are the ambiguities. Let me skip this part; what I was saying here is that this fixing of maximal trees on the boundary is equivalent to a gauge fixing on the lattice, and you can prove that this prescription effectively cuts the degrees of freedom into inside and outside, so you are able to write the global Hilbert space as a tensor product. I want to get to the examples, so let's skip this and this here. For example, what I have skipped is that also in the case of scalars you can have a center, if you choose a different prescription for assigning the local algebra to the region. For example, you can choose all the phis and pis within the region, and on the boundary only the phi operators. Then you have a center, and you have a classical contribution to the entropy, but it's ill-defined in the continuum. I prefer to skip this part. So let's see what happens if we study a Maxwell field in two plus one dimensions. The idea is that we are going to put a Maxwell field on the lattice, but we are going to take a slightly different strategy: we are going to start with the physical electric and magnetic fields instead of the Wilson loop and link operators. The electric field is associated to the links, and the magnetic field is associated to each plaquette of the lattice. But in two plus one dimensions we have a duality between the Maxwell field and a massless scalar field, so you can write the electric and magnetic fields, and this is the discrete version of the duality, in terms of the scalar and its conjugate momentum. So you have a dual lattice. For example, this link here, associated to a vertical electric field, is going to be given by this link operator here in the dual lattice, which is defined here. This is phi hat, yes, phi hat 1.
Well, this is the definition of phi hat 1, and this is the definition of phi hat 2. No, OK: this is phi hat 2 and this is phi hat 1, because vertical operators in one lattice correspond to perpendicular operators in the dual lattice. And what is easy is this identification: once you have a magnetic operator in this lattice, you have a pi in the dual lattice. So this is how it looks, these different choices for a square region, in both lattices. These are magnetic fields, and you choose all the links with electric fields; due to the constraints, you also have these links here. This is what we call the electric center. In the middle we have the trivial center: as I told you, you have to erase a maximal tree on the boundary. This is in the Maxwell lattice, and this is what you get in the dual scalar lattice. And the same for the magnetic center: we have erased all the links on the boundary, and this corresponds to this configuration in the dual lattice. Now we know how to do calculations on the lattice using the method we discussed yesterday: what we have to do is calculate correlators, and we are going to calculate correlators in this dual lattice. But the setup of the problem is now slightly different from the setup I described yesterday: the commutation relations are different; here we have a more general situation, with a c-number appearing in the commutation relations. But you can deal with this. I'm not going to give you the details, but you can still express the entropy. This is the quantum part of the entropy; it's given in terms of this quantity theta, the analog of the C we defined yesterday.
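For reference, in the standard case with canonical commutators (the method from yesterday, before the extra c-number appears), the lattice computation looks like this. A sketch with a toy discretization of my own, a periodic massive scalar chain, where nu are the eigenvalues of the square root of the product of the two restricted correlators:

```python
import numpy as np

# Toy model: periodic scalar chain, H = (1/2) sum [pi^2 + (phi_{i+1}-phi_i)^2 + m^2 phi^2]
N, m = 100, 0.1                       # small mass regulates the zero mode
k = 2 * np.pi * np.arange(N) / N
w = np.sqrt(m**2 + 4 * np.sin(k / 2)**2)

# Ground-state correlators X_ij = <phi_i phi_j>, P_ij = <pi_i pi_j>
idx = np.arange(N)
diff = idx[:, None] - idx[None, :]
cos = np.cos(k[None, None, :] * diff[:, :, None])
X = (cos / w).sum(axis=2) / (2 * N)
P = (cos * w).sum(axis=2) / (2 * N)

# Restrict both correlators to the region; nu = eigenvalues of sqrt(X_V P_V) >= 1/2
r = slice(0, 30)
nu = np.sqrt(np.linalg.eigvals(X[r, r] @ P[r, r]).real)
nu = np.clip(nu, 0.5 + 1e-12, None)

# Entanglement entropy of the region for a Gaussian bosonic state
S = np.sum((nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5))
```

In the gauge-field case, theta plays the role of this square root, with the correlators modified to account for the c-number in the commutation relations.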
And it's again the square root of two correlators, but these are slightly different from the ones we defined yesterday, because you have to take into account this c-number that appears in the commutation relations. We also need an expression for the classical contribution; here it is. So this is the way you can write the entropy, and now you can do the calculation. What you get is this. For example, if you want to calculate the mutual information between two squares, this is what you get. And this is interesting, because for all possible assignments, the electric center, the trivial center, and the magnetic center, you get exactly the same result in the continuum limit, and you can check that the classical mutual information goes to zero. So the mutual information doesn't see whether or not you have a center in your algebra: there are no ambiguities in the mutual information. Now let me show you what happens if we compare the result for the gauge field with the result for a scalar. What you get is that the result for the gauge field is always bounded by the result for the scalar. This is consistent with the inequality I showed you before, saying that if you enlarge your algebra, the mutual information gets bigger. And this is what happens here, because the gauge model is a subalgebra of the scalar model: in the gauge model you only have the derivatives of the scalar. So let me spend the last five minutes discussing this example. We have also done the calculation in three plus one dimensions, and the idea was to calculate the logarithmic coefficient of the entanglement entropy. We know, or at least we expect, that the logarithmic coefficient has to be given by the Euler anomaly, and what we get is something different. Here there's no need to put the theory on the lattice.
This is a little bit different, because what you get at the end is that you can rewrite the result for the gauge field in terms of scalars, and you already know how to calculate for scalars: you know the logarithmic coefficients for scalars, so you don't need to do the calculation. You start by writing the Hamiltonian in terms of the physical fields, B and E; this is just the discrete version of the Hamiltonian. We are doing the calculation in a sphere: the region is a sphere, so you can integrate out all the angular part, and you end with a radial Hamiltonian, a Hamiltonian in one dimension. This is the answer for the Maxwell field. If you do the same for a scalar, the Hamiltonian you get is this one. If you compare both Hamiltonians, what you see is that they are exactly the same: you have two copies of the scalar. But what is missing is the zero mode, the l equals zero mode of the scalars: in the case of the gauge field the sum over l, instead of starting from zero, starts at one. So there's something missing in the case of the gauge field. But you know the logarithmic coefficient for the complete scalar, and you know the logarithmic coefficient for the zero mode. And what you get is this number here, which is exactly what Dowker found in a completely different calculation, mapping the problem in the sphere to AdS and calculating the thermodynamic entropy. What he gets is this number here, which is not the anomaly, which should be this number here. So what is happening is still an open question. Of course, we can fix this number: this calculation is done without a center, so we can fix it by choosing a center. And what gives you the proper result is choosing the electric center; then you have a classical contribution. But to me it's not clear why we have to choose the electric center and not, I don't know, the magnetic center.
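Schematically (in my own notation, not the slide's), the comparison after the angular integration reads:

```latex
H_{\text{Maxwell}} = \sum_{\ell \ge 1} \sum_{m=-\ell}^{\ell}
    \left( H^{(1)}_{\ell m} + H^{(2)}_{\ell m} \right),
\qquad
H_{\text{scalar}} = \sum_{\ell \ge 0} \sum_{m=-\ell}^{\ell} H_{\ell m},
\qquad\Longrightarrow\qquad
s^{\log}_{\text{Maxwell}} = 2\, s^{\log}_{\text{scalar}} - 2\, s^{\log}_{\ell = 0},
```

with the same one-dimensional radial Hamiltonian appearing in both towers, so the logarithmic coefficient for the Maxwell field is twice the scalar one minus twice the contribution of the missing zero mode.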
Probably there's a physical reason why the proper assignment of the local algebra to the region is given by this particular choice of the algebra, but so far it's not clear to me. OK, I'm not going to have time to discuss the cylinder, but something very similar happens there. You can again rewrite the Hamiltonian. Just one comment: you can ask yourself why I'm writing the Hamiltonian. I start with the Hamiltonian because I want to then calculate the correlators. Here I don't need to go on with the calculation, because at this stage I can compare scalars with Maxwell, and I know how to solve scalars. The same happens in the cylinder: you get two copies of a scalar, there's something missing, and the answer for the anomaly is different from the coefficient we get in the calculation. Again, you can fix the problem by adding a center. But OK, my conclusion is that so far it's not clear to me why one assignment is better than the other.