Okay. So now we start with the second lecture of today: Professor Huerta, from Bariloche, will begin her lectures on Entanglement in QFT.

Thank you very much. Hello, everybody. Thanks to the organizers for the invitation; it is a great pleasure to be here in Trieste. I should say, to be back: in fact, I was a postdoc at the ICTP many years ago, almost 15 years ago. So it is a double pleasure, because it was at that time that I started working with Horacio Casini on the field I am going to talk about today: entanglement entropy in quantum field theory. I understand we are going to have four lectures, so we have time to start from the very beginning and to finish with some recent applications. Let me write a tentative outline of the talks. The idea is to start with basic definitions and properties, and with some calculations on the lattice, where things are well defined. Then we are going to go to the continuum limit and to entanglement entropy in quantum field theories. I am going to discuss one general method to calculate entanglement entropy, the replica method, and work out an example: the free fermion in 1+1 dimensions. Then, in the last two lectures (I am not sure about the order), I would like to discuss what happens with gauge fields, and also the relation between entanglement entropy and renormalization group flow in quantum field theory, the c-theorems, up to four dimensions. So this is the idea.
As I promised to start from the very beginning, let me start by revisiting the idea of entropy. This is a concept that has been rediscovered many times along the history of physics, and each time it has gained a wider, or a different, interpretation. In the first place, we find entropy in the context of thermodynamics, where we understand it as a measure of disorder: entropy enters the game in the balance between energy and mechanical work. In fact, it is interesting that the name "entropy" was coined on purpose to be very similar to the word "energy", and it carries inside it the word "transformation": "entropy" comes from the Greek for transformation. So we understand entropy as a measure of disorder, but then we can extend this idea to quantum mechanical systems and give entropy a new meaning, as a measure of uncertainty. And from quantum information we have the interpretation as a measure of information: the information lost in sending a message, for example. So far we have three different interpretations of entropy, and we are still not done, because we have to add the word "entanglement". The story does not finish there. But let me illustrate this with a conversation between Shannon and von Neumann that I like. Shannon was looking for a name for the quantity we now know as the Shannon entropy, and he asked von Neumann for advice. He recalled the conversation as follows: "I thought of calling it information, but the word was overly used, so I decided to call it uncertainty." And von Neumann answered: "You should call it entropy, for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is."
"So in a debate, you will always have the advantage." It is a good way to start. As I was saying, we still have to add the word "entanglement", and for that we have to add new ingredients to the game. What we have to think of now is a system that is divided into two parts: for some reason, you have an observer that has access to only one of the subsystems. This idea can be realized by doing a partial trace on your density matrix. You describe your state with a density matrix; the partial trace does the work of restricting to the degrees of freedom that are accessible. Then you calculate the entropy associated with this local, or reduced, density matrix. This is known as the entanglement entropy. In general, you will say that this quantity measures the entanglement between the two subsystems. But now we need to do this in quantum field theory, so we need to go to the continuum. The subsystem is now a region: you have a spatial region B, which induces, in a way, a partition of the Hilbert space. Your global state is described by a density matrix, you trace over the degrees of freedom you cannot observe, and finally you calculate the entropy associated with the reduced density matrix. This is, roughly speaking, what we are going to do for free fields and for conformal field theories, for certain regions. In general this quantity is very difficult to calculate, so we only know how to compute it for certain geometries: for example, we are going to consider geometries with a lot of symmetry, or theories with a lot of symmetry; we need the symmetries in order to do the calculation. So let me start. As you see, the entanglement entropy has a strong relation with the density matrix, so let us start with some definitions and properties of the density matrix.
In general, in quantum mechanics you describe your state using a density matrix; this is the most general way to describe a state. The expression is ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|, with probabilities p_i: it describes a statistical mixture of vectors. Let us see some properties. First, ρ is a Hermitian operator, so it admits a spectral representation: we can write almost the same expression, but now with orthonormal vectors and positive numbers λ_i, which of course have to sum to one. Second, ρ is positive; this has to hold in order to have sensible probabilities. Third, there is a way to write expectation values of operators in terms of the density matrix: ⟨O⟩ = Tr(ρ O). From here, for example, taking the operator to be the identity, you immediately see that the trace of ρ has to be one. And the last one: let us define what a pure state is. If the probabilities are all zero except for one, the density matrix represents a pure state, and in this case you can prove that ρ² = ρ, which holds only for pure states. In general, what you have is that Tr ρ² is equal to one exactly when the state is pure. So far we are at this point. Now we would like to introduce what happens when we trace partially and define a reduced density matrix. Is it positive? Yes, you are right: in general you would say it is positive semi-definite, but we are going to use cases where it is positive definite. The idea of the partial trace was introduced by Dirac in 1930. Suppose you have two subsystems, A and B, with corresponding Hilbert spaces, and a global state in H_A ⊗ H_B.
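These properties can be checked numerically on a small example. A minimal sketch in Python with NumPy; the dimension, the probabilities, and the random vectors are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A statistical mixture rho = sum_i p_i |v_i><v_i| of random unit vectors
p = np.array([0.5, 0.3, 0.2])                    # probabilities, sum to one
vs = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
rho = sum(pi * np.outer(v, v.conj()) for pi, v in zip(p, vs))

assert np.allclose(rho, rho.conj().T)            # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)       # unit trace
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)  # positive semi-definite

# Purity test: tr(rho^2) = 1 exactly when the state is pure
assert np.trace(rho @ rho).real < 1.0            # mixed state
rho_pure = np.outer(vs[0], vs[0].conj())         # a single projector
assert np.isclose(np.trace(rho_pure @ rho_pure).real, 1.0)
```

The same checks apply verbatim to any density matrix, including the reduced ones introduced below.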
So we can introduce a basis and write the state as a sum over k, l, |ψ⟩ = Σ_{kl} λ_{kl} |k⟩_A |l⟩_B, where we require Σ_{kl} |λ_{kl}|² = 1. For this state, the density matrix is just the projector ρ = |ψ⟩⟨ψ|, and we can write the sum explicitly. Then we define the trace over B of this density matrix, the reduced density matrix ρ_A = Tr_B ρ. What is interesting is that this reduced density matrix gives you the right expectation values for operators that act on the corresponding subsystem. From here we can go one step further and calculate the entropy associated with this density matrix. This is the usual von Neumann entropy, S(ρ) = −Tr(ρ log ρ), introduced in 1927. We can use the same formula with the reduced density matrix or with the global one. It is just the quantum generalization of the statistical entropy that measures the number of microstates compatible with a given macrostate. Another interesting thing you can prove is that the entropy associated with a pure state vanishes, so it is another way to test whether a state is pure or not. Also, the von Neumann entropy is maximal for the maximally mixed state, where it equals log n, with n the dimension of the Hilbert space. And here I would like to stress a property, because I am going to use it for applications later: the entropy is additive for independent systems. What I mean is that if you take the entropy of a state which is just the tensor product of two density matrices, you get the sum of the entropies. But in general this is not true; what you have instead is something called strong subadditivity. Let me write it here, using numbered partitions 1, 2, 3: S(1,2,3) + S(2) ≤ S(1,2) + S(2,3).
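The partial trace, additivity for product states, and strong subadditivity can all be verified on small systems. A sketch, assuming a hypothetical `ptrace` helper; the subsystem dimensions and random states are arbitrary:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -tr(rho log rho), dropping zero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def ptrace(rho, dims, keep):
    """Partial trace: keep the subsystems listed in `keep`, trace out the rest."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for ax in sorted((i for i in range(n) if i not in keep), reverse=True):
        rho = np.trace(rho, axis1=ax, axis2=ax + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

rng = np.random.default_rng(0)

# A global pure state on H_A (dim 2) x H_B (dim 3)
psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# The reduced matrix reproduces expectation values of observables on A
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
O_A = M + M.conj().T
lhs = np.trace(rho @ np.kron(O_A, np.eye(3))).real
rhs = np.trace(ptrace(rho, (2, 3), [0]) @ O_A).real
assert np.isclose(lhs, rhs)

# Additivity for a tensor product of density matrices
rho1, rho2 = ptrace(rho, (2, 3), [0]), ptrace(rho, (2, 3), [1])
assert np.isclose(vn_entropy(np.kron(rho1, rho2)),
                  vn_entropy(rho1) + vn_entropy(rho2))

# Strong subadditivity on a generic mixed state of three qubits
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho3 = M @ M.conj().T
rho3 /= np.trace(rho3).real
dims = (2, 2, 2)
S123, S2 = vn_entropy(rho3), vn_entropy(ptrace(rho3, dims, [1]))
S12 = vn_entropy(ptrace(rho3, dims, [0, 1]))
S23 = vn_entropy(ptrace(rho3, dims, [1, 2]))
assert S123 + S2 <= S12 + S23 + 1e-10
```

The last assertion is exactly the Lieb–Ruskai inequality quoted above, checked for one random state; it is of course a theorem, not something a single example proves.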
This is something that was proved by Lieb and Ruskai in 1973. And since we are going to talk about regions, the form you will most likely find in the literature is in terms of regions: the entropy of the union plus the entropy of the intersection is bounded by the sum of the entropies, S(A ∪ B) + S(A ∩ B) ≤ S(A) + S(B). Now let us see what happens when we calculate the entanglement entropy. In principle, the properties I wrote here also hold for reduced density matrices, but let us see the special cases we are going to be interested in. We are going to say that the entanglement entropy is a good measure of entanglement only when the global state is pure, because only in that case do you have this partition of the Hilbert space as a tensor product. So what we are saying is: the von Neumann entropy associated to the reduced density matrix is what we are going to call the entanglement entropy associated to the region A. And if we have two subsystems A and B and the global state is pure, the two reduced density matrices have equal eigenvalues, and hence the same entanglement entropy. This is something you can prove just using the definitions I gave you. As an example, let us consider a system of two spins, two particles, and take, for example, this state, one of the EPR pairs, the Bell states; the first entry is particle A and the second is particle B. If you calculate the entropy of the global state, you will see that it gives zero, since it is a pure state, but you do not get zero if you trace partially over B or over A. You can prove this as an exercise.
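The exercise can be checked in a few lines. A sketch, taking the singlet Bell state (|01⟩ − |10⟩)/√2 for concreteness (any of the four Bell states gives the same entropies):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy, dropping zero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Bell state (|01> - |10>)/sqrt(2) of two qubits A and B
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)               # global density matrix

# The global state is pure: zero entropy
assert abs(vn_entropy(rho)) < 1e-12

# Trace over one factor: reshape to (a, b, a', b') and contract b or a
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
rho_B = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# Both reduced matrices are maximally mixed, with entropy log 2
assert np.isclose(vn_entropy(rho_A), np.log(2))
assert np.isclose(vn_entropy(rho_B), np.log(2))
```

Note that the two reduced matrices indeed share the same spectrum, as stated above for any pure global state.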
So the idea is that even if you start with a pure state, for example the vacuum, you get an entanglement entropy different from zero when you trace partially: over half of space, or over whatever region you choose. Now we are ready to give a definition of entanglement. We are going to say that two parts of the system are not entangled when the global state is just a mixture of product states. If the global state is pure, the only such possibility is the trivial case of a product state, and it will have vanishing entanglement entropy; in this case the entanglement entropy is a good measure of entanglement. So, we have all these definitions, and we would like to see whether this entropy has something to do with the thermodynamical entropy we know. Let me show you that, in fact, the entanglement entropy is a bit more general than the thermodynamical entropy, but belongs to the same family, in a way. Is it? Let us see; this is our question now. For that, let us take a system in thermal equilibrium, in the canonical ensemble. We know how to write the probability of finding the system in the state with energy E_n: p_n = e^{−βE_n}/Z, where Z is just the normalization, the partition function. Once you have the expression for the probabilities, you can write the density matrix as an operator: ρ = e^{−βH}/Z. Now we know how to calculate the entropy, and this gives S = β⟨E⟩ + log Z. Do you agree with this? On the other hand, from the thermodynamic side, the free energy is the energy minus temperature times entropy, F = ⟨E⟩ − TS, and the free energy can be written in terms of the log of the partition function, F = −T log Z. So you see, you get the same thing.
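The agreement between the von Neumann entropy of the thermal state and the thermodynamic relations can be verified for any finite Hamiltonian. A sketch with a random 4×4 Hermitian matrix standing in for H (the matrix and the inverse temperature are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                      # a random Hermitian "Hamiltonian"
beta = 1.3                             # inverse temperature, arbitrary

E = np.linalg.eigvalsh(H)              # energy levels
Z = np.sum(np.exp(-beta * E))          # partition function
p = np.exp(-beta * E) / Z              # Boltzmann probabilities p_n

S = -np.sum(p * np.log(p))             # entropy of rho = exp(-beta H)/Z
E_mean = np.sum(p * E)                 # <E>
F = -np.log(Z) / beta                  # free energy from the partition function

# F = <E> - T S, equivalently S = beta <E> + log Z
assert np.isclose(S, beta * (E_mean - F))
assert np.isclose(S, beta * E_mean + np.log(Z))
```

Since ρ is diagonal in the energy eigenbasis, working with the probabilities p_n alone already gives the full von Neumann entropy here.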
So what we can see here is that the entanglement entropy is also a thermodynamic-like entropy, but it is more general, because it comes from a density matrix that in general has the form ρ = e^{−K}, where K is not the Hamiltonian, but something called the modular Hamiltonian (some people call it the entanglement Hamiltonian). In the case where the modular Hamiltonian is given by the Hamiltonian of the system, you have the standard thermodynamic entropy; in general, this is not the case. And finding the reduced density matrix is equivalent to finding the modular Hamiltonian; depending on the problem, one is going to be easier than the other, although in general both are difficult. So, once we have all these definitions in mind, we can see what happens when we consider these quantities in quantum field theory. The bad news is that these quantities become divergent: the entanglement entropy is not well defined in a quantum field theory. But I have good news also: the structure of divergences admits an expansion in terms of the ultraviolet cutoff with a strong geometrical dependence, and some of the terms in this expansion are universal. What I mean by universal is that they are independent of the regularization scheme we use. So in these coefficients there is some information about the theory that is hidden; it is universal information in that sense. So let us see what happens if we put our theory on the lattice, where... yes? For the modular Hamiltonian, this is the definition: it is the operator that appears here in the exponent. The point is that you can always write the density matrix as the exponential of a Hermitian operator, and the operator that appears in the exponent is what is called the modular Hamiltonian. This is the definition. Depending on the problem, you will have to find out what this operator is.
From the very beginning you do not know what the modular Hamiltonian is; in fact, there are only a few cases where we know its complete expression. It can be a local operator or a non-local operator, so in general it can be a very complicated expression. But take this as the definition of the modular Hamiltonian: it is the operator that appears in the exponent, and you can always express your density matrix in this way. Here, for a thermal state, you know that the probabilities are given by the Boltzmann expression, and the density matrix is related to the probabilities; so, by definition, ρ = e^{−βH}/Z is the operatorial expression for the density matrix. Once you have the density matrix, you can calculate the associated von Neumann entropy, and what you get is exactly what the second law of thermodynamics tells you. So what I am trying to show you is that all these different interpretations of the entropy, as a measure of disorder, information, uncertainty, entanglement, all come from the same thing, in a way. Some definitions are more general than others, but deep down they are all the same. The point is that sometimes the operator in the exponent is given by the Hamiltonian of your system, and sometimes not. When it is different from the Hamiltonian of your system, and I am going to show you examples, it is not a thermal state, and the entropy is not the thermodynamic entropy. In fact, the entanglement entropy has a behavior different from the standard thermodynamic entropy: it satisfies an area law instead of a volume law, and there are many other properties we are going to discuss that make a difference. But in a way I find this a nice expression for the entanglement entropy, because it shows you that the differences are subtle: they are not two different things, but members of the same family, in a way.
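The statement that any density matrix can be written as ρ = e^{−K} can be made concrete for a full-rank, finite-dimensional ρ, where K = −log ρ. A sketch (the random state and the dimension are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic full-rank density matrix on a 4-dimensional Hilbert space
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Modular Hamiltonian K = -log rho, built in the eigenbasis of rho
lam, U = np.linalg.eigh(rho)
K = U @ np.diag(-np.log(lam)) @ U.conj().T

# exp(-K), reconstructed from K's own spectrum, reproduces rho
kappa, V = np.linalg.eigh(K)
rho_back = V @ np.diag(np.exp(-kappa)) @ V.conj().T
assert np.allclose(rho_back, rho)

# The entropy is the expectation value of the modular Hamiltonian, S = <K>
S = -np.sum(lam * np.log(lam))
assert np.isclose(np.trace(rho @ K).real, S)
```

The last line is the finite-dimensional version of "entropy equals the mean modular energy"; the hard part in field theory is that K is usually unknown and possibly non-local, as just discussed.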
Let me at least introduce what I was expecting to do today, though I believe I am going to finish next time. The idea is that on the lattice the divergences of the entanglement entropy disappear; we are going to discuss these divergences in detail, but on the lattice everything is well defined. Well, let me see how far we can get. As usual, we are interested in a region B, which induces a partition of the Hilbert space, and we suppose the global state is the vacuum. We are going to consider Gaussian systems, by which I mean that all the information of the theory is in the two-point functions: you have Wick's theorem. So, for example, we take the vacuum of a collection of degrees of freedom with canonical commutation relations, [φ_i, π_j] = iδ_ij, and we give names to the two-point functions: X for the two-point function of φ and P for that of π. What I mean by a Gaussian system is that the n-point functions can be written in terms of products of two-point functions; this is Wick's theorem. This is what we are going to discuss next time: I will give you a method to calculate the entanglement entropy in terms of these correlators. Then this idea of writing the density matrix as the exponential of a modular Hamiltonian is perhaps going to become clearer: we will see that the eigenvalues of the modular Hamiltonian are related to the X and P two-point functions, and we will get an expression for the entanglement entropy in terms of these two quantities. Again, when the modular Hamiltonian is given by the Hamiltonian of the system, what you get is the thermodynamic entropy; otherwise it is a different thing, so in general it is not going to be true. This idea comes from quantum information; but, if you want, the entanglement entropy measures, in a way, the correlations between the degrees of freedom that live inside and outside your region.
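As a preview of the correlator method to be developed next time, here is a minimal sketch for a discretized massive scalar chain in 1+1 dimensions. The chain size, mass, and region are arbitrary choices; the final formula, with S built from the eigenvalues ν of √(X_A P_A), is the standard Gaussian-state result that the lecture will derive:

```python
import numpy as np

def vacuum_entropy(region, N=60, m=0.1):
    """Entanglement entropy of `region` in the vacuum of a scalar chain."""
    # Coupling matrix of H = 1/2 sum pi^2 + 1/2 phi.K.phi (lattice spacing 1)
    K = (np.diag((m**2 + 2.0) * np.ones(N))
         - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
    w, U = np.linalg.eigh(K)
    X = U @ np.diag(0.5 / np.sqrt(w)) @ U.T    # <phi_i phi_j> in the vacuum
    P = U @ np.diag(0.5 * np.sqrt(w)) @ U.T    # <pi_i pi_j>  in the vacuum
    idx = np.ix_(region, region)
    # Symplectic eigenvalues nu >= 1/2 of the restricted correlators
    nu = np.sqrt(np.clip(np.linalg.eigvals(X[idx] @ P[idx]).real, 0.25, None))
    a, b = nu + 0.5, np.clip(nu - 0.5, 1e-15, None)
    # S = sum (nu+1/2)log(nu+1/2) - (nu-1/2)log(nu-1/2)
    return float(np.sum(a * np.log(a) - b * np.log(b)))

A = list(range(20, 40))
Ac = [i for i in range(60) if i not in A]
S_A, S_Ac = vacuum_entropy(A), vacuum_entropy(Ac)
assert S_A > 0.0
# Global state pure: a region and its complement carry the same entropy
assert abs(S_A - S_Ac) < 1e-6
```

The final assertion is the finite check of the pure-state property from the earlier part of the lecture, now for the lattice vacuum rather than for two spins.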
If you do not start with a pure state, the entanglement entropy is not a good measure of entanglement; and, separately, it may happen that you cannot write your Hilbert space as a tensor product from the very beginning. There are many ways to say the same thing. For example, we are going to discuss this for gauge fields: the fact that you choose a region is equivalent to choosing a local algebra associated to that region. This is perhaps the mathematically well-defined way to say that you choose a region: what you choose is the local algebra associated to it. In general, this local algebra can have a center; a center means that you have a set of operators that commute with everything. If there is a center, you cannot write your Hilbert space as a tensor product, and in that case you do not have a good definition of the entanglement entropy.

But my question is: why are you relating the decomposition of the Hilbert space to entanglement not being a good measure? I understand that entanglement is not a good measure for non-pure states, because there are mixtures of quantum and non-quantum effects. But I always thought that, for non-gauge theories, say a scalar field theory, the decomposition is not really related to which state you are looking at.

No, no, of course; these are two different things. But the point is that whether or not you have the decomposition of your Hilbert space as a tensor product has to do with the existence of a center, and this is something that can happen also for scalar fields, not only for gauge fields. The naive explanation we give, that the excitations are not point-like, so you have Wilson loops and then you cannot divide your space so easily, is not the whole story: also for scalar fields you can choose your local algebra in such a way that you have a center on the boundary.
So it is more general than that. But it is true that we are talking about two different things: one thing is that if you start with a pure state, the reduced density matrix gives you an entanglement entropy that is a good measure of entanglement; and you are right that these are two different subjects. Yes. Any questions? Thank you.