Okay, I would first of all like to thank the organizers for giving me the opportunity to speak on this wonderful occasion of Boris's birthday. The subject of the talk was actually motivated by Boris: at the time, I was trying to understand how one can demonstrate the many-body localization transition numerically, and I had a hunch that entanglement would be a good way to study this phase transition. On that point the jury is still out; there is some tantalizing evidence, which I'll talk about towards the end, but no conclusive smoking gun. But along the journey, several interesting systems came out for which entanglement helps to understand and quantify quantum phase transitions, and I'll share these systems with you.

Okay, so first of all, what is entanglement entropy? Put very simply: you divide your system into two regions, region A and region B, with the whole system in a pure state. I can then define a basis for region A and a basis for region B, and write the pure state as a combination of the two, |psi> = sum_ij M_ij |i>_A |j>_B. In some sense this is almost trivial. It is amusing to note that in the good old days, when you wanted to put a mad scientist on a slide, you would show someone in a white lab coat with flowing gray hair writing E = mc^2; apparently nowadays you need to write this equation instead. Anyway, you can actually do better than that: instead of a tensor with two indices, you can make do with a single index, using the Schmidt decomposition. For the particular state you are considering, you can always build bases of region A and region B such that a single sum over paired basis vectors, |psi> = sum_k sqrt(lambda_k) |k>_A |k>_B, is enough instead of the full tensor. How do you do it?
You do it just by writing down the reduced density matrix and diagonalizing it: I trace out either region A or region B and obtain a reduced density matrix, and its eigenvalues lambda_k give the entanglement entropy, S = -sum_k lambda_k ln lambda_k, just like an ordinary Shannon or von Neumann entropy. This has some desirable properties, the most obvious of which is that the entropy of region A equals the entropy of region B. This entanglement entropy has drawn much attention, and one of the most celebrated general results about it is the area law: the entanglement entropy is proportional to the area of the boundary separating region A from region B.

Okay, very nice, but why does it interest us as condensed matter physicists? I find two main motivations. First, one of the most accurate numerical methods for calculating ground-state properties of many-particle systems is actually based on this idea; when it was developed it wasn't presented in terms of entanglement entropy, but it rests on the same ideas. Second, as I'll try to convince you during this talk, it is a very sensitive tool for identifying and quantifying quantum phase transitions in different systems.

Okay, let's look at the first example, a one-dimensional system. The area law with d = 1 says the entanglement entropy should be constant. In fact, for critical systems there is an additional logarithmic correction, growing as the logarithm of the length of region A, and it carries a lot of information. If instead I have some finite correlation length, the growth is cut off: the entropy saturates on the scale of the correlation length. We will see later that the same behavior appears for the Anderson localization problem, with saturation on the scale of the localization length.
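As a minimal sketch of this procedure (the dimensions and variable names are illustrative, not from the talk), one can build a random pure state on two small regions, trace out each side, and check that both reduced density matrices give the same entropy:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 4, 6                           # illustrative Hilbert-space dimensions of A and B
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)              # normalized pure state, written as a tensor psi[i, j]

# Schmidt decomposition: an SVD of the coefficient tensor turns the double
# sum over (i, j) into a single sum over paired Schmidt vectors.
lam = np.linalg.svd(psi, compute_uv=False) ** 2   # eigenvalues of the reduced density matrix

rho_A = psi @ psi.conj().T              # trace out region B
rho_B = psi.T @ psi.conj()              # trace out region A

def vn_entropy(rho):
    """von Neumann entropy -sum_k lambda_k ln lambda_k of a density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

S_A, S_B = vn_entropy(rho_A), vn_entropy(rho_B)
```

The squared singular values of the coefficient tensor coincide with the eigenvalues of either reduced density matrix, which is exactly why a single Schmidt index suffices and why S_A = S_B.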
Okay, so let's be a bit more concrete. I take the most generic one-dimensional Hamiltonian: on-site energies, which I can also use to introduce disorder by giving them some width, nearest-neighbor hopping, and nearest-neighbor interaction. Very naturally, I divide the system into region A of length L_A and region B of the remaining length. Without disorder the system is simple: it is a Luttinger liquid, we know how to solve it, and a lot is known about it. Actually, the entanglement entropy of region A has two additional corrections which I didn't present on the previous slides. One is a finite-size correction; the other appears at finite filling, since the expression I gave before was for half filling. Close to half filling, the entanglement entropy changes very little.

From this information alone, one can already characterize one nice quantum phase transition, connected with the properties of a ladder, and the behavior is quite straightforward. Take a ladder with some hopping along the legs and some transverse hopping, and assume the two are equal. As a function of filling, the non-interacting system behaves very simply, because I have two one-dimensional bands shifted relative to one another by the transverse hopping. Filling them up with electrons, up to quarter filling only one band is occupied, and therefore there is only one mode; then I have two modes; and continuing to fill up, once the lower band is completely full I am back to one mode. Trivial.
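For the clean, non-interacting limit of this Hamiltonian, the entanglement entropy of region A can be computed exactly from the ground-state correlation matrix restricted to A (the standard free-fermion trick; the system size of 100 sites, hopping of 1, and half filling below are illustrative choices, not the parameters of the talk):

```python
import numpy as np

L = 100
# Non-interacting, disorder-free limit: nearest-neighbor hopping only
H = np.zeros((L, L))
for i in range(L - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0
eps, U = np.linalg.eigh(H)
Nf = L // 2                              # half filling
C = U[:, :Nf] @ U[:, :Nf].T              # ground-state correlations <c_i^dag c_j>

def ent_entropy(LA):
    """Entanglement entropy of the first LA sites, from the eigenvalues of
    the correlation matrix restricted to region A."""
    nu = np.linalg.eigvalsh(C[:LA, :LA])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))
```

Plotting `ent_entropy` against L_A on a logarithmic axis reproduces the slow logarithmic growth mentioned above for the critical, clean system.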
Interestingly, if I put in interactions, the simplest case being just the interaction across the rungs, I get a situation where, although I am filling up the second band and am supposed to see a second mode, that mode is frozen out: I see only one mode for part of the range, and only at higher fillings, close to half filling, do I see two modes going through the system. This was predicted by Medvederkin, and it is actually very easy to see by calculating the entanglement entropy. Let's look at the black curve: the non-interacting case, for a system with 200 sites. Between quarter filling and half filling it simply shows the entanglement-entropy behavior I described before. Red, green and blue are successively stronger transverse interactions; let's look at blue, the clearest. We start filling additional electrons into the second band, but the entanglement entropy does not change, simply because this is not an extended mode. Then, suddenly, at the point where we have two extended modes, we very clearly jump to the second mode and go back almost to the entanglement entropy of the clean system. In a sense this is the clearest way to see a quantum phase transition: you just need to count. And as Rafi, from the same institute Boris is in, likes to claim, the best experiments are the ones where you only need to count, nothing more.

Okay, let's move to another system that shows a quantum phase transition, starting with the behavior of the localized regime. One-dimensional disordered systems, we know, are always localized, and in the localized regime we expect some finite localization length to characterize the system. According to our previous arguments,
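The non-interacting mode counting described above can be sketched directly from the two shifted one-dimensional bands; equal leg and rung hoppings are assumed, as in the talk, and the momentum grid size is illustrative:

```python
import numpy as np

def num_modes(filling, t=1.0, t_perp=1.0, Nk=2001):
    """Number of partially filled (gapless) bands of a two-leg ladder at the
    given total filling fraction: two 1D cosine bands shifted by +-t_perp."""
    k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
    bands = np.array([-2 * t * np.cos(k) - t_perp,
                      -2 * t * np.cos(k) + t_perp])
    all_E = np.sort(bands.ravel())
    Ef = all_E[int(filling * all_E.size)]          # Fermi level at this filling
    # A band contributes a gapless mode only if the Fermi level cuts through it
    return sum(1 for b in bands if b.min() < Ef < b.max())
```

For t = t_perp this gives one mode below quarter filling, two modes between quarter and three-quarter filling, and one mode again once the lower band is full, exactly the counting described in the talk.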
we are supposed to see the entanglement entropy saturate on the scale of this localization length. With interactions, we know pretty accurately how the localization length behaves as a function of disorder; we also know that for repulsive interactions the Luttinger parameter g is smaller than one, and the localization length becomes smaller and smaller as we go to stronger and stronger interactions.

Okay, can we see this behavior? First of all, one can see the finite-size corrections. For clean systems, and quite big ones, up to 1,500 sites here, we see logarithmic behavior almost to the middle of the system, and then the finite-size corrections appear; everything is as it should be. For short systems all the curves fall on top of one another, and then the finite-size behavior sets in for longer and longer systems.

What happens if we put in disorder? Here we look at the simplest case, disorder and no interaction, where the localization length, which we know how to calculate, is around 60. Indeed, for different system lengths we see that all the curves saturate at around this value, and the system length plays hardly any role, as you would expect when the localization length is much smaller than the system. You see the saturation in the entanglement entropy quite clearly, and you can even read off the localization length from the saturation point, in good agreement.

What happens when you take interactions into account? You see exactly the behavior you expect. Here the non-interacting case saturates at around 200; increasing the interaction, red, green, blue, the saturation occurs earlier and earlier, and if you want you can even fit it to the form given here.
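For reference, the non-interacting localization length that the saturation value is compared against can be estimated with a standard transfer-matrix (Lyapunov exponent) calculation for the 1D Anderson model; hopping of 1 and box disorder of width W are assumed here, and the chain length is illustrative:

```python
import math
import numpy as np

def loc_length(W, E=0.0, nsteps=100_000, seed=0):
    """Estimate the localization length of the 1D Anderson model from the
    growth rate of the recursion psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}."""
    rng = np.random.default_rng(seed)
    psi_prev, psi = 1.0, 0.1            # arbitrary generic initial vector
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=nsteps):
        psi_prev, psi = psi, (E - eps) * psi - psi_prev
        norm = math.hypot(psi_prev, psi)
        psi_prev /= norm                # renormalize to avoid overflow
        psi /= norm
        log_growth += math.log(norm)
    return nsteps / log_growth          # xi = 1 / Lyapunov exponent
```

For weak disorder near the band center this gives roughly xi ~ 100 / W^2; the essential point is that stronger disorder gives a shorter localization length, which is the quantity the entanglement-entropy saturation is read off against.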
So the entanglement entropy is a good way to follow the localization length, meaning we can pull information about the system out of it. And if you think about it, calculating the localization length for a fully interacting many-body system is not trivial; in my experience this is the easiest way to do it.

Okay, but we are looking for somewhere with a phase transition. Yes? [Question from the audience: in the numerics, what is the origin of the error bars?] We are sampling here different realizations of the disorder, and it is very important to look at their widths; we'll see in a couple of slides that this is itself a way to characterize a quantum phase transition, maybe even better than looking at the average or the typical value.

Okay, so where can we get a phase transition? If we take into account not only repulsive but also attractive interactions, then, just playing with the formula given previously, we see that at g = 3/2 the localization length diverges, and that is the point where we start to see superconducting correlations. So here we have an insulator, but if we go back and try the previous trick, reading off the localization length from the saturation of the entanglement entropy, we have a problem from the point of view of numerics: as you can see, the localization length in the vicinity of the transition grows very quickly, and we would have to go to extremely large systems to establish that we are in an insulator and not in a metal or a superconductor. So this is actually not a very good way to characterize what we are seeing in this region. Fortunately, and this is the answer to your question, the average is not the whole story: here is the average entanglement entropy in the two regimes, and for the sizes shown both curves show no saturation and are almost identical.
On the other hand, if I plot the distribution of the entanglement entropy, meaning I cut the system at size L_A for different disorder realizations and read off the entanglement entropy in each case, then in the region where the localization length is much bigger than the system size, but we are nevertheless not in the superconducting regime, I see a more or less Gaussian, well-characterized distribution of the entanglement entropy. In the superconducting regime, on the other hand, the distribution suddenly becomes very different: very skewed, with a very long tail, looking nothing like a Gaussian. Actually, one can do better: one can scale all these distributions, and one obtains what is known in statistics as a Lévy alpha-stable distribution, which from our point of view has a power-law tail to the left.

When I was looking into this, because there is a connection between entanglement-entropy fluctuations and the number of particles in each region, it seemed logical to me that one would also see a long tail in the distribution of the particle number in each part when cutting the system: although the whole system has a fixed number of particles, when you cut it somewhere you will of course have fluctuations in the particle number. I told Boris my hunch two months ago; he told me, no, you won't see it, and indeed you don't. The particle-number distribution is Gaussian both for the metallic regime, the black curve, and for the superconducting regime. So there is more physics here than simple fluctuations in the particle number.

So what else can we pull out of the entanglement entropy? Remember that my original motivation was trying to understand many-body localization. Until now I have talked about ground-state properties, and ground-state properties will not show us many-body localization.
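The qualitative contrast described here, a Gaussian versus a left-skewed stable law, can be illustrated with synthetic samples; the stability parameters alpha = 1.5, beta = -1 are purely illustrative and not fitted to any data from the talk:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
gauss = rng.normal(size=20_000)
# alpha = 1.5, beta = -1: a stable law with a power-law tail on the left only
stable = levy_stable.rvs(1.5, -1.0, size=20_000, random_state=rng)

def tail_asymmetry(samples):
    """Ratio of left-tail to right-tail extent, measured by 1% / 99% quantiles:
    ~1 for a symmetric (Gaussian) distribution, large for a left-skewed one."""
    q01, med, q99 = np.quantile(samples, [0.01, 0.5, 0.99])
    return float((med - q01) / (q99 - med))
```

A simple quantile ratio like this is a robust way to distinguish the two regimes, since sample moments of a stable law with alpha < 2 do not converge.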
On the other hand, DMRG and related numerical methods have a very hard time moving to higher excitation energies. So what can one do? Actually, there is some hope in the fact that the entanglement entropy is a summation over the eigenvalues of the reduced density matrix. As noted by Li and Haldane a few years ago, the eigenvalues of the reduced density matrix hold some information about the excited states, and the logic behind this is quite straightforward. What are we doing in the reduced density matrix? We cut the system into a region A and trace out region B. The low-lying excitations are not very strongly coupled to region B, just by density-of-states arguments, and therefore one can hope for some correspondence between the eigenvalues of the reduced density matrix and the eigenvalues of a finite section of the many-particle system.

Okay, sounds intriguing, but can we show it? First of all, it is very important to check whether the spectrum of the reduced density matrix shows the behavior we expect from the excitations of a disordered many-particle system. For single-particle systems we know the answer very well; Boris pointed it out in a seminal paper. Basically, in the localized regime single electrons show a Poisson distribution of level spacings, with no level repulsion, while in the metallic regime we have GOE or GUE statistics, depending on the symmetry of the system. So we have a transition from Poisson to Wigner. For many-particle excitations the game is a bit more complicated. Let's think of the simplest case, a non-interacting many-body system: we have our single-electron levels and we are just filling up states. The ground state is very simple: fill up all the states to the Fermi energy.
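The Poisson-to-Wigner distinction is conveniently quantified by the adjacent-gap ratio, which requires no unfolding of the spectrum; a sketch with illustrative sizes:

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean adjacent-gap ratio <r>: roughly 0.39 for uncorrelated (Poisson)
    levels and roughly 0.53 for GOE level repulsion."""
    s = np.diff(np.sort(levels))
    return float(np.mean(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])))

rng = np.random.default_rng(1)
# Poisson spectrum: uncorrelated levels, no repulsion (localized regime)
r_poisson = mean_gap_ratio(np.cumsum(rng.exponential(size=20_000)))
# GOE spectrum: eigenvalues of a random real symmetric matrix (metallic regime)
m = rng.normal(size=(1000, 1000))
r_goe = mean_gap_ratio(np.linalg.eigvalsh(m + m.T))
```

The same diagnostic applies unchanged to the eigenvalues of a reduced density matrix, which is what makes it convenient for the comparisons in the talk.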
The first excited state of the many-particle system is again very simple: you just move the top electron one level up, and you get a spacing distribution equal to the single-electron distribution. On the other hand, if we look at some high excitation, the occupations of the single-electron orbitals will be completely different between two neighboring many-body states. Therefore there should be essentially no overlap between them, hence no level repulsion, and we will see Poisson statistics; another consequence, of pure combinatorics, is a huge increase in the density of states.

Okay, so let's look at the excitations of a finite segment of a many-particle system. If I just look at the average level spacing, I see a very peculiar behavior: in general it goes down, which is good, but there is structure here. The first, second, fourth, seventh, and so on, states have a much larger level spacing than their neighbors. Moreover, they also show different statistics: while the levels with the small spacings show the expected Poisson behavior, the ones with the large spacings show something much closer to the single-electron spacing distribution. The good news is that the entanglement spectrum shows exactly the same behavior. So the positive outcome of this comparison is that, more or less, the low-lying spectrum of the reduced density matrix shows exactly the same behavior and distributions as the excited states. This is lost for higher excitations, as expected: you see remnants of the peaks, but they are no longer sharp.
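This non-interacting construction, many-body levels as sums of occupied single-particle levels, can be enumerated directly for a small system (16 orbitals and 8 fermions are illustrative choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, N = 16, 8                                  # illustrative: 16 orbitals, 8 fermions
eps = np.sort(rng.uniform(0.0, 1.0, M))       # single-particle levels

# Every many-body level is the sum of the occupied single-particle levels.
E = np.sort([eps[list(occ)].sum() for occ in combinations(range(M), N)])
gaps = np.diff(E)
```

The first many-body gap reproduces the single-particle gap across the Fermi level (promote the top electron one level up), while the gaps deep in the spectrum are orders of magnitude smaller, showing the combinatorial growth of the density of states described above.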
The question that remains is why we have this structure, and the answer is that we were dealing with a ballistic one-dimensional system. In this case, when you look at the excited states of the many-body system, you are basically going back to an old problem in mathematics, known there as the partition function; it is not the physical partition function, but simply the question: given an integer, in how many different ways can you build it up out of other integers? The peaks here are the points where I have filled up a shell: I have no other way to build up the same number, so I have to jump to a higher energy and start again, and of course, as we go up in energy, this occurs less and less frequently. One can actually calculate these numbers exactly and see that they fit, which is a nice curiosity.

Okay, so we saw that we have shell structures that are more single-particle in nature. The first question we can ask ourselves is how the many-body system behaves as a function of disorder, and indeed we go back to a more and more Poisson-like behavior for the many-particle system as the disorder increases. More interestingly, if we put in interactions and change their strength, the peaks we saw before, due to the shell structure, are wiped out: with stronger interactions, the peaks very close to the low-lying excitations remain very robust, but the higher ones are washed out. This is perhaps some manifestation of delocalization in Fock space and of many-body localization. And if you look at the distributions, they go from Poisson to more and more GOE-like, again as expected from a naive picture of many-body delocalization.
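For an equally spaced (ballistic) spectrum and a deep Fermi sea, the number of many-body states at excitation energy n, in units of the level spacing, is the number-theoretic partition function p(n); a minimal dynamic-programming sketch of exactly these numbers:

```python
def partition_numbers(n):
    """p(0..n): the number of ways to write each integer k <= n as an
    unordered sum of positive integers (p(0) = 1 by convention)."""
    p = [0] * (n + 1)
    p[0] = 1
    for part in range(1, n + 1):        # allow parts of size `part`...
        for k in range(part, n + 1):    # ...in every target from `part` up to n
            p[k] += p[k - part]
    return p
```

So p(1) = 1, p(2) = 2, p(3) = 3, p(4) = 5, and so on: the growing degeneracies, and the points where a shell closes, follow from this counting alone.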
It is interesting that the same type of behavior, with more or less the same shell structure, also survives in the superconducting regime, and it is very nice that the reduced density matrix reflects the symmetry of the system as well: its eigenvalue statistics cross over from GOE to GUE there, which I find very amusing.

Okay, what can I say about many-body localization itself? I'll skip this slide. So what do we expect? We know the argument that, in finite systems, the low-lying levels do not have the density of states to decay into, so their widths remain small, while once we go to higher excitations the widths become large. We expect, as a function of temperature or excitation energy, a transition in interacting systems from insulating to metallic behavior. What can we do with our numerics? Conductance is almost impossible to calculate for these systems, and full level statistics for finite interacting systems is prohibitively hard. People have therefore gone to models at infinite temperature, and since then there are many more, where one looks at a highly excited state. But can you see something at the transition point, where you expect that playing with the interaction strength will take you between localized and delocalized behavior: with weak interactions you see a localized regime, and turning on the interaction you start to see delocalized behavior? Indeed, with the entanglement entropy we cannot go to very high excitations, and also with the reduced density matrix we cannot go very high, but we can put in strong interactions and hope to see something even in the low-lying spectrum.

Okay, so what do you see? On one hand, something very encouraging. You start with the black curve, the non-interacting case, and you see Poisson statistics, in line with what you expect from a localized regime.
Turning on the interaction, you see a transition towards GOE, so it seems fine: we have what we are looking for. This is the traditional way to see, in single-particle systems, the transition between localized and extended states. The point is that for single-particle states you usually want to see this as a finite-size behavior, and what is very bothering here is that as I change the system size, 300, 700, 1,100, I see almost no change. That waves a flag that something unusual is going on: you do not see a straightforward localization transition here. We can see that part of the problem is that these shell-type states get mixed into the problem and cause some skewness. But my guess, and this is the point I am working on at the moment and trying to understand better, is that it has something to do with the non-ergodicity that Boris predicts there, and that somehow the admixture of these states skews the regular finite-size behavior I expected here.

So the summary of the whole business: entanglement depends on correlations, and that is how it shows us quantum phase transitions. It is a very good way to see the different types of transitions I have described in this talk. We can use not only the average entanglement entropy but also its distribution, and the eigenvalues of the reduced density matrix, to reveal these quantum phase transitions, and perhaps it opens a window to look into many-body localization. And finally, the traditional Jewish blessing is "until 120", so you have another factor of two.