Well, first I would like to thank the organizers for inviting me here. I really appreciate it, because it is an honour for me to be part of this very select group of scientists. Of course, I wish Boris a very happy birthday. I will be more modest than most of the speakers: I would be extremely glad if we could all meet here again in ten years' time. I would really feel very happy. I want to talk about numerical simulations of many-body localization that I have been doing for the last two and a half years. They are a completely different approach from any other type of simulation. I have been collaborating with Andrés Somoza at my department. The talk is divided into two completely separate halves: I collaborated with him on the first half, and with Louk Rademaker, a postdoc at the KITP, on the second half. This is a very brief outline. First I introduce the model. The first half of the talk is a percolation approach to many-body localization, and the second half is a practical way to obtain local integrals of motion. The results there are still very preliminary, but I think it is probably an extremely good candidate for numerical simulations of these problems, as I will try to show. This is just the rough motivation; I am sure we are all aware of these things. Without interactions and with strong enough disorder, all the states are localized, and the big question is whether interactions can mix these localized states and form extended states, and whether they produce a transition. That is what I want to study. Basko et al. produced the seminal paper in 2006, and I try to follow that paper as closely as possible in the numerics. The problem with all the simulations up to now is that they mainly involve exact diagonalization, and then the sizes they can handle are extremely small.
Okay, to be specific, the model I am considering starts from a one-particle Hamiltonian with disordered on-site energies and a hopping element. I diagonalize this one-particle Hamiltonian to obtain a localized basis, and then I write the total Hamiltonian in terms of this basis. It has a diagonal part, and then the interaction, which is non-diagonal and is what causes the problem: we want to see whether, or to what extent, this interaction can delocalize the states of the system. I will deal only with fermions. My unit of energy is t = 1, and the interaction V I also set equal to 1. I consider a nearest-neighbour interaction and periodic boundary conditions, and the first task is to compute all the matrix elements between all the wave functions. So I did that. All the relevant information is contained in this ratio, the ratio between the matrix element and the energy difference it involves. I plot here, for different values of the disorder, the distribution of the absolute value of this ratio — this is a number of transitions per site — and you see that as the disorder is lowered, there are more and more transitions that really couple the states. In the second part of the talk I will consider all transitions exactly, taking this distribution into account precisely. For the first part of the talk, I keep only the transitions where the interaction is larger than the energy difference, and I assume that configurations resonate whenever this happens, with similar weights, so that I don't worry much about the weights. This is, of course, an approximation, but for this problem, given its huge effective dimensionality, I would expect the percolation approach to work quite well. That is the reason to do this study. Okay, so I studied this number more precisely.
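To make the counting concrete, here is a minimal sketch — not the actual code behind the talk — of how one could count these resonant transitions for a 1D Anderson chain with a nearest-neighbour interaction. The function names are illustrative, and the matrix element keeps only the direct term, ignoring exchange and antisymmetrization.

```python
import numpy as np

def localized_basis(L, W, t=1.0, seed=0):
    """Diagonalize the 1D one-particle Anderson Hamiltonian (PBC)."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, L)        # on-site disorder in [-W/2, W/2]
    H = np.diag(eps)
    for i in range(L):
        H[i, (i + 1) % L] = H[(i + 1) % L, i] = t
    E, psi = np.linalg.eigh(H)                 # psi[:, a] = localized state a
    return E, psi

def count_resonances(L=10, W=6.0, V=1.0, seed=0):
    """Count pair transitions (a,b) -> (c,d) where the interaction
    matrix element exceeds the energy difference: |M| > |dE|."""
    E, psi = localized_basis(L, W, seed=seed)
    shift = lambda v: np.roll(v, -1)           # amplitude on neighbour site i+1
    n_res = 0
    for a in range(L):
        for b in range(a + 1, L):
            for c in range(L):
                for d in range(c + 1, L):
                    if (a, b) == (c, d):
                        continue
                    dE = E[a] + E[b] - E[c] - E[d]
                    # direct part of <cd| V n_i n_{i+1} |ab> (exchange omitted)
                    M = V * np.sum(psi[:, a] * psi[:, c] *
                                   shift(psi[:, b]) * shift(psi[:, d]))
                    if abs(M) > abs(dE):
                        n_res += 1
    return n_res
```

This is only the resonance criterion from the first half of the talk; the second half keeps all transitions and their precise weights.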
What is the number of resonances as a function of size and localization length, or disorder? I put the localization length here, and each curve is a different size. I plot first L = 20, which is the largest system that can be handled by exact diagonalization. The region of interest, where the transition is expected, is around here — roughly where this number is equal to one — and you see that 20 deviates from the rest of the sizes. For the rest, the number of transitions is already proportional to L. I find that it varies with the localization length with a power of three and a half. I don't know where that comes from — this is just numerics; I haven't seen it in the theoretical papers. I also plot here the fluctuations. The fluctuations are huge in this problem. They decrease as L to the one half, but even so, for all the sizes I can handle, up to 100, the fluctuations are huge and create most of the problems in my simulation. So, what I do is choose a configuration at random — I could deal with finite temperature, but in this talk I will concentrate on infinite temperature, so I just choose a configuration of occupied one-particle states at random. Then, if an interaction term produces a resonance, I collect all the other configurations that resonate with this one. Here is an example: the interaction takes this particle here and this particle here. Since we want a very small energy difference and we are at high temperature, one electron goes down in energy and the other goes up, so the energy is basically conserved by the two-electron transition: the energies of the two configurations are very similar, with a difference smaller than the interaction term.
So, I start from an initial configuration, and I calculate and store all configurations that resonate with it. Then I repeat this: in each iteration I store all configurations that resonate with the previous ones and have not been included before, so the cluster grows layer by layer. A technical but very important point is that I only have to store three active layers, because it is a percolation problem. In these calculations I can store of the order of 10^7 configurations in each layer, so the total number of configurations I can deal with is more than a thousand million; the limiting factor is the layer size. What I calculate, and will present here, is first the size of the cluster, to see how it depends on disorder; and at the end I also place a virtual surface and study how many particles cross it — I will present those results at the end. First, let us concentrate on the size of the cluster, starting with the very insulating regime. In all the figures, since there are often clusters that are too big, the way to control that is to study the accumulated probability: what I represent here is the probability that a cluster has a size larger than a given value. All curves end up at one, because the total probability is one, and over here are the very large clusters. The way to scale this data for different sizes is by plotting the log of the size of the cluster normalized by the total phase-space volume for that size, which grows like 2^L — so basically the log of the number of configurations divided by L, the system size. That is the way to compare different sizes. So let's start: if there is a transition, as I said, it should be around disorder 6, and this is a very insulating case. These are sizes 20, 30, 40, 50, 60 and 70.
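The layer-by-layer growth described above can be written as a breadth-first search that keeps only three layers in memory. This works because a neighbour of a configuration at distance k from the start is itself at distance k−1, k, or k+1, so checking the previous and current layers is enough to avoid counting anything twice. A schematic sketch, with `neighbors` as a placeholder for the routine that would enumerate resonating configurations:

```python
def cluster_size(start, neighbors, layer_cap=10**8):
    """Grow the resonance cluster layer by layer.

    Only three layers live in memory at once: the previous layer, the
    current one, and the one being built.  In a breadth-first search a
    neighbour of a layer-k configuration lies in layer k-1, k or k+1,
    so this is enough to avoid counting any configuration twice.
    """
    prev, cur = set(), {start}
    total = 1
    while cur:
        nxt = set()
        for conf in cur:
            for nb in neighbors(conf):
                if nb not in prev and nb not in cur:
                    nxt.add(nb)
        if len(nxt) > layer_cap:        # the limiting factor in practice
            break
        total += len(nxt)
        prev, cur = cur, nxt
    return total

# toy check: single-bit flips on a 4-bit hypercube reach all 16 states
assert cluster_size(0, lambda x: [x ^ (1 << i) for i in range(4)]) == 16
```

In the real calculation each set would hold configurations of occupied one-particle states, and `neighbors` would return those connected by a resonant two-electron transition.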
The first thing you see is that 20 always behaves very strangely; 20 is too small to really see the asymptotic behaviour. In the first part of the plot the curves behave well, but then we have this kind of shoulder, and as I will show you later, this shoulder is due to fluctuations. So this smooth behaviour is modified by fluctuations. The idea is that as size increases, eventually there are no more extended trajectories, so I think the asymptotic behaviour should look something like this in the very localized regime. This is a slightly smaller disorder — the same thing. In the end this will be localized, and I think there is no question here. [Question from the audience about the previous slide.] No, no: 10^8 is the size of a layer. The cluster can be thousands of millions of configurations, much, much larger. In the localized regime you are always far from the layer-size limit — you have some kind of fractal behaviour that keeps you going forever without touching these limits — so I can deal with huge sizes. In these figures I never reach saturation; everything here is under control. In the next slide this is not the case, but here everything is under control. Okay, so this is the interesting region, which is out of control. I continue lowering the disorder; here the behaviour is the same. I keep changing the vertical scale; the horizontal scale is basically the same. By the way, a value of one around here means that one state occupies the whole phase-space volume. Disorder seven, I think, is still well localized, and six, I think, is around the critical region. Here you see that the first parts of the curves basically coincide, and all the curves seem to saturate at a given value here.
So I think that up to now everything is consistent with having insulating states for disorder above six and a transition around six. What is not so clear is what happens for disorder below six. Let us concentrate first on four. Four is a good metallic sample, and here the situation is also pretty clear: as we increase the size, the curves tend to one. So four is metallic, but five is not so clear. Five should also tend to metallic behaviour if we expect the transition to be at six. But here, you see, I have to cut the curve sooner to stay under control, because I reach my storage limit; up to this point I am pretty sure of the data. This is the largest size I can handle, and it seems to saturate here. So one could think that there is an intermediate region with fractal states or something like that; that is compatible with all the results I get. These are my conclusions so far. There is a transition around six. Above this transition the system is localized — no question about that. Below disorder four the system is metallic and well behaved. Between four and six I really don't know what to say, and all the problems are created by fluctuations, not by the fact that the phase space is very large. I can say that because I did a simulation on the hypercube. Boris asked me whether there are simulations on the Bethe lattice or a similar lattice, and I think the best thing is the hypercube — it is a similar thing. Since there were no simulations on the hypercube, I did one; the only difference is that I don't conserve the number of particles. On the hypercube you see that there is a well-defined transition. The insulating part is the same as before, but we don't have the shoulder, because the shoulder is produced by fluctuations, and here it vanishes.
But here you have a well-behaved metallic part: all the curves above the transition are flat. These are the extended states. Unlike in the other case, there is no region that seems to saturate to something decreasing; here everything is flat. So my conclusion is that on the hypercube, percolation produces a very well-defined transition, while in many-body localization there may be an intermediate region with fractal behaviour due to fluctuations. Of course, I don't know whether these fluctuations are only there for the sizes I am considering, of the order of 100, and disappear for size 1000. Then I also studied, as I said, the fluctuations in the number of particles. I have a periodic system, and I calculate, when I change from one configuration to another, how many particles cross an artificial, virtual surface, and I study the variance of this number as a function of disorder and size. What we expect is that if the system is localized, this variance is constant — only states near the surface contribute — while if the system is metallic, it should be proportional to L. So I plot here sigma squared divided by L: in the localized regime these curves should go to zero, faster and faster as the system size increases, while in the metallic regime they should go to a constant. Here you see that this is really the expected behaviour. So again, the insulating regime is well under control — all these samples are insulating — and the transition is somewhere around here. If this were a normal transition, I would probably expect all these curves to do the same as over here, in a symmetric way. The 20 does this thing, but 20 is very anomalous, as we saw before, so it is really unreliable. For the other sizes, I don't know whether they are more or less straight. I am making a big effort to get more precision here, but that is a very difficult region to study.
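The crossing count itself is simple bookkeeping: for a cut placed between two sites, the net number of particles that crossed equals the change in occupation on one side of the cut. A minimal sketch, in the open-chain picture (with periodic boundaries one would track the second surface as well); the function name is illustrative:

```python
def crossings(init, final, cut):
    """Net number of particles transferred across a virtual surface
    placed between sites cut-1 and cut, measured as the change in
    the total occupation to the left of the cut.

    init, final: occupation lists (0/1 per site) before and after.
    """
    return sum(final[:cut]) - sum(init[:cut])

# example: a particle moves from site 2 to site 0, crossing a cut at 1
assert crossings([0, 0, 1, 0], [1, 0, 0, 0], cut=1) == 1
```

The variance quoted in the talk would then be the variance of this number over initial configurations and disorder realizations.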
So I don't know whether they continue like this and then have another transition somewhere here; it is unclear. Again, the situation between four and six is very unclear, and this new measurement does not shed more light on this intermediate region.

Okay, so here I start the second half of my talk, which has relatively little to do with the first part. The idea is to obtain local integrals of motion. There are many papers on that, but for my practical purposes they are too abstract — they involve integrating over infinite times and things like that — so for practical simulations I could not find any of them that works properly. Last year I spent a year at Santa Barbara, and I had a lot of free time, so I dedicated a lot of it to this problem. Let me talk about Santa Barbara later; first, the idea of what we want to solve with these local integrals of motion. This is the same Hamiltonian as before; I have just written these two operators as density operators. The idea is to make some kind of unitary transformation that transforms the number operator into a new operator such that, in this new basis, the Hamiltonian becomes classical. Then everything is easy; this is, in fact, a way to diagonalize the Hamiltonian. There are theorems that prove that these things exist, and, as I said, ways to obtain them, but at least I could not make them work in practice. In parallel — since the size of the Hilbert space diverges exponentially — I kept trying to perform a change of basis of the annihilation operators, to follow the idea of Basko et al. So I was playing with that, and I was talking to everybody in Santa Barbara about how to do it.
And in the end I found a postdoc, and we started to work on it; we were relatively successful, although not completely. Then I went back to Spain, and he went back to Holland to get married. After getting married, on his honeymoon, he started to work extremely hard on this problem, and he came up with a complete solution. I am extremely happy with it. He put the publication on the arXiv, and I have been working since then on all the practical details of the implementation. I think it is a wonderful practical approach for numerical simulations of many-body localization, and I will try to present the main results here. The procedure is based on what are called displacement transformations. Take — just for generality — any operator X of this form: a set of number operators together with creation and annihilation operators, where I assume these indices are never the same and there is the same number of creation and annihilation operators, so that particle number is conserved. The transformation is the exponential of lambda times this operator minus its adjoint, where lambda is a parameter that we will adjust. In practice, for this problem, the X I will consider is just the operator multiplying the interaction, but what I am going to say is very general. It is easy to check that if X is of this form, then — with this definition — the transformed operator has only these three terms. This one is a quantum term, but this one is just a product of number operators, which is the type of term we want. So first, let us transform the number operator: inserting the transformation here and here, the number operator transforms into itself plus a quantum term and a classical term.
Then the sign here depends on whether the index is among the first two indices: plus or minus. The interaction in the Hamiltonian is a term like this, and under this operation it transforms into new quantum terms and, again, a classical term. If I put these into the Hamiltonian, the transformed Hamiltonian is the following: the original part, which comes from here; then, transforming n, I obtain this term with the changed signs — the changes of sign come from this sign change; and the interaction gives me this cosine term. All these terms multiply the quantum term, and the classical term is multiplied by that. Okay, what we want to do is cancel the quantum term, so we choose lambda such that this whole factor becomes zero. [Question: what is phi?] Phi is the site energy; I call it that — here it is. So the idea is to cancel the quantum term by setting this equal to zero, which gives lambda equal to this expression — exactly the ratio we found before. The only things that remain in the Hamiltonian are these classical terms. To show that this works, let me give a very easy example — I hope you don't mind that it is almost trivial. Let us solve with this procedure the problem of one particle on two sites with different energies. Usually we have the site energy on site one, the site energy on site two, and the hopping element, and what we usually do is solve for the eigenvalues of this problem. I am going to do the same with a displacement transformation. This is my Hamiltonian, and the hopping is what I want to get rid of this time. My X now — the previous expression was fairly general — is this.
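This two-site example is easy to check numerically. In the sketch below (the site energies and hopping are arbitrary illustrative numbers, not values from the talk), the displacement transformation reduces to a 2×2 rotation of the one-particle basis, and choosing tan 2λ = 2t/(φ₁ − φ₂) cancels the off-diagonal quantum term and reproduces exact diagonalization:

```python
import numpy as np

phi1, phi2, t = 1.3, -0.7, 0.5          # illustrative site energies and hopping
H = np.array([[phi1, t],
              [t,    phi2]])

# choose lambda so that the off-diagonal ("quantum") term cancels:
#   tan(2*lambda) = 2t / (phi1 - phi2)
lam = 0.5 * np.arctan2(2 * t, phi1 - phi2)
c, s = np.cos(lam), np.sin(lam)
R = np.array([[c, s], [-s, c]])          # the displacement transformation
Ht = R @ H @ R.T                         # acting on the one-particle space

# Ht[0, 1] is ~0 (the quantum term is gone) and the diagonal of Ht
# matches np.linalg.eigvalsh(H), the exact eigenvalues.

# a particle initially on site 1 ends with occupation sin^2(lambda) on site 2
n2 = s**2
```

This is the same result as diagonalizing the 2×2 problem directly; the point of the method is that this elementary step can be iterated systematically.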
So the lambda that I get, from the same analysis as before, is this ratio. The number operator one transforms like this, and number operator two like that. If initially I put the particle on site one, with site two empty, then the occupation of site one becomes one minus sine squared of lambda, and the other sine squared. Working out the tangent, you can easily check that sine squared is this expression. So I obtain exactly the same result as diagonalizing the two-site problem, with about the same amount of effort — but the point is that this method can be repeated systematically and very efficiently. So let us continue with it. Once we know how one transformation operates, the question is how to apply them successively, one after the other. The idea is to arrange all the interaction elements in order of V over delta E, so that we start with the largest ones. The transformations are coupled, of course: when you eliminate one quantum term, you generate two other quantum terms, but usually with smaller matrix elements. Taking that into account, you keep performing transformations starting from the highest values of this parameter. One can check that the procedure converges extremely fast, and an important point is that when we apply a transformation with a given number of operators, it never generates terms with fewer operators, so the procedure is always stable, in the sense that what I am doing now is eliminating all the four-operator terms in the Hamiltonian. Typically, for a precision of 10 to the minus 3, in the critical region I have around a thousand transitions — a thousand matrix elements — to consider.
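The bookkeeping of this iteration can be sketched with a priority queue that always eliminates the largest remaining |V/ΔE|. The operator algebra is replaced here by a placeholder: each elimination spawns a couple of new terms shrunk by an assumed factor, which is enough to illustrate why the schedule terminates; none of the numbers come from the talk.

```python
import heapq

def eliminate_terms(terms, tol=1e-3, shrink=0.3):
    """Schematic scheduling of displacement transformations.

    `terms` is a list of |V/dE| ratios.  We always eliminate the
    largest remaining term; in the real algorithm each transformation
    generates a couple of new quantum terms with smaller matrix
    elements, mimicked here by a placeholder shrink factor < 1.
    Returns the number of transformations applied.
    """
    heap = [-r for r in terms if r > tol]    # max-heap via negation
    heapq.heapify(heap)
    n_applied = 0
    while heap:
        r = -heapq.heappop(heap)             # largest remaining ratio
        n_applied += 1
        for new in (shrink * r, shrink * r / 2):   # assumed offspring terms
            if new > tol:
                heapq.heappush(heap, -new)
    return n_applied
```

Because the offspring are strictly smaller, the loop always terminates once every generated term falls below the chosen precision — the "about 10% more" transformations mentioned next.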
And then, due to the generated terms, I have to consider about 10% more, so in the end transformations on the order of a thousand give very good convergence. The full unitary transformation is, of course, just the ordered product of all these displacement transformations. So, the results, as an example — these are still preliminary. In fact, I had to make two small approximations. [Looking for the correct slide.] Okay, this is the right figure. I am plotting here this average value: my initial configuration is a set of occupied and empty one-electron states, and I calculate, in the new interacting basis, the probability that they remain. In the very localized regime we assume they remain in the same state, and I plot this as a function of disorder. The maximum is one half: in a metallic regime this quantity should be one half. Here the system becomes unstable, probably due to the two approximations I am making — I keep them for the moment because I did not have time to remove them; these results were obtained this morning, the last point included (I had to get up at seven in the morning to get it). The two approximations are very well under control in the whole localized regime; in the extended regime I don't know how they will behave, but I will implement the full calculation soon. What you see is that as you lower the disorder, states become more and more extended, so the probability of remaining in the original state decreases, and in the end it is one half.
So here there is an extended regime, and here there seems to be a transition, I would say around here. Well, this is just the last slide, to conclude with the things one can calculate with this procedure. First, one almost trivial extension is the correlation between two number operators. A very easy thing to do is to calculate the quantum Coulomb gap: putting in a long-range interaction, of course, you generate new extra terms in the Hamiltonian involving two density operators, which will modify the shape of the Coulomb gap, and I think the transition can probably be studied by seeing what happens to the Coulomb gap at the Fermi level. One can also do level statistics. The same procedure applies in any dimensionality, so one can go to higher dimensions. And, of course, one important thing is to study the convergence of the method, mainly in the extended regime. Well, thank you very much for your attention.