Good afternoon, everyone, and thank you to the organizers for inviting me to speak at this conference. I will address a problem that was partially touched upon in the previous talk: how tensor network methods can be used for benchmarking quantum simulations. More specifically, I will talk about the work we have been doing recently in our group, and the question we are going to address is this: how efficiently can we simulate high-dimensional systems? As we saw in the last talk, for one-dimensional systems we have very well-established solutions, but for high-dimensional systems the question is still under debate. Why do we ask ourselves this question? Because recently many experimental platforms for quantum many-body systems, such as ultracold atoms in optical lattices, chains of neutral atoms excited to Rydberg states, superconducting qubits for the realization of quantum computers, and trapped ions, have reached a level of control over isolated quantum many-body systems, at sizes large enough, that one asks: what can we actually do with these systems? Are we really close to what Feynman envisioned for the realization of a quantum computer?
What we exploit in these systems, characterized in principle by a large number of constituents, is the fact that they develop quantum correlations leading to entanglement, a purely quantum phenomenon that can help us simulate these phenomena more efficiently than we can with classical systems. We can use them, for example, to study quench dynamics or ground-state properties. In the quantum simulation field, for instance, this picture is from a study where trapped ions were used to quantum-simulate, really in a laboratory, the pair production of an electron and a positron in one-dimensional QED. The platforms I mentioned before are also used for the realization of qubits and registers of qubits, and for performing quantum computation. However, the level these platforms have reached so far is that of noisy intermediate-scale quantum computers: there are still many errors due to the environment and to imperfections in the realization of the protocols, and at the end of the day we need classical numerical simulations to benchmark and validate these experiments. This means we might think we have to deal with the exponentially large Hilbert space in which the state of these platforms lives. So we ask ourselves how much entanglement we are able to encode when we perform our classical numerical simulations, because the point is that we need so many resources precisely to capture all the correlations among the constituents of the system. Luckily, the answer is encouraging from the numerical point of view.
We should not only ask how much entanglement we are able to encode, but also how much entanglement we actually need to encode, because, apart from very peculiar classes of systems, all the interesting physics is contained in a small subset of the full Hilbert space of the quantum many-body system, whose properties I will explain now. Let us consider a one-dimensional model with L sites, for example a lattice where each site has a local Hilbert space of dimension d, which means the dimension of the full space is d^L, and a local Hamiltonian written as a sum of local terms. A large class of systems under these conditions is characterized by the fact that the correlations between the constituents of the lattice decay very fast in space, exponentially, or as a power law in some peculiar cases. This means that if we consider two subsystems A and B of our lattice, and we compute the reduced density matrix of one subsystem in order to compute the entropy that characterizes the entanglement between them, there is a class of states obeying the so-called area law: the entanglement between the two partitions A and B scales like the size of the boundary between the two subpartitions. In a one-dimensional system this is good news for us, because it means the entropy is constant in the size of the system, as the boundary does not depend on L. But this is not the case in higher dimensions, and this is what will lead to the core of our work.
How do we deal efficiently with the case of a two-dimensional system, where, if we consider a subpartition of the system and its complement, the size of the boundary between the two scales like the linear size of the system? Just one minute, I got a chat message saying there is no audio; is it just with one participant, or with everyone? Okay, it seems to be an individual connection problem, so I am sorry for the interruption and I will continue. So, at the end of the day, we must find a way to efficiently represent a quantum many-body state living on, for example, a two-dimensional lattice, in a way that captures the area law of the system. An answer is given within the framework of tensor network methods, and I will introduce them by talking about matrix product states. Tensor networks allow an efficient compression of the information contained in a quantum state. Let us come back to the one-dimensional lattice and consider a generic state psi on it. The coefficients needed to represent this state can be seen as a rank-L tensor, where each leg has the local dimension d of the corresponding site. This is a very inefficient way to describe the state. What we can do instead is represent our state not as one rank-L tensor but as a product of local tensors, where we have introduced an auxiliary parameter, called the bond dimension, which we can tune in order to control the amount of information we include in the description of our system.
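The matrix-product construction just described can be made concrete with a small sketch: sweeping across the chain and splitting the rank-L tensor one site at a time with truncated singular value decompositions. This is an illustrative toy implementation (assuming a state vector small enough to hold densely in memory), not production tensor-network code.

```python
import numpy as np

def state_to_mps(psi, d, L, chi_max):
    """Decompose a dense state vector of L sites (local dimension d)
    into a matrix product state by left-to-right truncated SVDs."""
    tensors = []
    remainder = psi.reshape(1, -1)          # left bond starts with dimension 1
    for site in range(L - 1):
        chi_left = remainder.shape[0]
        # group (left bond, physical leg) against the rest of the chain
        m = remainder.reshape(chi_left * d, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        chi = min(chi_max, len(s))          # truncate to the bond dimension
        tensors.append(u[:, :chi].reshape(chi_left, d, chi))
        remainder = s[:chi, None] * vh[:chi]  # absorb singular values rightward
    tensors.append(remainder.reshape(remainder.shape[0], d, 1))
    return tensors
```

With `chi_max` large enough the decomposition is exact; lowering it is precisely the controlled compression discussed above.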
I think about this bond dimension m in two ways. The first is that it is a sort of interpolation between a mean-field description of the state, where each site is considered fully on its own and we completely neglect all quantum correlations, and the full, exact description of the quantum state. On the other hand, if we take a bipartition of the system into subsystems A and B, we can see the bond dimension m as a truncation of the Schmidt decomposition that we would need to describe the state psi in terms of states of the subsystems A and B. Now let us recall that the entropy for a one-dimensional chain is constant with the system size, and that, for what I just said, the entropy encoded by the ansatz scales like the logarithm of the bond dimension. This relation tells us that for one-dimensional systems our life is more or less easy: we can find an optimal m such that we are able to describe the entropy independently of the size of the system. This allows us, for example, to solve a variational problem for the search of the ground-state energy, where we take the state psi and its conjugate and sandwich the Hamiltonian, written in a proper way as a product of local tensors, a so-called matrix product operator.
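The Schmidt-truncation picture above can be sketched numerically: the entanglement entropy of a bipartition follows directly from the singular values of the reshaped state, and with chi Schmidt values it is bounded by log(chi). This is an illustrative helper of my own, assuming again a small dense state vector.

```python
import numpy as np

def entanglement_entropy(psi, d, n_left):
    """Von Neumann entropy of the bipartition after n_left sites,
    computed from the Schmidt (singular) values of the reshaped state."""
    m = psi.reshape(d**n_left, -1)
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2                       # Schmidt weights
    p = p[p > 1e-15]               # drop numerical zeros
    return -np.sum(p * np.log(p))
```

For a product state the entropy vanishes, while a maximally entangled pair of qubits gives log 2, the maximum reachable with bond dimension 2.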
But things change when we ask ourselves, for a two-dimensional system, which geometry we should choose. First of all, let me say that there is a plethora of proposals, each of them characterized by pros and cons in terms of the quantities one is able to compute within a given geometry, such as local observables and the entropy, and also in terms of the computational complexity of choosing one geometry or another. For example, a very intuitive one for the representation of two-dimensional systems is PEPS, which stands for projected entangled pair states. On the one hand it is good for studying two-dimensional systems, because there are as many legs crossing the boundary of a given subpartition as the linear size of the system, so the entropy is well captured; on the other hand, the computational complexity of optimizing such a structure is quite high with respect to the one-dimensional case. Another structure, and it is the one we are going to focus on, is the binary tree tensor network. The main difference between these two structures is that in the tree we do not have just a single layer of tensors: we have several layers of tensors arranged in a tree, and every time we go one layer up we are singling out subpartitions of the system connected via these links, until we arrive at the top link, which is the only link connecting the two halves of the system. Now ask yourselves what happens when we want to compute the entropy for this system: in the same way as we have seen for the one-dimensional system, we choose a bipartition and compute the entropy.
In order to compute it, we need to consider the amount of information encoded in the topmost tensor of the tree. But here we have a problem, because we have only one tensor, with one bond dimension m, that has to contain all the information spread along the boundary of the system. So even though tree tensor networks are good because they are computationally efficient, and they are good for studying critical systems as they are scale-invariant, in this shape they are not good at capturing the area law of the entropy. And here is where the core of our work is contained, as I am going to introduce what we call the augmented tree tensor network. Consider the lattice from a top view; here I am drawing just one layer, but consider this as a two-dimensional binary tree tensor network. These orange squares represent the terms of the Hamiltonian of our system, each attached to the corresponding local part of the tree. What we do is insert an additional layer of operators, which we call disentanglers, between the operators of the Hamiltonian and the tree tensor network state. We are not putting them all over the tree; we put them in particular positions, because, as we have seen before, there are some portions of the two-dimensional lattice that are described worse than others in terms of the entropy. For example, look here: these are the two halves of the lattice, and all the correlation between them is carried by just one link. For this reason we want to place these additional tensors where they can reinforce the amount of information we are able to encode along this boundary.
In this way we can estimate the amount of additional information that we are providing to the description of the system: each single disentangler acts on a couple of sites with local dimension d, which means the amount of entanglement each disentangler can contribute to the state is d squared. So if we have K new disentanglers for each layer of the tree, we can derive an effective bond-dimension gain. I say for each layer because, as you see, here we are helping the encoding of the information across these links, but going one layer down we can again find the weakest points in the lattice and reinforce them with disentanglers; I will come back to the point of how disentanglers are placed later. Now I would like to sketch how the optimization scheme goes. What one usually does, when looking for, say, the ground state of a Hamiltonian with a tree tensor network, is to optimize the variational state with respect to the Hamiltonian, and this procedure is iterated until convergence is reached. Here instead we have an additional step, because we are not dealing simply with the Hamiltonian of our model, but with an effective Hamiltonian consisting of the original Hamiltonian plus the disentanglers that we have placed on the lattice. I just mention, without going into detail, that after absorbing the disentanglers we obtain an effective Hamiltonian with the same shape as the original one.
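The d-squared counting just mentioned can be turned into a back-of-envelope estimate. Assuming, as the talk's counting suggests, that each two-site disentangler crossing a cut can raise the effective rank across that cut by at most a factor of d squared (the maximal operator Schmidt rank of a two-site unitary), K disentanglers on a boundary give the following bounds. This is my own illustrative arithmetic, not a formula quoted from the work.

```python
import math

def effective_bond_dimension(m, d, K):
    """Upper bound on the effective bond dimension across a cut crossed
    by K two-site disentanglers, each of operator Schmidt rank <= d**2."""
    return m * d**(2 * K)

def max_entropy_gain(d, K):
    """Corresponding upper bound on the extra entanglement entropy,
    since entropy scales like the logarithm of the bond dimension."""
    return 2 * K * math.log(d)
```

For qubits (d = 2), three disentanglers on a cut can boost a bond dimension of 100 to an effective 6400, that is, up to 6 log 2 of additional entropy.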
The point is that the iterative procedure for the search of the ground-state energy and state is now made up of two steps: a first one in which we optimize the contents of the disentangler tensors while keeping the tree tensor network state fixed, and another in which we update the effective Hamiltonian with the disentanglers and then optimize the tree tensor network state. As I said before, I would like to say something about how we place the disentanglers on the lattice. It is worth placing them along the weakest boundaries of the lattice, but, on the contrary, placing all the possible disentanglers wherever we want is not efficient. Indeed, the main rule we adopt is this one: if two sites are connected by a local interaction term, we do not attach two different disentanglers to those two sites. Why do we do this? Because from a computational point of view we want the optimization of the disentanglers to be run in parallel, which means that the optimization of each disentangler must be completely uncorrelated from the optimization of the others, and this would be impossible if they shared any interaction term. To be more specific, in the following I will show the results obtained for two models, the Ising and Heisenberg Hamiltonians, and, for example, you see that, following this rule, no other disentanglers are attached in this area.
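The placement rule above, that no two disentanglers may touch sites connected by an interaction term, can be sketched as a greedy filter over candidate positions. This is a hypothetical helper of my own for illustration; it is not the group's actual placement code, and the candidate ordering (top boundaries first) is assumed to be supplied by the caller.

```python
def place_disentanglers(candidate_pairs, interaction_pairs):
    """Greedily accept a candidate disentangler (a pair of sites) only if
    none of its sites interacts with a site already covered by one."""
    neighbors = {}                    # sites linked by an interaction term
    for a, b in interaction_pairs:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    covered = set()
    placed = []
    for pair in candidate_pairs:
        blocked = any(s in covered or (neighbors.get(s, set()) & covered)
                      for s in pair)
        if not blocked:
            placed.append(pair)
            covered.update(pair)
    return placed
```

On a nearest-neighbor chain 0-1-2-3-4-5, a disentangler on (0, 1) blocks one on (2, 3), because sites 1 and 2 interact, while (4, 5) remains allowed; this is exactly the independence needed to optimize each disentangler in parallel.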
So you see that the disentanglers I mentioned when I introduced them are these ones, and the procedure by which we decide where to place them goes from the top layer to the bottom one, placing as many disentanglers as we can along the boundaries singled out by the tree, layer by layer: this one, then this one, then these ones, and so on. (The slide advanced by itself.) I also wanted to add that it is of course possible to extend this procedure to models where the range of the interaction is larger, as is the case, for example, for a lattice of interacting Rydberg atoms; it is just enough to put fewer disentanglers. Since each disentangler brings a contribution to capturing the entropy of the system, at a certain point, for very long-range systems, this approach becomes less effective; even so, it helps to improve the precision of the simulation when we reach large sizes. Now I want to show you some of the results of our simulations, and I would like to start from the Ising model in two dimensions. On the x-axis there is the bond dimension, and we expect, of course, that by increasing the bond dimension we approach the exact description of the state, so we expect to obtain more and more precise results. The error is computed in this way: we compute the ground-state energy for different values of the bond dimension, and we extrapolate the value corresponding to m going to infinity. We assume this value to be the true value for the ground-state energy, and we take the error as the difference between the energy at a given bond dimension and the energy at infinite bond dimension as we have estimated it.
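The error estimate just described can be sketched in a few lines. The talk does not specify the fit form used for the extrapolation, so the linear fit in 1/m below is my own illustrative assumption; any monotone fit form would follow the same pattern.

```python
import numpy as np

def extrapolate_energy(ms, energies):
    """Estimate E(m -> infinity) by a linear fit of E against 1/m
    (an assumed scaling form), and return the per-m errors
    E(m) - E(infinity) used in the convergence plots."""
    x = 1.0 / np.asarray(ms, dtype=float)
    slope, e_inf = np.polyfit(x, energies, 1)   # intercept is the m->inf limit
    errors = np.asarray(energies, dtype=float) - e_inf
    return e_inf, errors
```

On synthetic data obeying E(m) = E_inf + a/m exactly, the fit recovers E_inf and the errors decay as 1/m, mirroring the behavior one hopes to see on the real energy curves.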
This is the case of an 8 by 8 lattice, and we see that the standard tree tensor network ansatz and the augmented one give more or less the same results. But things change a lot when we go to much larger sizes: when we consider a 64 by 64 lattice, we see that the augmented tree tensor network provides a much more precise description of the ground state of the model and reaches a lower error. The other case I would like to mention is the Heisenberg model, which we have studied at criticality. The results in this plot represent the state of the art in the field of simulating the ground-state energy of the Heisenberg model, and the errors are taken with respect to the quantum Monte Carlo results, at least up to the sizes where those are available. What we see is two things: if we look at the tree tensor network approach, we can arrive at sizes up to 32 by 32, for which there are actually no quantum Monte Carlo results; and when we consider the augmented tree tensor network, we see that we can improve on most of the commonly used techniques and, moreover, reach unprecedented sizes in the simulation. Now I would like to talk about the second application of our method that we have considered, and let me introduce it a bit.
We have focused on the Rydberg atom platform, and I will consider the one-dimensional case to explain it. We have a chain of neutral atoms trapped in an array of optical tweezers, with distances between the atoms of the order of microns. This is a very large distance, which means that single-atom control of the levels is actually available. By means of properly combined laser beams, we can establish an effective Rabi frequency, which allows us to excite these neutral atoms to highly excited Rydberg states, meaning principal quantum numbers of the order of 60 or 70. When the atoms are excited that much, they are able to interact strongly among themselves, even at the large distances I mentioned before. So the Hamiltonian is composed of the Rabi part, which couples the ground state and the excited Rydberg state; the detuning, which controls the occupation of the Rydberg state; and then the interaction between the atoms, which, in the case of isotropic Rydberg states, scales like one over r to the sixth. An important feature is that these interactions are effectively very local: they affect nearest neighbors or next-nearest neighbors, but not more.
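The three terms just listed (Rabi coupling, detuning, and the 1/r^6 interaction) can be written down explicitly for a small chain. The dense-matrix construction below is an illustrative toy of my own, with unit lattice spacing and open boundaries assumed, not the group's production tensor-network code.

```python
import numpy as np

def rydberg_hamiltonian(L, omega, delta, v):
    """Dense Hamiltonian for an L-atom Rydberg chain at unit spacing:
    H = (omega/2) sum_i sx_i - delta sum_i n_i
        + sum_{i<j} (v / |i-j|**6) n_i n_j."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    n = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto the Rydberg state
    eye = np.eye(2)

    def op_at(op, site):
        out = np.array([[1.0]])
        for k in range(L):
            out = np.kron(out, op if k == site else eye)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):
        H += 0.5 * omega * op_at(sx, i) - delta * op_at(n, i)
    for i in range(L):
        for j in range(i + 1, L):
            H += (v / abs(i - j)**6) * (op_at(n, i) @ op_at(n, j))
    return H
```

The blockade is already visible in this toy: for two atoms with omega = 0, delta = 1, and v = 100, doubly exciting costs 98 while a single excitation gains 1, so the ground state holds exactly one excitation.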
The interaction is so strong that if a couple of atoms are close enough, they cannot both be excited at the same time; this is the Rydberg blockade. These platforms are nowadays very well studied: in very recent experiments, for example, it has been possible to study exotic quantum phases, and here we are looking at results from a quantum simulator of 51 atoms trapped in an array of optical tweezers. It is also possible to arrange the neutral atoms in very fancy geometries; this is just a showcase from the group in Paris that realized these experiments, to show that several geometries can be realized. And, very importantly nowadays, Rydberg atoms are also considered for realizing quantum gates for programmable quantum computers. That is the reason we decided to compute the ground state of a two-dimensional Rydberg atom lattice. The Hamiltonian is the same as before, just on a two-dimensional lattice. As I said before, the range of the interaction is in principle infinite, and we need to truncate it manually; we truncate it at the fourth-nearest neighbor, so that is the largest interaction we consider. We want to look at the phases we find as we change the distance between the atoms, which means changing the interaction, and the detuning delta, while the Rabi frequency omega is kept constant. So for each change of the distance between the atoms, and hence of their interaction, we will observe different phases as the blockade radius changes in units of the lattice spacing. This is the phase diagram of the ground state that we observe.
First of all, at small detuning there is what is called a disordered phase, where no localized excitations appear. As we move to larger values of the detuning, for low values of the interaction we observe a checkerboard phase configuration of the atoms, while for atoms that are closer, which means higher interaction energy, the diagonal positions are also forbidden by the blockade radius, and we find this Manhattan-like configuration of localized excitations. To characterize these phases more quantitatively, we have computed the static structure factor, which allows us to distinguish the two phases by the peaks that emerge: the Z2 phase is characterized by a peak at the (pi, pi) position, while the Z4 phase has its characteristic peaks at the (0, pi) and (pi, 0) positions. Further study of the phases can be done, for example, by looking at the staggered magnetization, where we see a staircase behavior corresponding to the disordered and ordered phases. And we can characterize the type of quantum phase transition by looking at the second derivative of the energy with respect to the detuning; here I am considering the behavior along this line. We see that there are two phase transitions of second order, and in order to precisely place the transition points we have performed a finite-size scaling, with the errors provided by the comparison between this estimation and the one obtained from the computation of the derivative. And a very nice fact is that after we put this work online, the experiment was actually done at the Harvard laboratories, and they were able to study the phase diagram of a 16 by 16 lattice.
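The structure-factor diagnostic just described can be sketched for classical occupation patterns. In the actual study the structure factor is built from the measured correlations of the ground state; the single-configuration version below is my own simplification for illustration.

```python
import numpy as np

def structure_factor(config, kx, ky):
    """Static structure factor S(k) = |sum_r n_r exp(i k.r)|**2 / N
    for one occupation configuration on an L x L square lattice."""
    L = config.shape[0]
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    amp = np.sum(config * np.exp(1j * (kx * x + ky * y)))
    return np.abs(amp)**2 / config.size
```

For a checkerboard pattern, the excited sublattice alternates in both directions, so the signal concentrates at (pi, pi) and vanishes at (0, pi), which is exactly the fingerprint used to tell the Z2 phase apart from the Z4 one.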
They observed this phase, which they call the checkerboard, this one here, and the striated one, which is this one here in the phase diagram, and these match the phases we predicted in our calculations; however, we did not reach interactions large enough to observe this other phase here. So now I can conclude, and I hope that I convinced you that tensor networks allow us to tackle problems which otherwise cannot be easily handled; in particular, we have focused on tree tensor networks and augmented tree tensor networks for studying high-dimensional problems. The open question that remains from this work is that while for tree tensor networks the computation of the entropy is easy, this is not the case for the augmented tree tensor network: we know that the state describes the entropy of the system better, but the actual computation of the entropy is so far not computationally efficient. The next step we want to tackle is to include the possibility of computing dynamics in our systems. With that I have concluded; I would like to thank you for your attention, and I will be happy to answer your questions. [Chair] Thank you for a very nice and clear talk. Questions? I will start: could you please explain again why, for a long-range system, having more disentanglers makes the augmented tree worse? [Speaker] Let me come back to this slide. As I said before, we need to place the disentanglers in such a way that there are no two different disentanglers on a couple of sites connected by an interaction term.
This means that, for example, here, for nearest-neighbor interactions, we can have eight disentanglers, while there, due to the fact that the interaction range covers this blue area, I can put fewer of them. And at the end of the day, the amount of improvement I can induce in the description of my state is smaller. That is the point: it is not that they do not work, simply the efficiency of this technique depends on the balance with how many disentanglers you are able to place. [Chair] Okay, makes sense. Another, maybe stupid, question: you talk about the range of the interaction, but would the situation be the same if you had a longer range of hopping in the system? [Speaker] You are asking whether, instead of having diagonal long-range interactions, I had long-range hopping? Yes, it is the same. When I say a long-range term, I mean whatever kind of interaction term. [Chair] Thank you. [Audience] If I may, I wanted to ask about a more technical aspect. When you perform the contraction of all the indices of your tensor network in order to compute things, do you perform the contraction exactly, or do you make some approximation to keep things manageable? [Speaker] It is standard, when you want to optimize a tree tensor network or an MPS, to truncate up to the maximum bond dimension value that you have fixed. In particular, here, in principle we might also consider enlarging the size of this link when we split here, but, for example, we do not increase it at all.
So the size of this link, when we truncate it, is fixed at the beginning and does not depend on the maximum bond dimension we consider. If we did not do this, we would end up with an exponentially large bond dimension, and all the efficiency of the simulation would be lost. [Audience] Okay, thank you. [Audience] Maybe, if I may: did you try to compare this method with several other different techniques? [Speaker] We started from tree tensor networks, as the computational complexities of the different geometries are known: for example, the computational complexity of working with PEPS scales like the bond dimension to the tenth power, while the complexity of tree tensor networks is m to the fourth, which is one of the most efficient geometries; apart from the limitation in fully capturing the area law, this is why we started from tree tensor networks and applied the disentanglers to them. [Audience] Okay, but did you pick one example, run the several methods on it, and compare with the literature? [Speaker] Yes, that is this plot here, for example; these are the known results. Actually, let me add that these simulations, even though we ran them on a standard cluster, which means one node and so on, take times of the order of months for the 32 by 32 case, computing all the ground states and so on. So it is not feasible to run all the techniques ourselves; you would need an optimized code for each of them. [Audience] Yeah, okay, yes. Sorry, I need to raise my hand like this because I do not have the button. About this plot: this is the antiferromagnetic Heisenberg model, I see.
[Audience] Do you have to impose any kind of sign structure on your tensor network? [Speaker] No, and thank you for raising that, because the fact that tensor networks are not affected by the sign problem is a great advantage of this kind of technique. You will see it tomorrow in one of the talks. [Audience] Well, I would say it is actually more than that, because variational Monte Carlo done with PEPS, for instance, which you mentioned here, is also not affected by the sign problem, in the sense that you can simulate fermions or frustrated systems without any problem; but the results are significantly improved if you actually specify in your ansatz the structure of the signs of your problem, if you know it already, as in the case of the Heisenberg model. [Speaker] We can implement symmetries, yes. I did not go into these details, but the simulation is already improved at this level as well. [Audience] Okay, so there is the possibility to superimpose a known sign structure from somewhere else and then have the code optimize the other aspects of the problem? That is something routinely done in variational Monte Carlo when possible, and it does improve the results a lot; for instance, the PEPS numbers you show here are, I think, obtained without this, and if you applied it, the precision should increase a lot. [Speaker] Thank you for the comment; I do not know the details of all the techniques. [Audience] There are a lot of techniques; I just wanted to know whether this is something that could be done for the augmented tree tensor networks. Thank you. [Audience] I have one question. Hi Simone. You are showing comparative results between the aTTN and other methods.
[Audience] I think it would also be interesting, if you have any comments, to compare different choices of positioning of the disentanglers. I understand from these graphs that there are various choices you can make of where to put them, and some of them will be better than others; the question is whether there is an optimal way, or how you would guess one. [Speaker] Thank you for the question, because it is something we have to stress when adjusting the paper. There are some strict criteria, which are the following: one is the interaction rule that I mentioned, and the other is that we privilege putting disentanglers along the boundaries associated with the topmost layers of the tree. If I go here, you see that the topmost link cuts this bipartition; then, for each link below, these two tensors divide the parts A and B respectively into two further subsystems, and so on. So at each layer we have a set of boundaries among all the bipartitions singled out by the tree structure, and our criterion is to place the disentanglers wherever possible, starting from the boundaries related to the topmost links, and then going down. For example, if you imagine placing the tree on this lattice, there are boundaries here, and sub-boundaries here and here; rather than first putting the disentanglers on these lines, which cut small subsystems that are already well described within the usual bond dimension we have at our disposal,
we prefer to enhance the information content of the links that connect the larger bipartitions. Then, given the rules I just told you, there remains some ambiguity: for example, this disentangler here might have been put here instead, and then all the others might change accordingly, but the results are not significantly affected by these small changes in the placement. [Audience] Thank you. [Chair] Any other questions or comments? If not, let us thank the speaker again; this concludes the session for today. [Organizer] I wanted to personally thank, also on behalf of the organizers, all the speakers and the people involved in the discussion today; we had lots of interesting physics going around, lots of questions, and no fights, which is always a nice thing. We will see you tomorrow morning; the first talk is at 10, so a bit later than usual. That is all, so have a good rest of the day, and see you tomorrow.