What I'm trying to talk about is something that combines two of the ideas that we've been talking about at this workshop: the idea of using integrable one-dimensional models with a truncated spectrum, and the idea of using matrix product states, basically stapling those two ideas together to try to study two-dimensional systems. First of all, what's the motivation? Obviously, we'd like to learn something about 2D strongly correlated systems. I apologize in advance for my handwriting. We'd like to know something about 2D strongly correlated physics, not least because of experiments. Obviously, there are experimental realizations of layered systems, systems that are quasi-2D, or I guess even, in the case of graphene, systems that are truly 2D, but graphene is not a strongly correlated system, so maybe I'm not interested in it. So, experimental realizations: things like the cuprates, which have been interesting for a long time and remain extremely difficult, although we heard from Philippe Corboz that there's some very exciting progress being made on the 2D Hubbard model. So things like the cuprates; things like layered quantum magnets, things that are considered to be possible manifestations of spin liquid behavior, like cesium copper chloride in the past. Then we also have things like systems of coupled quantum wires: you can imagine some kind of substrate where someone lays down many channels that are individually 1D, but there's tunneling between them of some sort.
And then of course there are cold atomic gases, ultracold atomic gases. In many cases in cold atoms, what you might do is have some kind of one-dimensional trap, but usually what they actually do is form lots of these 1D traps, many, many copies. So they have all of these tubes, and you can imagine that they can, well in fact they can, because they have such high-precision fine control over their experiments, alter the coupling between these things: they can allow hopping between the 1D atomic gases. So there's some experimental motivation. We can also think about the fact that we have all of these great techniques in 1D. In one dimension, as I said, we have matrix product states, so the numerical methods are in excellent shape. Then we have integrability, which admittedly does not cover all one-dimensional models, but it covers a library of many different types of model, with many different universality classes, many different interesting exotic behaviors. So in 1D we have great numerical and analytical understanding. There's that handwriting. This has given us some really precise understanding of exotic physics, things like spin-charge separation and fractionalization. So what we'd like to know is whether there's some way we can extend the power of these methods, even in some limited sense, to higher dimensions. That's one question. Then we'd also like to ask: can our knowledge of 1D help us decode 2D? If we have some 2D system that we can actually simulate, can our knowledge of 1D excitations somehow help us understand what's going on in 2D?
Put another way, you could also ask: how does the structure of these 1D systems change, as we couple them together and go to the two-dimensional case? So these are some of the questions we've been wondering about while we've been developing this method. First of all, we've had several talks about matrix product states now, and I think many of the experts in matrix product states have left, but some of them are still here. Philippe Corboz gave a really, really thorough coverage, I think, of matrix product states and tensor methods. But I just want to do a very brief reminder, because we've had a day in between, and obviously this is not a matrix product states conference, so not everyone knows about them; it's just to remind people of some of the terminology, really. Okay, so the idea is that we have some lattice model, say. There are things called continuous matrix product states, about which I know very little, but I think there was a question yesterday, I think Brian was asking something about the continuum; there are such things as continuous matrix product states, so you may be interested in those. But what I'll talk about is on the lattice. So we have some lattice of sites, and to each of these sites I'm going to give a local Hilbert space of dimension d_sigma. And so we have some state, which (try not to mess this up), okay, we know we can just write as a superposition of product states. So if I have L sites, I can write it as a linear superposition of the states on the sites. And the idea of matrix product states is to write this in a slightly different way. I'm assuming open boundary conditions here. Is that still legible at the back, the indices and things? Yep, okay, good. So my sigmas are good, let's not be rude. Come on, Robert, I'm British. You can't start randomly. Okay, so now I'm going to get nasty questions.
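To make the rewriting concrete, here is a minimal NumPy sketch (illustrative only, not the code used in the talk) that takes an arbitrary state vector on L sites with local dimension d and rewrites it exactly as a matrix product state with open boundary conditions, by repeated reshapes and singular value decompositions:

```python
import numpy as np

def to_mps(psi, L, d):
    """Rewrite a state vector psi (length d**L) as a list of L MPS tensors
    A[site][chi_left, sigma, chi_right], sweeping left to right with
    reshapes and SVDs. No truncation, so the rewriting is exact."""
    tensors = []
    rest = psi.reshape(1, -1)                     # (left bond) x (everything else)
    for site in range(L - 1):
        chi_l = rest.shape[0]
        rest = rest.reshape(chi_l * d, -1)        # split off one physical index
        U, s, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi_l, d, -1))   # isometry becomes the site tensor
        rest = np.diag(s) @ Vh                    # carry the remainder to the right
    tensors.append(rest.reshape(rest.shape[0], d, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS tensors back into a full state vector."""
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)

rng = np.random.default_rng(1)
L, d = 6, 2
psi = rng.normal(size=d ** L)
psi /= np.linalg.norm(psi)
mps = to_mps(psi, L, d)
# For a generic state the bond dimension at the centre reaches d**(L/2),
# which is exactly the exponential growth discussed next.
print(np.allclose(from_mps(mps), psi))  # True
```

All names here are illustrative; the point is only that the exact rewriting exists, and that for a generic state the central matrices are exponentially large.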
Okay, so the point is this: at first sight, at least when I was a graduate student, or whenever I first saw this, it looked crazy, because it looked like I'd taken something that I understood and put it into a very complicated form that looked like it was going to be much worse. But the point is that the original object obviously has some terrible behaviour: it's an exponentially large object. The idea is that in principle, to do this exactly, the size of the matrices at the centre of our system, if we consider cutting our system in half, would need to go like d_sigma to the L/2. But in practice (where's the board rubber?) that's not what we do, because that would be silly. Instead, what we do is restrict our matrices to have some maximum dimension, called the bond dimension, because it is the dimension of the index linking two sites, hence "bond". So we instead have some maximum, and if we've done that, then we have some polynomial number of coefficients instead, something that looks like this. So that's a massive saving, but obviously we've lost some information by doing that. This is a kind of compression technique: we've basically compressed our wave function in some way. Sorry, can I use the side blackboards? Thank you for permission. And yeah, I'll use them first. Right, so the idea is: how do we choose this compression, and in what way? And essentially the answer is that we do it based on singular value decompositions, Schmidt decompositions. Just in case that isn't familiar to some people: the idea is that we can take any system, any system at all, and bipartition it into two parts, A and B. Again, we can write our wave function using bases defined on A and B. But there is a transformation we can apply: we can apply a singular value decomposition to this coefficient matrix C, such that it becomes a product of three matrices.
So, two matrices with orthonormal columns and rows, and a matrix of what are called singular values. If I keep just the non-zero singular values, then this middle matrix is already square. So now I only need one sum, over s: if I perform the sums over A and B, what I've done is basically just rotate my bases. So I have some new representation in terms of some new states, which I'll label by s. The point now is that the number of Schmidt states I need to keep is the minimum of the dimension of A and the dimension of B (actually, how do I write this? Not like that, probably). So just by choosing a different basis, I already have a more compact representation. But also, if I look at the normalisation of this wave function, it's just the sum over these singular values squared. So if I choose to keep the largest, say, chi of them, where chi is my bond dimension, then I'm keeping the best representation of that state with respect to the Frobenius norm. So that's basically what you do: you treat this as some sort of variational ansatz for your wave function, you do some kind of iterative scheme with lots of Schmidt decompositions, lots of singular value decompositions, and you compress the state. Now, when does this work well and when does it not? Let's go over and use another blackboard. We can try to sum up the distribution of these singular values, because obviously how well truncation works depends on how they behave. If there's some exponential fall-off of the singular values, then we might think that if we truncate at some point we're not doing too badly. On the other hand, if we have a flat distribution of singular values, clearly truncating is just not a good idea anywhere. Or at least it doesn't seem to be.
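The truncation step itself can be sketched in a few lines of NumPy (again an illustrative sketch on a random state, not the talk's actual algorithm). By the Eckart-Young theorem, discarding the smallest singular values gives the best fixed-rank approximation in Frobenius norm, and the error is set entirely by what was discarded:

```python
import numpy as np

# Toy coefficient matrix C_ab for a normalised bipartite state
# |psi> = sum_ab C_ab |a>_A |b>_B.
rng = np.random.default_rng(0)
C = rng.normal(size=(16, 16))
C /= np.linalg.norm(C)

# Schmidt decomposition = SVD of the coefficient matrix.
U, s, Vh = np.linalg.svd(C, full_matrices=False)

# Keep only the chi largest singular values: chi is the bond dimension.
chi = 4
C_trunc = U[:, :chi] @ np.diag(s[:chi]) @ Vh[:chi, :]

# The truncation error equals the norm of the discarded singular values,
# so the two printed numbers agree.
err = np.linalg.norm(C - C_trunc)
print(err, np.sqrt(np.sum(s[chi:] ** 2)))
```

So whether this compression is any good depends entirely on how fast the singular values fall off, which is exactly the question taken up next.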
An easy single quantity that can kind of sum up the distribution is the von Neumann entropy of entanglement. In the notation I've been using, it looks like this: we sum over our singular values as S_E = -sum_s lambda_s^2 log lambda_s^2, and this is the von Neumann entanglement entropy. So if our bipartitioning is such that we actually have a product state, this is zero. On the other hand, if all singular values are equal and we have chi of them, the maximal case, then we find S_E = log chi. So the maximum entanglement we can keep across any partition is log chi if we have bond dimension chi. Okay, so then what's the problem? Well, why does it work well in 1D? That's really the question. It works well in 1D because of certain theorems, proofs even, about the behaviour of the entanglement entropy of gapped systems in 1D. I think it was already mentioned that the proofs go back to Hastings originally, and then I think there's a more recent improvement on the bound by Arad et al. But the idea is basically that in a one-dimensional gapped system the entanglement entropy is bounded, with a bound that grows with the correlation length of the system. Moreover, even in systems described by a CFT, so in critical systems in general, if you take a bipartitioning where you cut out a subsystem of length L in the middle, your entanglement only grows like log L. So there's a logarithmic violation of what's called the area law. Sorry, I should say what the area law is. In higher dimensions it's believed that in general there's an area law, which is sort of trivial in 1D: in 1D the "area" is just a finite number of points. If I cut my system up into pieces, then the area law involves just some number of points.
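The two limiting cases just mentioned, a product state and a maximally entangled state, can be checked directly from the singular values (a minimal sketch; the function name is mine):

```python
import numpy as np

def entanglement_entropy(s):
    """Von Neumann entropy S_E = -sum_i s_i^2 log s_i^2 from singular values s_i."""
    p = s ** 2
    p = p[p > 1e-15]          # drop numerically-zero Schmidt weights
    return -np.sum(p * np.log(p))

chi = 8
# Product state across the cut: a single singular value, so S_E = 0.
print(entanglement_entropy(np.array([1.0])))                  # 0.0
# Maximally entangled: chi equal singular values, so S_E = log(chi).
print(entanglement_entropy(np.full(chi, np.sqrt(1 / chi))))   # log(8) ~ 2.079
```

This is the sense in which bond dimension chi can hold at most log chi of entanglement across any cut.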
If I choose to look at the entanglement of a region with the rest, the "area" is just two points. In higher dimensions, if I'm looking at the entanglement between one region and another, the boundary grows like a length, and so our entanglement really does have an area to it. The point is that in 1D we have at worst this logarithmic growth, which means that even in critical systems we can get something out by careful extrapolation, carefully controlled numerics where we look at different bond dimensions. But in higher dimensions we have difficulties, because the entanglement tends to grow like a length, which means we need to keep more and more singular values; our matrices get bigger and we have problems. So, has it been proven in 2D for gapped systems? I don't think so, no. It has not, right. I mean, there's plenty of work, plenty of evidence, but there's no fundamental proof in the manner of Hastings' proof for 1D. So, Philippe and Didier were telling us about PEPS, and PEPS is explicitly constructed to capture the area law. And indeed it does very well in 2D systems; however, it has very complicated algorithms and quite a large computational overhead. It also doesn't use our pre-existing knowledge of 1D, which is what I'm trying to motivate here: that we can somehow feed our pre-existing knowledge of 1D systems in. On the other hand, there's also the approach of taking matrix product states themselves and trying to make a 2D system look like a 1D system, so the other way around, in fact. So we have some two-dimensional lattice. Now I see why Philippe used slides to show his pictures; it's easier to draw them. So I'll just draw some. Imagine that we had periodic boundary conditions here, so this comes off and joins back on here. The idea in this standard 2D MPS approach is that you form a snaking path, where you map your 2D system back to 1D by kind of going through the system like this.
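The snaking path is easy to write down explicitly. A minimal sketch of the site-ordering map (function and variable names are mine, for illustration), which also shows the problem discussed next, namely that vertical bonds become long range:

```python
def snake_index(x, y, Lx):
    """Map a 2D site (x, y) on an Lx-wide lattice to its position along
    the snaking 1D path: even rows left-to-right, odd rows right-to-left,
    so consecutive 1D indices are always nearest neighbours in 2D."""
    return y * Lx + (x if y % 2 == 0 else Lx - 1 - x)

# A vertical bond (x, y)-(x, y+1) is short range in 2D but becomes
# long range along the path: its 1D distance grows with the width Lx.
Lx, x, y = 6, 2, 0
d = abs(snake_index(x, y + 1, Lx) - snake_index(x, y, Lx))
print(d)  # 7 here; in general O(Lx)
```

That O(Lx) stretching of the bonds is one way to see why the required bond dimension ends up growing with the circumference.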
But what's difficult about that, and there is a lot of work doing this, in particular Steven White has done a great deal of extensive, successful numerical work this way, is that things that were originally short-range bonds in 2D have now become long range. And because they've gone long range, and because they also have to carry all of the entanglement between the two sides of a cut like this, we basically have a situation where the required bond dimension grows exponentially in the circumference of the cylinder. Moreover, at least in this particular manifestation, the translational invariance around the cylinder hasn't been used explicitly in the algorithm, so there's no use of the fact that the momentum is conserved. And the conservation of good quantum numbers is extremely useful in these matrix product state algorithms, because it makes them much more efficient. This path looks so weird that I'm surprised somebody decided to use it; why not use other curves? Ah, okay. So people have worked on these algorithms, and I believe there are versions where people use some other path, where they basically do a 45 degree rotation, and sometimes that has better convergence properties. So the path does matter, and you can choose different paths. Okay, so those are pre-existing techniques, but what we work on is a slightly different angle on this. I'm not proud of my drawing. Okay. So what have we done? What we do is in fact partly inspired by the truncated spectrum approach. We say, well, we think about having some Hamiltonian that is a sum of an H_solvable plus some interaction, but we happen to choose our H_solvable so that it is in fact a bunch of uncoupled 1D Hamiltonians, and then we choose an interaction here that is a relevant interchain interaction.
Relevant for the reasons that Gábor stressed on Monday, because that's when we expect the truncated spectrum approach to work. So this seems like a nice idea, maybe, but there are immediate issues with it. Issue one: continuum 1D theories, continuum 1D chains, will have an infinite spectrum, and on the other hand, if we work with lattice chains, then at least when we go to long chain lengths they will have a very large spectrum. Neither of these is good, because we're working with a numerical technique where we need the local physical Hilbert space dimension to be finite at the very least, and in fact the algorithms are more efficient if it's a small number. I guess that is really issues one and two, in fact, because issue two is the scaling of, for example, a DMRG algorithm, which is basically an eigensolver for matrix product states. For what's called a single-site algorithm, the leading cost goes something like d chi^3, but there are subleading terms that go something like d^2 chi^2. Usually people think about the chi^3 term as the leading one because chi is very large, but if the local Hilbert space dimension d is also getting large, then the d^2 term is also extremely important. So neither thing is great. The partial fixes we have are these. One, as before for the truncated spectrum approach, is that we put our theory on a finite-size system. (It's not as nice when you don't have the nice people to clean the board in between with the little squeegee, is it?) This introduces some discreteness into our spectrum, so we now have some energy level spacing, some spacing between momentum modes at least, that goes like some function of 1/R.
So then the next thing to do is to make sure you're using conserved quantities. We want to take advantage of the fact that this length is... in which direction? In the direction of... it's the length of the chain, sorry. That's a really useless drawing, isn't it, so let me actually draw it. Where do the one-dimensional chains live on this picture, horizontally or vertically? Vertically, sorry. So, at least the way I normally think of them, it's like this: I take these chains and I couple them, and each one of these is an H_1D, and each is of length R. And the thing about matrix product state methods is that we can work with many chains, N, but you can also work directly in the thermodynamic limit, with infinitely many chains. So then we also want to use conserved quantities, chain momentum for example. So we don't just have finite-length chains: we put them on a ring, so we have periodic boundary conditions. These things are really not just chains, they are actually rings, and we want to organize our chain spectra by their momentum. And there's one final heuristic, a kind of wishful hope, which is this: for massive field theories in finite size, the corrections to the masses of excitations go exponentially in the length. When you confine your QFT to a box, you expect the corrections to the masses to drop off exponentially. And so the hope is that for some quantities at least we can get away with using a smaller R: we can study this system on relatively small rings, infinitely many of them, say, and hope to get near the thermodynamic limit with a smaller R. The reason a smaller R is useful is that when we have our matrix product state and do a bipartitioning, this R is exactly the R that appears in the area law: R is the "area". So the hope is that the finite-size corrections stay small, and then, because S_E scales roughly like R, we can use a smaller R and avoid having a completely out-of-control bond dimension. Okay. Doesn't that depend on the dynamics, because you're now going to couple them to each other? Ultimately it's heuristic, right, it could be untrue. It's a hope; that's why I was trying to stress that it really is kind of a hope that this might help you. So really the proof is in the pudding: you have to try it, and it may or may not work. And it especially may not work for dynamics, which I'll talk about in a bit, when you actually quench: you change the Hamiltonian suddenly and watch what happens as things evolve. Ultimately, you've pumped a load of energy into the system with a global quench, say, and you know that eventually you are going to change things dramatically. Okay, so the ingredients were: we want some kind of solvable one-dimensional Hamiltonians, and we also need to know their matrix elements. So, to write the kind of Hamiltonian I'm thinking about more explicitly, it's something like this: I still have my sum over i of chain Hamiltonians, and then operators that I integrate along the chains, because these are defined in some continuum. And I've drawn an unpleasant sketch here, but the idea is that these chains are not just coupled at a single point: they're coupled to each other all the way around. Okay. So the first test system that we looked at, starting originally when Robert first came up with this idea in 2008 or 2009. So, the operator O_i, what is its dimension? Which operator, sorry?
O_i hat has a pretty small dimension, because we want the whole product O_i O_{i+1} to have dimension smaller than two, so that the interaction is relevant. For the Ising model, sigma (that's not a good sigma, that's not a good sigma, I apologise) has dimension one eighth, so the product's dimension is small. Okay. So, the Ising model: I think most people know the Ising model, but I'll write it down just to be explicit. The model on the lattice, at least, is this, and when we go to the continuum limit we know (actually I forgot the sign) that we end up with a free Majorana field with a mass, something like this. So we end up with two parameters: a velocity, the "speed of light", which we'll set to one, and a mass. Depending on the sign of the mass, if I draw it like this, there is an ordered phase for one sign and a disordered phase for the other (order... oh god, right, order and disorder, that's not good; some neurons have failed this morning, they may never come back). Right. So we have this, and then the idea is that we couple the chains with some interaction. So now I'll put in another index: j is my index along the chains, i is now my index between chains, and we couple them with some spin-spin interaction. Okay, so then we want to take the continuum limit of the 1D model, couple chains together with things like this, or the continuum operator version of it, and then see what happens. And there are basic checks that you can do. One of them is that you expect the gap to have some scaling form in delta, just like this, yes. Okay, so that would be one check. Another check: when I actually draw the 2D phase diagram, we have delta, we have J_perp, we have order and disorder, and there is a quantum phase transition here for some particular value of the coupling. If we're in the disordered phase, we start
with disordered chains and couple them together with increasingly strong interaction, and we expect a phase transition into the ordered phase. So those are two checks we can do: we can check for something like the scaling relation, and we can also check whether we see the quantum phase transition, and whether we get the right critical exponent, or something close to it. Okay, so put the screen on and I'll drag the boards out of the way. Thank you. Right, so here is a bunch of data that Robert took; he used a traditional DMRG rather than a matrix product state algorithm, actually, to do this (sorry, the names are all important here; I'm not on this one). Here he basically showed that he looked at a bunch of different parameters and showed that they all collapse onto a nice scaling curve, so we see this expected scaling behaviour. Maybe more interestingly, he also looked at what happens as you increase J_perp when you start on the disordered side: you start with, I think, mass minus one, and then he increased the coupling between the chains and monitored what happened to the gap. In this case you expect the gap to start closing, of course, and that's what he sees. So there are two different things here. One is the raw data out of his DMRG, for which I think this was 60 chains coupled together, and they weren't particularly long, I think R was 7 or something like that, so it wasn't too bad. But he already gets this value nu = 0.65 just with this raw data, without doing any extrapolation in size and without any kind of, this is a kind of one-loop RG improvement in the cutoff. And this is quite good agreement with the accepted exponent, which I think is 0.63. So there is clear evidence that this is capturing something about the 2D quantum phase transition, because of course for the 1D quantum phase transition nu is just one, so it's very far from that. And in fact, if you go to really small R, if you make R really small, then your
spectrum kind of separates and you can actually recapture the 1D phase transition: the critical coupling changes, obviously, but you'll also see a different critical exponent, and it will go back to nu = 1. So, when I came along and started working with Robert and basically started MPS-ifying everything, one of the first things we did was to start looking at the entanglement, because people were interested in the entanglement content of these systems, in particular people are interested in the entanglement content of 2D strongly correlated systems, and Didier the other day was talking about this conjecture due to Li and Haldane about entanglement spectra. So here, in fact, is a plot of some entanglement spectra. This side here is far from the transition; this is getting closer to the transition. And so what I can do is bipartition the cylinder, look at the distribution of the singular values, or look at the actual singular values themselves, and construct this fictitious Hamiltonian. So the idea due to Li and Haldane (sorry, that doesn't go any higher than that, does it) is that the reduced density matrix can be reformulated as some kind of exponential of an entanglement Hamiltonian, and then the conjecture was that in certain cases this entanglement Hamiltonian looks like the real Hamiltonian of an edge, although it depends on what you understand the theory of your edge to be. So in the case where we have some chains, we're far from criticality and they're somewhat weakly coupled, the entanglement we know is mostly area law, so there's short-range entanglement; basically it's maybe clear, or believable, that the edge of your system really is a single Ising chain. So in that case, what we see is actually quite a nice overlap: these open symbols here are actual edge spectra, actual spectra of an Ising chain, collected into the various different states, with the one-soliton states being the
Ramond sector, the lowest Ramond sector, and the two-soliton states being the lowest Neveu-Schwarz. And then, because you have access to the actual charges of the various states represented by the singular values, you can also see whether they're even or odd: if they're even, these singular values are basically Neveu-Schwarz-like; if they're odd, they're Ramond-like. And you can see that there's some pretty good agreement, in terms of at least structure and degeneracies. We also did some kind of perturbative calculation of the lowest entanglement band, I suppose you could call it. I should also point out that this has been scaled, because there's a kind of unknown constant here; it's been scaled so that the symbols overlap each other, basically. But there's also a perturbative calculation, which happens to have a sort of large subleading correction at zero momentum, which I haven't included here; that's why it disagrees there. But there's quite good agreement, at least with the perturbative calculation of this entanglement spectrum, which is another kind of check that we're getting the right kind of thing out. On the other hand, as you go closer to criticality, it's not clear that your entanglement is short range, and it's not clear what your edge theory really is; certainly there's not really an agreement anymore between these different spectra, but of course it's not clear that the edge Hamiltonian in this case is an Ising chain. So that was just about the entanglement spectrum. In that case it would be something to do with the spectrum of the operators on the boundary, probably, and that's known, so would you be able to calculate this? You'd be able to calculate some sort of edge CFT, looking at the CFT that represents the edge in some way, and possibly the 2+1 theory? Yeah, for the 2+1 theory, when you take operators to the boundary, depending on the boundary condition you get some boundary operators
which have certain scaling dimensions, and some of these dimensions are known, so maybe you can try to put them on this plot. That would be nice. Well, with the boundary bootstrap calculations that Pedro and Gliozzi did, I think they got it to many digits. But I think the entanglement Hamiltonian, the claim is, is like the thermal density matrix of the CFT in some Rindler spacetime, so I think it's not the spectrum in flat space, not even with the boundary, but something more complicated. I think you've tried to explain this to me before and I didn't understand, so maybe you could explain it to me again after the talk. Okay, so then the next thing we looked at was dynamics, quantum quenches. The idea is that we take some system in some initial state psi_0 and then we evolve it under some new Hamiltonian, some Hamiltonian for which it is not an eigenstate, effectively. You can imagine it being the case that there's some H_0, and then at t < 0 (this one does go a bit further up... in fact this is not what I'm trying to say at all). Okay, what I'm trying to say is that we have some Hamiltonian H_0 at t < 0, and then we have some new Hamiltonian for t >= 0. So we take some state which is an eigenstate of the original Hamiltonian, we evolve it in time under this new Hamiltonian, and we see what happens. So that's what's happened here, the easiest thing for us to do with this theory. What is your V? V is something I'm about to say in this case. In this case, what we've done is start with a load of uncoupled chains, Ising chains, and then we've suddenly turned on the spin-spin coupling, so we've gone from 0 to 0.1. And what we're tracking here, actually, because we know about the structure of the Ising chains, is the density of fermions on an Ising chain.
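As a point of comparison for this kind of observable: for a single decoupled Ising chain there is a standard free-fermion result for the fermion density created by a quench of the transverse field (not the interchain quench studied in the talk). A sketch, assuming the usual transverse-field Ising conventions, with all function names mine:

```python
import numpy as np

def bogoliubov_angle(k, h):
    """Bogoliubov angle of the transverse-field Ising chain at momentum k,
    field h: the angle of the vector (h - cos k, sin k)."""
    return np.arctan2(np.sin(k), h - np.cos(k))

def fermion_density_after_quench(h0, h1, R):
    """Mean fermion density on a ring of R sites after a quench h0 -> h1,
    using Neveu-Schwarz momenta k = pi(2n+1)/R.

    Each mode occupation is n_k = sin^2((theta_k(h0) - theta_k(h1)) / 2):
    the ground state of H(h0) is a finite density of Bogoliubov
    quasiparticles of H(h1)."""
    k = (2 * np.arange(R) + 1) * np.pi / R
    dtheta = bogoliubov_angle(k, h0) - bogoliubov_angle(k, h1)
    return np.mean(np.sin(dtheta / 2) ** 2)

print(fermion_density_after_quench(2.0, 2.0, 64))  # no quench: 0.0
print(fermion_density_after_quench(2.0, 1.5, 64))  # small but nonzero density
```

For a weak quench this density is small, which is the regime where the truncated-spectrum idea is expected to behave well.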
And because it's translation invariant as well, it's easy to just divide by R and find what you expect the density of fermions at a point to be; so this is like the density of fermions on chain i at some position as a function of time. Initially the chains are in their vacuum states, nothing happens; then you turn on this perturbation, you've injected energy into the system, and what you're seeing is these modes evolving. And I suppose what I should do is remind people about the quasiparticle picture, due to Calabrese and Cardy, which kind of says that you have your system at t = 0, and when you perform the quench the energy acts as a source of quasiparticles. So you have these quasiparticles moving off from different points, and when the light cones from the various quasiparticles intersect, that's when you start to build up correlations between those points. So there are a couple of timescales that you might think you'd see in this kind of plot. I've scaled it by t times delta, but the most important timescale in this plot, seeing as I'm running out of time, is really just that the fermions are on a ring: we create them, they travel around, and there's some time when the quasiparticles created from the same point will meet each other again at the other side of the ring. So that's plotted here, and basically everything seems to fall onto a universal curve, if you scale things correctly, as you go to larger and larger R. Why do you not already see hopping between rings, or is that a different timescale? You're coupling them? All the rings are coupled, yeah. So I mean, energy can move from one ring to another, it can, yeah. So obviously that's an effect as well, but somehow, as we go to larger R, at least this
is also an effect that will happen but I'm on an infinitely long system here as well so they're never going to hop from one end to the system to the other that way so at least one of the obvious timescales that I can talk about is this one so that's all is that okay and then you can see already this does significantly better than the perturbation theory and the reason I guess there is because this is actually a sum over the Navo Schwartz and the Ramond modes the perturbation theory at least at lowest order doesn't capture the Navo Schwartz the half integer modes at all they're just completely zero so this is already kind of capturing something they don't and you can see if you go to a weaker coupling of course the things agree again because these half integer modes just don't get excited very much okay so then something sort of looking at some pretty pictures okay so so this is a quench in the disordered phase actually looking at spin-spin correlations and so this picture on the this is not a laser pointer okay so this picture on the left shows correlations parallel to the chains as you go around this way this is correlations as you go along the system okay so here we see something that looks like a light cone right so I've subtracted off the t equals zero correlations just to make it more clear because the chains are I think this is length r equals 20 there are some correlations already at t equals zero and you kind of have to subtract them to sort of see this more clearly so the idea here was well the idea here is this is a disordered quench so okay so these quadruple's created they can move off as you said in both directions they just go around the ring but they also hop from chain to chain and you can see that quite clearly right on the right however there's an interesting well there's an interesting paper in 2016 by Cormos Calora Gabor and Pasquale in Nature Physics about looking at confinement real-time confinement in a 1D quantum icing model with a 
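As a toy illustration of that subtraction (this is not the actual data pipeline; the correlator, the speed v, and the decay lengths below are all made up for the sketch), here is a minimal numpy example of how removing the t = 0 correlations exposes the light-cone edge at r = v t:

```python
import numpy as np

# Toy correlator: correlations spread ballistically at an assumed speed v,
# on top of static t = 0 background correlations (to be subtracted).
v = 2.0                      # illustrative quasi-particle speed
r = np.arange(0, 21)         # separations
t = np.linspace(0, 8, 81)    # times after the quench

R, T = np.meshgrid(r, t)     # R[i, j] = r[j], T[i, j] = t[i]
static = np.exp(-R / 3.0)            # correlations already present at t = 0
spread = np.where(v * T >= R,        # Calabrese-Cardy picture: correlations
                  0.5 * np.exp(-R / 5.0),  # build up only inside the cone
                  0.0)
C = static + spread                  # what you would actually measure

dC = C - C[0]                        # subtract off the t = 0 correlations
# Outside the light cone (r > v t) the subtracted signal vanishes,
# so the cone edge r = v t becomes visible in dC.
```

With the static background removed, dC is exactly zero outside the cone and nonzero inside it, which is the effect being used to make the light cone visible in the plots.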
However, there's an interesting 2016 paper in Nature Physics, by Kormos, Collura, Takács and Calabrese, on real-time confinement in the 1D quantum Ising model with a longitudinal field. Before I get to that, let me take another question.

Question: one of the interesting things one could try is Kibble-Zurek scaling; that is, if you tune yourself close to, or through, the critical point itself (I don't know how close your J_perp gets to the actual critical point), then you might, in principle, try to use the critical exponents of the 2D, that is (2+1)-dimensional, Ising model to predict the behaviour, at least scaling-wise or qualitatively. That would definitely be interesting to do. In that case, though, we can only really go to very short times when we do a strong quench like that, because the truncation becomes important: we've put lots of energy into the system and into the higher modes. What you didn't see on the previous plot is that, if I'd broken it up into the modes I summed to get it, you'd see that higher and higher Neveu-Schwarz and Ramond modes are less and less populated; of course they become more and more populated for a stronger quench, and they get worse over time. So it's more difficult to study a quench that gets close to the critical point, or even crosses it, because you might expect that this is not a great basis: disordered Ising chains coupled together. You would think that ordered Ising chains coupled together would be the better basis once you've gone through. So it's tough.

Essentially I just haven't got time, so I'll draw a kind of cartoon. You have some Ising model, and normally, if you flip a spin, you create domain walls, and these domain walls can propagate independently of each other; so once you've done a spin flip you have deconfined excitations that just propagate independently. On the other hand, if you add a longitudinal field, that gives a penalty to either up or down: the longitudinal field gives a penalty proportional to the length L of the flipped domain, and that acts like a linear confining potential. What they did is study this, and they saw very nicely that they could destroy the light cone by quenching into a phase where they had the longitudinal field.

What we've done here is start instead with ordered chains, with no coupling between them, and then quench suddenly to the system where they're coupled, so excitations can hop between chains and so on. On the left, I haven't subtracted off the initial correlations, because they're quite strong and subtracting them gives junk; on the right are the correlations as you go along the system. What was certainly something light-cone-like in the earlier case doesn't really look light-cone-like any more: there is some growth, but it doesn't proceed outwards in light-cone fashion, it really bends back on itself. So I think there's evidence of confinement in this 2D system as well, of the picture that is very nice in 1D with a particular applied field. In this case there's no applied field: it's the fact that you have ordered Ising chains on either side of your chain, so when you flip a spin on your chain, the ordered chains on either side provide the linear confining potential. It's not an applied external field; it's really a property of the phase itself.
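A back-of-the-envelope version of the confinement energetics just described (the notation here is mine, not from the talk): two domain walls of mass m bound a flipped domain of length ℓ, and the flipped region pays a linear energy cost, whether from a true longitudinal field h or, in the coupled-chain case, from the effective field supplied by z ordered neighbouring chains with magnetisation σ̄:

```latex
E(\ell) \;\approx\; 2m \;+\; \chi\,\ell,
\qquad
\chi \;\sim\;
\begin{cases}
2h\bar{\sigma}, & \text{longitudinal field } h,\\[2pt]
2zJ_\perp\bar{\sigma}^2, & z \text{ ordered neighbouring chains}.
\end{cases}
```

Because E(ℓ) grows without bound in ℓ, the two domain walls cannot separate freely; they bind into meson-like pairs, which is why the light cone fails to develop in either case.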
OK, so this is the very last part of the talk now. I argued that part of the point here was: how can we use our knowledge of 1D to understand 2D? One way is by looking at these modes on the Ising chain and understanding what's happening in a quench in terms of them. Another way is that we may have other ways of truncating our spectra. We know something about the 1D theory, and we also know something about its matrix elements if we're going to study it in this way, so maybe there are other ways of truncating rather than just by energy. In this particular case we've taken lattice Heisenberg chains, chosen because we can compare directly to traditional 2D DMRG results for the numbers. First of all, the anisotropic Heisenberg model: the chains are coupled more strongly internally than to each other, with the J along the chains five times stronger than the J between them. The nice thing about these Heisenberg chains is that their spectra can be organised by spinon number, so there are particle numbers by which we can organise the spectrum. What we've done here is keep just the two-spinon states, then the two- plus four-spinon states, and calculated the ground-state energy as we go up in size; and you can see that adding the four-spinon states really doesn't do very much: the two-spinon states already capture a huge amount of the energy. (I think you produced this data, Robert; the reason it doesn't go further up here is that 2^8 was already too large to keep all the states, and 2^12 certainly was.) The point is that you really are capturing a huge amount of the ground-state energy already at the two-spinon level. So there's an idea here that maybe we can learn how to use the structure of our 1D theories to truncate in a better way.
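To make the idea concrete, here is a small self-contained sketch (with a fabricated Hamiltonian, not the Heisenberg data from the talk) of truncating by a quantum-number label, standing in for spinon number, rather than by energy; nested truncations give variationally ordered ground-state energies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian in a basis labelled by a quantum number (a stand-in for
# spinon number): diagonal sector energies plus random couplings.
n_spinons = np.repeat([0, 2, 4, 6], [1, 4, 8, 16])   # made-up sector sizes
dim = len(n_spinons)
H = rng.normal(size=(dim, dim))
H = 0.5 * (H + H.T)                       # make it symmetric
H += np.diag(2.0 * n_spinons)             # higher sectors sit higher in energy

def ground_energy(max_spinons):
    """Diagonalise H projected onto sectors with <= max_spinons spinons."""
    keep = np.flatnonzero(n_spinons <= max_spinons)
    return np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]

e2, e4, e_full = ground_energy(2), ground_energy(4), ground_energy(6)
# Nested subspaces give variationally ordered estimates: e_full <= e4 <= e2,
# and how quickly e2 approaches e_full depends on how the off-diagonal
# matrix elements fall off with the quantum number.
```

The point of the sketch is only the structure: if the matrix elements connecting low- and high-spinon sectors fall off quickly, the low-spinon truncation already lands close to the full answer, which is what the Heisenberg results suggest.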
Then, even going to the isotropic limit: here, for N = 4 and N = 8, we actually have all the states as well, and this is isotropic, so J = J_perp. We've taken our two-spinon states, then the two plus four, and then all the states. You can see that at N = 8 the two- and four-spinon truncation hasn't captured as much of the energy, but there's a surprising amount of accuracy in this number already, just from keeping the two- and four-spinon states. I guess that's something to do with the way the matrix elements fall off with spinon number in the Heisenberg model. So, for particular models, depending on what matrix elements you're coupling in, truncations that aren't energy-based but are based on matrix elements, or on various quantum numbers, could help you, even in the isotropic case.

And that's really the conclusion (I didn't make a conclusion slide). The idea is that we think we can learn something about 2D, in some cases, by studying these systems of coupled chains. There's also some software available that you can play with; all the bugs are my fault, except that there are some built-in models, and the bugs in those are Robert's fault. It's written in such a way that you can essentially drop in the spectrum and matrix elements of your favourite models, join them together, and try to study these things both in time evolution and in their static properties. You can take advantage of abelian symmetries at the moment; what would be great in the future is to take advantage of non-abelian symmetries, which would massively improve the algorithm. OK, well, thank you very much.

Question: it's more of a comment. It seems to me that, using this method, you can probably extract lots of quantities about the 3D Ising model compactified on a circle, which is exactly what you're doing: universal quantities which are not currently known, and which could be used for non-trivial cross-checks with the bootstrap.
There are some very non-trivial consistency conditions that follow from conformal invariance, which tell you, more or less, what will happen when you put the Ising model on the circle. Those conditions are in principle known, but not much has been done with them, because there was little contact with experiment: nothing could measure them. But perhaps with this setup you have, which is very powerful, one can start some interplay between the bootstrap and this calculation. Certainly there are the critical exponents, which you measure; but that's just one quantity, and you more or less get it. There are many other quantities you could measure, which we don't know at all, and which could also be compared in a very non-trivial way. Well, that would be great to do; now that the code is somewhat more robust than it was, I think there's a good chance of actually managing it.

Question: could you say something about how you detect this phase transition? Because naively I might imagine the disordered-phase Hilbert space and the ordered-phase Hilbert space are pretty different; I mean, they're different sectors of the Hilbert space, right? So, if you look at the raw data here, what you can do is get to some finite gap; you can't get the gap to close with this. In fact, in other data, where you go past this point, what happens is that it eventually becomes inaccurate and tails off: you get a finite gap, and it won't close. That's why Robert, I guess, improved things with this sort of RG in terms of the cutoff.

You've done lots of finite-size scaling? Yes, so we did finite-size scaling as well: scaling of the real gap, and also some finite-size scaling of what's called the entanglement gap, the gap in the entanglement spectrum between here and here; there's a kind of conjecture, as far as I'm aware, for how that should scale with system size.
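As a sketch of the kind of finite-size analysis being described (the numbers below are fabricated, and the actual scaling forms used for the real gap and the entanglement gap may well differ), one simple step is extrapolating a gap to the thermodynamic limit by fitting against 1/L:

```python
import numpy as np

# Suppose we have gaps Delta(L) for several system widths L; near the
# transition one often assumes Delta(L) ~ Delta_inf + a / L and reads off
# the L -> infinity intercept from a fit in 1/L.
L = np.array([4, 6, 8, 12, 16])
delta_inf, a = 0.30, 1.20                 # fabricated inputs for the demo
gaps = delta_inf + a / L                  # stand-in for measured gap data

slope, intercept = np.polyfit(1.0 / L, gaps, 1)
# intercept estimates the thermodynamic-limit gap Delta_inf; a vanishing
# intercept as the coupling is tuned signals the transition point.
```

A crossing or collapse analysis of L·Δ(L) across couplings works in the same spirit, and can be applied to the entanglement gap as well as the real gap.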
You can detect the transition point very well with both, and they agree very well: the finite-size scaling of the real gap and that of the entanglement gap. That's kind of interesting, because the entanglement gap is a property of the ground-state wavefunction alone, whereas for the real gap you have to calculate the ground state and the first excited state. So yes, I think you can definitely see the transition here, but you can't actually drive your system into it; you get at it through scaling.

Have you tried doing the entanglement analysis on Heisenberg? Because I think there are some subtleties there. Not yet, actually; I haven't looked at the entanglement spectrum carefully at all since then. No further questions, so let's thank Andrew again. Thanks.