Thank you very much, Marcelo, for this introduction. Also, thank you to all of you for coming rather late in the day to this lecture. When Marcelo asked me to give a short presentation, I immediately said yes, and the reason is that I'm a big, big fan of the ICTP. It's really an amazing institution — it has been working for so many years, and it's as important as ever. So I was very happy to be invited; thank you very much. Now, the task I was given was to explain to you something about the Kardar-Parisi-Zhang equation, which doesn't seem to connect very much with the rest of the program, at least as far as I could tell from the various titles. So I'd like to emphasize that there is a sort of underground connection between what people just call the KPZ equation — the Kardar-Parisi-Zhang equation — and quantum spin chains, and this is what I'm trying to explain in these lectures. So my first slide will just tell you — I imagine that some of you have never seen the KPZ equation before at all — very briefly what it is and why it was invented. Of course the names are very famous, but it's one very specific work of theirs. I will be looking at spin chains, which are of course one-dimensional, and correspondingly the KPZ equation will be one-dimensional. You can write it down in any dimension, and of course that's an interesting topic, but I'm working here with one dimension only. So when I talk about KPZ, I always mean the one-dimensional KPZ equation. And the original question which Kardar, Parisi, and Zhang asked was a very simple physical question. Imagine that you have a thermodynamic system which has a stable and a metastable state, and you prepare the system with an interface separating the stable and the metastable phase.
So like, I don't know, fluid and supersaturated vapor — that would be one example; I will have a more physical example a little bit later on. And now the question which they asked is: I impose this interface — is this interface moving, and what kind of fluctuations does it have? The main observation is that at the interface you can very easily nucleate a metastable piece into a stable one. If you are far away from the interface, the system is metastable, so it will stay in that particular state essentially forever. But at the interface you can very easily make the transition. So what you will see is a net motion of the phase boundary: there is a net velocity, which means that the stable phase takes over the metastable one. And for that motion they wanted to write down an equation. To simplify things — you could think of droplets or more complicated geometries — let's just imagine that this interface is given as the graph of a function. So this is my function h; there will be various h's during this talk, but here is my h, a function of x and of time, one-dimensional x, one-dimensional time. And you want to write down an equation for how this interface evolves. The first idea is the conventional one: you look at the time change of h — that's an evolution equation. Then you say: look, at the interface you will nucleate randomly, and more or less independently, throughout space and time. So I put here a white noise, which is space-time white noise — it's a function of x and t, and it's white in both variables. And then there must be surface tension — there must be a mechanism which ensures that this interface does not immediately become extremely rough.
And this is usually embodied by putting here the Laplacian. If I leave out the nonlinearity, you see, you get a linear Gaussian equation, which you can solve, and which in many cases, surprisingly, works extremely well. But what they realized is that if you work in this one-dimensional context, and if you have systematic growth, this is not what works. You really have to put something more here — well, you could put here a linear term, just ∂x of h, which of course gives you a motion. But they realized that it's important to have this nonlinearity. One can argue — and I don't want to do this here — that in order to include the systematic motion, you must have this very particular type of nonlinearity, quadratic in ∂x h. So it's a stochastic partial differential equation. Now, this has nothing to do with quantum mechanics, and of course it will never have anything to do with quantum mechanics. But the surprising thing is that there is a more mathematical identity: quantities which appear in quantum spin chains can actually be predicted to fall into the same universality class as this equation. This is what I'm trying to explain. I'm not claiming that there is any direct physical connection; it's an indirect theoretical connection. So what I would like to explain to you is how to connect this kind of equation to quantum spin chains, okay? Now, I divided my lecture into two parts, according to the two hours. The first hour will be concerned with Euclidean quantum mechanics, so I'm going to look at the propagator e^{-tH}, where t is real and H is our quantum Hamiltonian.
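In formulas, the equation just described is ∂t h = ν ∂x²h + (λ/2)(∂x h)² + √D ξ, with ξ space-time white noise. Here is a minimal finite-difference sketch; the coefficients ν, λ, D and all numerical parameters are my own illustrative choices, and the continuum equation is singular, so a naive discretization like this is only a cartoon, not a faithful solution scheme.

```python
import numpy as np

# Euler-Maruyama cartoon of the 1D KPZ equation
#   dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 + sqrt(D) * xi(x, t)
# on a periodic lattice.  Parameters are illustrative, not from the lecture.
rng = np.random.default_rng(0)
L, dx, dt = 256, 1.0, 0.01
nu, lam, D = 1.0, 1.0, 1.0
h = np.zeros(L)

def step(h):
    lap = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2       # surface tension
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)           # slope
    noise = rng.normal(size=L) * np.sqrt(D * dt / dx)            # random nucleation
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

for _ in range(1000):
    h = step(h)
print("mean height:", h.mean())  # the nonlinearity produces a net growth velocity
```

The quadratic term is nonnegative, so the mean height drifts upward — that is the "net motion" of the phase boundary described above.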
And then there will be the second lecture, which will be concerned with physical time — so these will be two very different connections — where I look at the unitary group e^{-itH}, and I will connect some properties of the quantum dynamics to the KPZ equation, okay? Now, the literature — that's a little bit difficult. You see, it's a subject which has produced an enormous amount of interest. Four years ago, Martin Hairer received the Fields Medal in mathematics, and the title of one of his award-winning papers was "Solving the KPZ equation". So if you're interested in very abstract and pure mathematics under the heading of KPZ, you can go there. Then at the other extreme there are the experiments on the KPZ equation, which I will also mention briefly. And in between there is lots of interesting stuff, and things are still going on. So I put here two titles — two lecture notes which I've written — if you want some sort of introduction, or if you want to look up more literature; but there's plenty, and there are books and all kinds of things. Okay, so now let's start with the first part: I want to connect the KPZ equation to a Euclidean quantum spin chain. And please ask me any questions — I'm not completely sure what exactly your background is. I myself have been working for many years in statistical physics, mostly on non-equilibrium phenomena, and also a little bit in quantum mechanics. So just ask me if there are questions. Okay, so first I have to describe to you the physics related to this first linkage. I'm going to look at equilibrium shapes of 3D crystals — this is a pure problem of thermal equilibrium.
And if you want a simple physical realization, imagine that you have some substrate and you put a droplet of some substance on top of it. It's a solid — so we are below the melting temperature. And what you will see, at least close to melting, is basically a droplet with a particular rounded shape. It's then an interesting equilibrium problem to minimize the surface free energy, so you can actually compute the shape of that droplet. Okay, now, as I lower the temperature, there is one interesting physical phenomenon, namely that at some particular temperature the crystal will no longer be a smooth rounded droplet, but will actually develop completely flat pieces. These are usually called facets. A facet is a piece of your crystal which is flat on the atomic scale. Of course, once in a while you might have a little defect — an atom missing here, an excess atom somewhere else — but otherwise it's really completely flat. So if you ask what the height fluctuations on a facet are, they are of order one. It's like an absolutely perfect ground state, if you want, with a few little defects on top of it, okay? Now, in the rounded piece — I'm interested in the shape fluctuations; at the moment I'm discussing just the macroscopic shape, but eventually I want to do statistical mechanics, which means I want to say something about the shape fluctuations. And one can argue fairly convincingly that what you see in the rounded piece is quite universal.
What you will see is basically a simple Gaussian fluctuation theory, and when you work out the covariance, you discover that it is the inverse of the Laplacian. So it's what people call a massless Gaussian field, which has logarithmic fluctuations — it doesn't fluctuate very much. Of course there are coefficients related to the curvature, but let me leave those out, okay? Now there is one further interesting prediction. You could ask yourself: here I have my facet and here I have the rounded piece — how does the rounded piece merge into the facet? This was worked out in the '70s, and people usually call it PT — the Pokrovsky-Talapov law. You might think the profile enters the facet just quadratically — that looks like the nicest thing — but in fact they argued that it enters with the exponent three halves: it goes as r^{3/2}. You will see this exponent coming up over and over during the talk, okay? So that's what we know about the macroscopic shape, and now, in order to connect to the KPZ equation, I want to study a finer fluctuation property. Namely: what are the fluctuations of the facet edge? You see, if I look at this crystal from the top, I see the facet, and then I have level lines which describe the rounded part of the crystal — there will be one line one step below, and so on: the level lines of this height profile. And this means that if there is a facet, there is a last level line which borders this facet, and this line of course will fluctuate — there's no reason why it should be completely fixed. And so the question is: what are the facet edge fluctuations?
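In formulas — writing r for the distance from the facet edge and c for a non-universal coefficient, a notation I am choosing here for readability — the Pokrovsky-Talapov law for the equilibrium height profile, and the square-root vanishing of the step (level-line) density that follows from it by differentiation, read:

```latex
h(r) \;\simeq\; h_{\mathrm{facet}} - c\, r^{3/2},
\qquad
\rho(r) \;\sim\; |h'(r)| \;\sim\; r^{1/2},
\qquad r \downarrow 0 .
```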
That's the more refined question which I'm asking, all right? And now let me show you a picture so that you can visualize this. So let's imagine that I'm looking at an Ising model in the lattice gas language: I have atoms with an attractive interaction, I'm at very, very low temperature, and I have N³ atoms, where N is an integer, reasonably large. Now you ask yourself: what is the state of minimal energy? Well, everybody knows — it's a perfect cube, okay? But now suppose I give you N³ atoms and I take away, say, 10,000 — let's say N is itself 1000, and I just take away 10,000 atoms, okay? So it's a small fraction of the total volume. And then you ask yourself: under the equilibrium measure, what kind of shape will I see? This is a computer simulation of what you will see. So you should think of this as one corner of the cube. These are the missing atoms; they have to arrange themselves somehow, and they arrange themselves in such a way that there are three perfect facets and a rounded piece over here, which of course is fluctuating, because it must have the Gaussian fluctuations, all right? And now you can also see the line structure — this is an enlargement of this piece over here. Here is the facet edge, okay? This is the level of the facet, and one level below there will be another line, then the next line, and so on. So you get a line ensemble, and this line ensemble has a particular statistics, and if I understood the statistics of this line ensemble, then I could actually compute the statistics of the facet edge, all right? So this is what I'm thinking of. Now, if you have a little bit of imagination — excuse me? Yes? Ah, the previous slide, yes, I can do that.
What is this symbol? — Oh yes, sorry, I should have said that. So this is the facet edge, and this coordinate is simply the distance from the edge. And what I'm saying is that the way the height enters into the flat part goes with a power law — it goes as r to the three halves. You're totally right, I should have defined this more properly, okay? So it's this distance here; if the profile were parabolic, this would be r squared, but instead it goes with the power three halves. Okay, now let's see, where were we? Okay, so now, if you have a little bit of imagination, and if you have seen this relationship between quantum spin chains and 2D statistical mechanics, you can imagine that here I already have the world lines of fermions, so maybe there is a rather close connection of my problem to, let's say, interacting fermions in one dimension, okay? So let's try to make this a little more precise. There is a famous model among the people who study crystal surfaces, called the TLK or terrace-ledge-kink model, which is similar to what I had before, but I want to explain it a little more precisely. So you imagine that you have a crystal surface which is perfectly flat, and now you cut it under a small angle — this is called a vicinal surface. When you cut under a small angle, and then you have thermal fluctuations, you will see the terraces, the ledges, and the kinks. Here I've just shown a cross cut, which shows the steps — the various ledges. And here I view it from the top.
So viewed from the top, here I have the terraces at certain height levels, bordered by the ledges — this is one ledge over here, that's the next one, and so on. And then I have the kinks, which just means that the ledges are not straight lines: once in a while they go in one direction, and then maybe back in the other direction. That's the terrace-ledge-kink model. And now you want to connect this to quantum mechanics. Well, you can do this: you think of the transfer matrix operating along this direction, and you write down the XXZ model. Here, this is the hopping term, this would be the repulsion or attraction between neighboring ledges, and here is a parameter which controls the ledge density. Anyway, if you write down this quantum Hamiltonian and then look at e^{-τH}, and you want the propagator between some initial configuration x and some final configuration y, then you can write this propagator — basically by expanding out the exponential — as a sum over weighted world lines. I will say something about the weight, but the world lines simply have to start at x, over here, and at time τ the final configuration must be y, and in between they must be connected by world lines according to a particular statistics, which of course encodes the properties of the quantum Hamiltonian — or, put differently, the quantum Hamiltonian encodes the statistics of those world lines, right? So this is one example of the well-known map between 1D quantum spin chains and two-dimensional statistical mechanics; just one particular instance.
All right, and now you have to read off a little what these world lines are doing. The first thing you observe is that you have a fermionic constraint: because of the way the hopping term is defined, I cannot have two lines at the same site. So these lines are constrained not to intersect — it's a line ensemble of non-intersecting lines, which of course is an enormous constraint. If I don't impose the constraint, then I just have simple random walks, symmetrically jumping up and down — that's this term over here — which just have to find their way from x to the corresponding y. But when you read this term carefully, you see that it encodes the constraint, and that's a huge constraint: it modifies the statistics of the lines entirely, because now they cannot intersect. They would violate the fermionic exclusion if they did. And then there is an extra weight, which comes from this term, which is diagonal in the σ^z representation. So you see — I'm working here in the σ^z basis; the hopping term moves the occupations in the fermionic language, or in the lattice gas language, whereas this term, in the σ^z basis, is just a multiplication operator. And so if I put this weight in the exponential, what I get is a sum over all sites and over all lines: there is a nearest-neighbor interaction between lines with some weight δ, and there is a similar weight for the lines corresponding to the magnetic field.
And so you see that if I put δ larger than zero, this heavily favors lines sitting together; if I put δ less than zero, it is repulsive, so the lines want to sit apart. And the μ which I have controls the ledge density, okay? Yes? Oh, sorry, this was on the previous slide — I slightly skipped over that. In order to have a simple notation, I call η_m the m-th line. It just means the indicator function which is one along this blue line and zero otherwise — it indicates one of those world lines of the fermions, okay? So the sum over m simply means that I'm summing over all possible lines in this line ensemble. And yes, it's always in the basis where σ^z is diagonal. So you see, if you look at the local term: if this is, let's say, positive, then these two lines like to stay together, and how much weight they get depends on how long they actually stay together. You see, as soon as they separate, this term is simply switched off. So it measures how long they stay together, and if I give an extra weight to that, it means I'm favoring lines sitting together. More questions? Okay, good — yes? No, no, the domain walls are still coming. This just represents the fermionic world lines in Euclidean time, right? The hopping — that's right, the jumps just represent the hopping, and the Hamiltonian tells me what kind of weight I should choose, all right? So let's see whether I can do this. I put this up here already. And now let's see how we can produce a facet. You see, I have told you about this e^{-τH}, and I can think of it as a classical partition function with fixed boundary conditions.
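As a concrete, purely illustrative version of this map, here is a sketch that builds an XXZ-type Hamiltonian in the σ^z occupation basis on a tiny periodic chain and evaluates the Euclidean propagator e^{-τH}. The signs and normalizations of the hopping, interaction (δ), and field (μ) terms are my own conventions, not necessarily the ones on the slide.

```python
import numpy as np
from functools import reduce

# Sketch: XXZ-type chain in the sigma^z occupation basis, and the Euclidean
# propagator <x| e^{-tau H} |x>, whose expansion is a sum over weighted
# non-intersecting world lines.  Couplings and signs are illustrative.
L = 6
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^+
sm = sp.T                                  # sigma^-
num = np.array([[1.0, 0.0], [0.0, 0.0]])   # occupation number of one site
I2 = np.eye(2)

def chain_op(site_ops):
    """Tensor product over the chain with given operators at given sites."""
    return reduce(np.kron, [site_ops.get(j, I2) for j in range(L)])

delta, mu = 0.5, 0.2
H = np.zeros((2**L, 2**L))
for j in range(L):
    k = (j + 1) % L
    H -= 0.5 * (chain_op({j: sp, k: sm}) + chain_op({j: sm, k: sp}))  # hopping
    H -= delta * chain_op({j: num, k: num})  # neighboring-ledge interaction
    H -= mu * chain_op({j: num})             # controls the ledge density

def basis_index(occ):
    """Index of a sigma^z product state; occupied = first basis vector."""
    return int("".join("0" if o else "1" for o in occ), 2)

tau = 1.0
w, V = np.linalg.eigh(H)                  # H is real symmetric
P = V @ np.diag(np.exp(-tau * w)) @ V.T   # Euclidean propagator e^{-tau H}
x = basis_index([1, 1, 1, 0, 0, 0])       # a domain-wall-like configuration
print(P[x, x])  # positive weight: sum over world-line configurations x -> x
```

Because the off-diagonal (hopping) entries of H are non-positive, every matrix element of e^{-τH} is nonnegative — which is exactly why the world-line expansion can be read as a classical probability weight.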
Now, you see, if I just prepare an equilibrium state — say a thermal state of my XXZ model, or maybe a ground state — then it will be spatially homogeneous, and if it's spatially homogeneous, I will not see any facets. So if I want to see facets, I have to impose some constraint. The simplest constraint — just as in the experiment, or in the numerical experiment — is that you have a fixed number of atoms. You know, I put my droplet on the substrate and it has a fixed number of atoms. In the Ising model I removed 10,000 atoms from my Ising lattice gas — that fixes the number of atoms. This of course produces a constraint. I put it up there as a formula, but for what I'm going to do — I could work with this constraint too, but it's not so well understood. So I'm imposing the constraint by something else, which was mentioned already, namely by what is called the domain wall boundary condition. So let me explain this carefully. You see, this is your quantum state at time t equal to zero, and now it evolves in Euclidean time. At time zero I simply impose a domain wall state. This means that on my left half lattice, going to minus infinity, every site is occupied — occupation is always in terms of the σ^z basis, right? It's spin up if you want, or spin down, depending on what your convention is — whereas the other half lattice is simply empty in my language, but you can also think of spin minus one. So this is the domain wall boundary condition, and now I impose exactly the same domain wall boundary condition at the final time, okay?
So these lines start off with a perfect domain wall, then they have some freedom to move, but they are constrained to end up again in the domain wall at time τ. And now the question is: what am I going to see? Well, you see, if I'm down here, everything is constrained — these poor lines can do nothing else than go straight from here to here; there's so much constraint that there's just no way. And up here there are no lines at all. So out here I will clearly see the facet. But if I look in the middle — for instance at this top line — you see, it's constrained by the lower one, but it can still move up. There is something people call entropic repulsion. Entropic repulsion just means that geometrically there is not enough space, so the line has to move up. And how much does it move up? Well, it moves up roughly on the scale of the system size — just by order τ, okay? And so if I now look on the very large scale, what I'm going to see is roughly a shape which looks like this. Out here I have the perfect facet — these lines just go straight from here to here. Over here it's all empty; that's the second facet. And in between I have a disordered region where the lines try to interpolate between the two. And in here, if I really analyzed it more carefully, I would see the Gaussian fluctuations, as promised. Okay, so that's what I'm going to see on the macroscopic scale. Now, what is the facet edge in this case? Well, that's easy: it's just the top line — by construction. Of course there's a mirror image as well, but anyway: it's the top line, which borders the facet.
Of course, I could make the same construction for the other facet, but let's concentrate on this one, okay? So now you can phrase this as a cleaner, more mathematical question: I have this line ensemble with the given rules, and I would like to understand the statistics of the top line. All right? Now, we know one thing — this is Pokrovsky-Talapov. You see, Pokrovsky-Talapov tells you that there is a 3/2 power law for the height profile. But here we are looking at the line density, so I have to take a derivative, which gives a square root. So what Pokrovsky-Talapov says is: out here I know the density is one, and out here it is zero — I'm simply plotting the line density, let's say along this line over here, okay? So here it's one and here it's zero. Now, Pokrovsky-Talapov tells you that it actually vanishes as a square root, and then there's another square-root behavior over here. So it doesn't go as smoothly as you might think — it starts off with an infinite slope. Okay, now here comes an exercise for the students, which I think is actually very, very informative, and you should try to do it. Here's the problem: take your standard lattice free fermions, the ones which you know from your lectures, okay? On each lattice site you may have at most one fermion, with a nearest-neighbor hopping term, and you want to look at the ground state of these free fermions. Now that, of course, is just a filled Fermi sea — you know this very well. But now what I want to do is put these free fermions in a linear potential. Okay, if I do the classical calculation, that's just the barometric formula.
But now I want to put the fermions at zero temperature in an external linear potential. That's a very good exercise. The result of this exercise is that you will see this arc cosine profile — this particular formula for the average density, right? Okay, let's assume that we have done this exercise, and now let me explain a little what the miracle is. So let me first say one more word about this. You see, what we are doing here is: the domain wall state ψ is all ones to the left and all zeros to the right, and the partition function you can write as ⟨ψ| e^{-τH} |ψ⟩, where H is the XXZ Hamiltonian. Now, when you look at this object, it should remind you of a return probability if you do it in real time. Rather than Euclidean time, you could imagine replacing τ here by it, and then what you find is: you start with the domain wall and ask yourself for the amplitude that at time t you will see the domain wall again — which is an analytic continuation of this quantity. Now this is something which has been studied by Jean-Marie Stéphan, and I'm not going to go into it; I just want to point out that there are such relations. Anyway, here I want to look at this Euclidean problem, and let's study the simplest case, which is just putting δ equal to zero. Then I have non-interacting fermions, and you would say: oh, but this is something I can solve, right? Well, it's not completely trivial, because now we don't have any translation invariance. There is no translation invariance in x — if you go back to this picture, you see I have a macroscopic profile, so there's no translation invariance in x — and there is also no translation invariance in t.
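The exercise can also be done numerically, which is a good check on the formula. Here is a hedged sketch — lattice size, potential slope, and particle number are arbitrary illustrative choices of mine: diagonalize the one-particle Hamiltonian (nearest-neighbor hopping plus a linear ramp) and fill the lowest levels to get the zero-temperature density.

```python
import numpy as np

# Spinless free fermions on a finite lattice with nearest-neighbour hopping
# in an external linear potential, at zero temperature.
L, slope, Nf = 200, 0.05, 100   # sites, potential slope, fermion number

# Single-particle Hamiltonian: hopping plus linear ramp
H1 = np.zeros((L, L))
for j in range(L - 1):
    H1[j, j + 1] = H1[j + 1, j] = -1.0
H1 += np.diag(slope * np.arange(L))

eps, phi = np.linalg.eigh(H1)
# Ground state = fill the Nf lowest single-particle levels (Fermi sea);
# the average density is the sum of the squared filled orbitals.
density = (phi[:, :Nf] ** 2).sum(axis=1)
print(density[0], density[-1])  # ~1 deep in the well, ~0 beyond the edge
```

Plotting `density` shows exactly the structure discussed above: density one deep in the potential well, density zero far out, and a profile in between that vanishes with infinite slope at the edge.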
Just look at the picture, right? There is some profile which, because of the boundary conditions, must have the domain wall at the ends, but in the middle it does something. So I have to do free fermions in a somewhat unusual situation. Usually with free fermions you just take the Fourier transform, compute the modes, and somehow you get the result — but here you cannot do that anymore. And so this is why it's not such a simple problem; in fact, it's a difficult problem. Anyway, my claim is that this facet edge is related to the KPZ equation, and this is what I want to explain to you next. This is the miracle which happens — but it happens only at δ equal to zero, the free fermion point of your XXZ model. So it's a real miracle: if I just showed it to you, you wouldn't see it. But once it's explained, I think you can understand what the explanation is. To actually really establish it requires more than I can do in such a lecture, but I just wanted to give you an idea of why the heck this line should have anything to do with the KPZ equation. So here comes the connection. Okay, now, what is the somewhat surprising thing? The somewhat surprising thing is that if I'm only interested in the statistics of the top line at some given time t — well, what I would have to do, if I wanted a numerical simulation, is run a Monte Carlo, produce this equilibrium state, sample, and get the statistics. However, at δ equal to zero there is another way of doing it: rather than doing the Monte Carlo, I simulate a classical particle system, which I'm going to explain to you — the polynuclear growth model.
And that particular model is manufactured in such a way that it exactly reproduces the statistics of the top line. Okay, so I'll first tell you about this growth model, and once I have it, I will state that the two statistics are the same. This growth model is quite fun by itself. It is of course a classical model; it's called polynuclear growth, for reasons which you will see in a second, and it was invented quite some time ago. So here is a very simple dynamics. Imagine you have a height function, let's say over the real line, whose height differences are always plus or minus one: the height either goes up by one or down by one. And now I want to tell you how such functions evolve in time, and there are basically three rules, plus the nucleation, okay? If somewhere in this height function you see a down kink — which is this one here — then it moves with velocity one. If you see an up kink, it moves with velocity minus one. And of course it can happen that a down kink and an up kink are next to each other, so they are on a collision course; when they collide, they simply disappear — they annihilate each other. So you see, this configuration turns into something which is completely flat. Now, if I only had these three rules and started with some rough surface, then if I wait long enough, everything will eventually coalesce and I end up with a completely uninteresting flat profile. But the system is excited, and it is excited by nucleation — this is where the growth comes from.
Nucleation, at random space-time points, of a pair of up and down kinks. And once such a pair is created, by these dynamical rules the two kinks just separate. So let me repeat this construction in space-time, because that gives another way of looking at it which is quite simple. So here is space-time; here's the light cone, with slopes plus and minus one. And you have these nucleation events. Here I wanted to produce something which has this curved shape, and therefore nucleation happens only in the forward light cone. So I have this forward light cone, and then I mark these red dots: these are the space-time points of nucleation. They are completely random, Poisson distributed with some uniform density, let's say two. So these are the red dots. At time zero, nothing happens yet. Then here I have a first nucleation event, and it creates such a little step in the height. And then the left and the right edges move outwards. Now it might happen that there's another nucleation event over here, and here they collide, et cetera. And from the picture I can read off what the height profile is. Here it would be zero; then there's the first level line, so it jumps from zero to one; and here it goes from one to two. And if I want to know the height at a given time, I take this cross-cut and read off the height profile, right? So it's a simple model of charges, if you want, which have plus and minus velocities, which are created in pairs and annihilated in pairs. All right. And now comes the claim.
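As a concrete illustration of this construction (my own sketch, not part of the lecture; the function name and parameter choices are mine): the height at the origin equals the longest chain of nucleation points that is increasing in both light-cone coordinates, which reduces to a longest-increasing-subsequence computation.

```python
import bisect
import numpy as np

def png_height_at_origin(t, intensity=2.0, rng=None):
    """Sample the PNG droplet height h(0, t).

    The nucleation events that can influence (0, t) fill, in light-cone
    coordinates, a square of area t**2 / 2, carrying a Poisson process of
    the given intensity.  The height is the length of the longest chain of
    points increasing in both coordinates.  Read in x-order, the y-values
    are i.i.d. uniform, so this is a longest-increasing-subsequence
    computation, done here by patience sorting.
    """
    rng = rng or np.random.default_rng()
    n = rng.poisson(intensity * t * t / 2.0)
    ys = rng.random(n)
    tails = []  # tails[k] = smallest possible tail of a chain of length k+1
    for y in ys:
        i = bisect.bisect_left(tails, y)   # strictly increasing chain
        if i == len(tails):
            tails.append(y)
        else:
            tails[i] = y
    return len(tails)
```

With density two, the expected height grows like 2t, and the fluctuations around that are of order t to the one-third.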
The somewhat surprising result is that the height profile created in this way completely agrees with the top line in the domain-wall crystal shape problem, the one which I explained to you before. So now you can argue on the basis of universality and say: it's a growing object, and this is what the Kardar-Parisi-Zhang equation was made for. And so, rather than trying to say something about this polynuclear growth model, why not just invoke universality and try to solve the Kardar-Parisi-Zhang equation directly? This is what I wanted to show you in this slide here. So you see, we want to get a curved profile. What we will actually get is a profile which looks like minus x squared over 2t, so like a downward parabola, and then it will be shifted a little bit: there's a minus one over 24 times t, actually. So it's just this function over here. In order to do this, I have to start the KPZ equation with a very special initial condition, which is the sharp wedge initial condition. So this is a very narrow needle, and this needle immediately spreads into a parabola, which looks like this, shifted down. But these are typically non-universal things, so I don't care; I just concentrate my attention on this object, which already has roughly this nice rounded shape. But now of course, because I'm solving a stochastic equation, on top of this I will see little fluctuations. And if I want to figure out what they are: they are typically of size t to the one-third, and they are correlated over distances of order t to the two-thirds. That's already what Kardar, Parisi, and Zhang claimed, okay? But now, what's somewhat unexpected (we can deduce it from the PNG model, but in fact we can also do it directly on the level of the KPZ equation): there are certain quantities which you can compute.
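For the record, the exact one-point law for this sharp-wedge solution can be written as follows; I am quoting the commonly stated normalization from memory, so treat the precise constants as indicative rather than as the lecture's own conventions:

```latex
h(x,t) \;\approx\; -\frac{x^2}{2t} \;-\; \frac{t}{24}
\;+\; \Big(\frac{t}{2}\Big)^{1/3}\,\xi_t ,
\qquad
\lim_{t\to\infty} \mathbb{P}\big(\xi_t \le s\big) \;=\; F_{\mathrm{GUE}}(s),
```

where \(F_{\mathrm{GUE}}\) denotes the GUE Tracy-Widom distribution function. The parabola and the \(-t/24\) shift are the non-universal background; the \(t^{1/3}\) term carries the universal fluctuations.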
And in fact, you can write down an explicit formula for the height at the origin. So I'm sitting here at x equal to zero, and I'm just asking myself: how does this height fluctuate in the course of time? For this, you can actually write down a Fredholm determinant. So it's still a somewhat implicit expression: a Fredholm determinant involves an infinite-dimensional matrix, and you have to compute the determinant of this infinite-dimensional matrix, which typically is not such an easy thing. But in this case you can do approximations, and eventually, if you want to produce numerics, for every data point I basically have to compute the determinant of something like a 40 by 40 matrix. And what you see is the following picture. Here you see the very early times: this is like a Gaussian peak. When the interface just starts spreading, it's just the noise, and the noise is Gaussian, and therefore you see a Gaussian profile. But then if you wait, the nonlinearity takes over and you produce this t to the one-third scaling, which is already used in the plot; this is why the early-time profile looks so wide. And if I wait sufficiently long, you see that it overshoots a little bit and then eventually settles down to a curve, which maybe cannot be seen so easily. It's this blue curve here, right? The green curves are always in between and, well, anyway, that was just the way we made the picture, but eventually you settle at the one which you see here at the very, very end, right?
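To make the "40 by 40 determinant" concrete, here is a minimal numerical sketch (my own code and naming, not the lecture's) of the standard quadrature approach to the Airy-kernel Fredholm determinant, which yields the GUE Tracy-Widom distribution function; scipy's `airy` is assumed available.

```python
import numpy as np
from scipy.special import airy

def tw_gue_cdf(s, upper=10.0, m=40):
    """Approximate F_2(s) = det(I - K_Airy) on L^2((s, infinity)).

    The Fredholm determinant is discretized with m-point Gauss-Legendre
    quadrature on [s, s + upper]; since the Airy function decays
    superexponentially, the truncation error is tiny for moderate `upper`.
    This mirrors the "40 by 40 matrix" mentioned in the talk.
    """
    u, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1, 1]
    x = s + (u + 1.0) * (upper / 2.0)           # map to [s, s + upper]
    w = w * (upper / 2.0)
    ai, aip, _, _ = airy(x)
    # Airy kernel (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y);
    # on the diagonal it has the limit Ai'(x)^2 - x Ai(x)^2.
    num = np.outer(ai, aip) - np.outer(aip, ai)
    den = np.subtract.outer(x, x)
    diag = aip**2 - x * ai**2
    K = np.where(den == 0.0, diag, num / np.where(den == 0.0, 1.0, den))
    M = np.sqrt(np.outer(w, w)) * K
    return float(np.linalg.det(np.eye(m) - M))
```

Symmetrizing the quadrature weights into the kernel keeps the discretized matrix symmetric, which is the usual trick for stable Fredholm numerics.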
Okay, so this is the limiting distribution, and now you have a perfectly nice prediction. Namely, if I look at the statistics of this facet edge, and if I make the facet sufficiently large and do some statistics along some given ray, then what you should see in the long-time limit is exactly this curve over here. Okay, so that's fine. So we have an exact curve, which maybe you can compare with experiments. Actually, that's not easy. Even to see the Pokrovsky-Talapov law is something the group in Jülich worked on for many, many years, and even now, I would say, they see it, but not with the precision you would maybe like. So it's not so simple. Anyway, yes? Excuse me, should I go back? Yeah, so, okay. No, no, you see, maybe I go back to this picture, right? So here is my PNG problem; it looks something like this. And I'm sitting at x equal to zero, and at some later time I see something like this, no? And if I'm sitting here, the deterministic part, which of course is not shown in this picture anymore, is simply a linear displacement as a function of time. So what is claimed is that, relative to this linear displacement, which is not shown in the picture, at early times, if I look at these fluctuations, I will see essentially a Gaussian distribution; that's this blue curve. And if I wait long enough, then I will see the nonlinearity of the equation, and the Gaussian distribution will be transformed into something more complicated, and in the course of time this more complicated thing will settle down to something which is universal.
Universal in the sense that, if I compute it on the level of the KPZ equation, or if I compute it on the level of this discrete growth model, I will see exactly the same thing, no? (Question from the audience.) No, no, you have to put your head like this. You see, the distribution is this one here, okay? And what you should ask is: where does the asymmetry come from, right? Okay, that's a very good question, where the asymmetry comes from. The asymmetry comes from the fact that in this direction the edge can move fairly freely, but in the other direction it sees the other lines below, and therefore it's constrained. Okay, now let's see what we have here. Okay, so now we have this nice distribution, and now comes another surprise in this KPZ story, at least for the people who follow it. I'm not telling the precise history, but what is certainly correct is that this was obtained through some fairly lengthy mathematical argument, and at the end one found this particular distribution, with a very explicit formula which I have not shown you. But then people looked at it and said: do you know this distribution? We have seen it already. Now I tell you where it had been seen already, okay? This distribution was actually discovered by Tracy and Widom, and therefore it's called the Tracy-Widom distribution. And what they were interested in was a totally different question. Namely, they were looking at the so-called GUE ensemble, the Gaussian unitary ensemble. So you take an N by N matrix, which I call A, and you require that it is complex Hermitian, okay?
And now you look at random matrices: you take a Gaussian distribution, e to the minus trace A squared (for Hermitian A you could equally write A star A), suitably normalized, and you look at this random matrix, okay? So this is Wigner, and if you look at the density of states, what you find is the famous Wigner semicircle law. These are the energy levels, if you want. And if I put the N in correctly, the edge will be at 2N. So I have N eigenvalues, they are sitting somewhere over here, and I do the normalization in such a way that the distance between neighboring eigenvalues is of order one, so they are spread out, and then this edge would be at 2N, okay? Now you have a largest eigenvalue somewhere. In fact, the largest eigenvalue, and this is the asymmetry which I showed you, is a little bit below 2N. So here is maybe the largest eigenvalue, and then of course there are the other eigenvalues. And what Tracy and Widom asked is: what is the distribution, for large N, of the largest eigenvalue? They worked hard and found a particular formula, and what we know is that the quantity we compute on the basis of the KPZ equation has exactly this distribution. So the curve which I showed you before, the one at the very end of this time sequence, can be compared directly to the one you get from the GUE random matrices. And of course it also comes out of the PNG process. Okay, so let me not dwell on refinements; let me show you an experiment, so that you see that these things are not totally disconnected from experimental investigation. This is a famous experiment by Takeuchi and Sano. Takeuchi is a young person; it was actually his PhD thesis.
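A direct way to see this distribution numerically (my own sketch; I use the convention with independent N(0,1) entries, so the spectral edge sits at 2 times the square root of n rather than at the 2N of the talk's spacing-one normalization):

```python
import numpy as np

def gue_largest_eigenvalue(n, rng):
    """Largest eigenvalue of an n x n GUE matrix.

    A = (G + G^dagger)/2 with G having independent standard complex
    Gaussian entries; the spectrum then fills [-2 sqrt(n), 2 sqrt(n)]
    (Wigner semicircle) and the top eigenvalue fluctuates on the
    scale n**(-1/6) around the edge.
    """
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    a = (g + g.conj().T) / 2.0
    return np.linalg.eigvalsh(a)[-1]

rng = np.random.default_rng(0)
n = 100
scaled = np.array([
    (gue_largest_eigenvalue(n, rng) - 2.0 * np.sqrt(n)) * n ** (1.0 / 6.0)
    for _ in range(200)
])
# The histogram of `scaled` approaches the Tracy-Widom GUE distribution,
# whose mean is about -1.77; note the eigenvalue sits below the edge.
```

Even at n = 100, the skewed, left-shifted shape of the histogram is clearly visible.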
He just wanted to do it, so he looked up all this literature about liquid crystals, because he wanted to find a system which has a nice metastable phase, okay? There are many substances, and there are phase diagrams and all kinds of things. Anyway, he found one very particular one, and it's very beautiful: you can just watch it under transmission of light. So you have a little probe, something like a centimeter by a centimeter, you put it under a light microscope, and you can see what is going on, basically on the computer screen. So you prepare the system in the metastable state. If you watch, it doesn't change. The only thing that happens is that once in a while, at the border of your probe, a nucleus is created, and then a front advances and the phase immediately becomes the stable one. But this doesn't happen too often, and if it happens, he would simply disregard that particular sample, okay? So here you have your probe, nicely prepared in the metastable state. You shine a laser, a sharp laser making a little point in the middle, and it creates a nucleus of the stable phase. And then you see this nucleus growing, and what you see here is an experimental picture. What you see on your computer screen is basically this picture: the part which doesn't transmit light is the stable phase, and it is growing outwards into the metastable one. And then you can do the statistics. This is somewhere at the center, and you ask for the radial statistics of this little rough droplet. You can take samples, and you see, this is on a logarithmic scale, it goes down to 10 to the minus four. So he has something like 10 to the four samples, and he simply plots these things, okay?
And what you find, on this logarithmic scale, when you compare with Tracy-Widom, which is this dotted line here (forget about the GOE curve), is the following: these are the experimental data, and what you see with your naked eye is that it's not really quite Tracy-Widom yet. What's happening is that it's still a little bit shifted, because the mean is the slowest degree of freedom, the one you have to wait for in order to get the universal distribution. And if you could run the experiment a little bit longer, which he has done by now, then there would actually be perfect agreement with the prediction of the KPZ equation. So it really works, okay? Now there's one point, something for the experts, well, not really experts, one piece of general education which I wanted to mention. The piece of general education is that the mean of this distribution is universal. Everybody who has ever thought about any problem in statistical mechanics with universal properties should be surprised: usually the mean is non-universal, and the experts at first couldn't believe it, but it's actually very crucial. Anyway, it's not so important here. So now let me wind up a little bit with this particular topic. I guess I told you already the main part of the message, but the last part is: what is still missing in this story, right? Every story should also have something looking into the future. So the first question you could ask is: what happens if I vary the delta? You see, the computation was exactly at delta equal to zero. What happens away from it? Now you're stuck. With all these methods, there's just no idea what to do, all right? But there are strong people, and there is at least one thing they could obtain. This is work of Colomo and Pronko.
They have actually managed to compute the shape of the facet, the facet boundary. Let me maybe go back, otherwise you will not understand what I'm saying. You see, it's this circle here. At delta equal to zero, the facet edge happens to be exactly a circle. But if I now turn on the delta, who knows where it goes? Well, Colomo and Pronko have a formula for the facet edge, okay? So you know a little bit. Now you can ask: once they have the facet edge, can they actually show Pokrovsky-Talapov? That is out of reach so far; we just don't know. Can they show KPZ? We don't know. So at the moment... I was in Florence, and they challenged me and said: do you really believe that? And I believe immediately that if I put delta different from zero, well, of course it must stay the same, right? It would be very surprising, from a statistical mechanics point of view, if things suddenly changed dramatically. There's a little bit of repulsion or attraction between the lines, but of course that will not change the universal properties. So they said: well, but how do you know? Okay, I don't know. But we found this is actually an interesting challenge, and we are doing, at the moment, Monte Carlo simulations at delta equal to one half. This is the so-called alternating sign matrix point, which has particularly nice properties from a numerical point of view. And it's essentially finished. I don't want to show you any picture, but it is confirmed. Our numerical simulations are a little bit demanding for various reasons, in any case because it's a critical system and therefore not so easy to do, but we do see the KPZ behavior.
So that looks okay. Another thing which you can look at is delta larger than one. That's where the lines really want to sit next to each other. Then suddenly this whole nice facet disappears completely, and you see just a sharp edge, right? It's like having two phases of a ferromagnet which are simply touching each other: one is the plus phase, the other one is the minus phase. And this is related to the fact that the XXZ chain has a spectral gap, which vanishes exactly at delta equal to one. So as I go from larger delta down towards delta equal to one, this interface will of course have fluctuations of order one, but they start to become larger and larger, and so there's another critical phenomenon which you can analyze using this KPZ technology. And in fact, at criticality, you find these square root of t log t fluctuations, and they are actually non-Gaussian. So that's something which you can do. But there's another thing which is missing in this picture. Namely, you see, why in the world should I have the domain wall on the other side exactly at the origin? I might just simply shift it, right? So why don't I do the following: there's the origin, and now I fill up, so here they are; and now I put the other point over here and then fill up, right? So now this step is at this point, and of course, if this is large, to have something macroscopic I would make this distance also large, right? What happens then? Well, if you think in terms of surfaces, that's the most natural thing to do, because after all I have a surface in 2D and it has two slopes; the gradient is a two-vector, right? It can slope this way or it can slope the other way.
And of course, if I look at the profile in between, even if I put the boundary condition over here somewhere else, I will see a non-trivial slope. So I can enforce such a slope by shifting this, okay? We don't know how to handle that. There's simply no result, but it's a very natural question. Okay, so I went a little bit too quickly. Before doing this, maybe you first want to keep the symmetry and go to delta less than minus one. Now there's another interesting phenomenon which appears for delta less than minus one. You see, you have this outer edge here, which is of course what Colomo and Pronko computed. But now these lines really want to repel; they just don't like to sit next to each other. How does the system respond to this? Well, the answer is that in the middle it generates another little facet over here, which somehow takes up all that tension. And if you look into this facet, it's again essentially flat, but it has little statistical fluctuations. Actually, it's not like up here, where you have perfect facets; there really are fluctuations. And here you see the antiferromagnetic state: alternating empty, occupied, empty, occupied. So this is how the system reacts to enforcing a very, very strong repulsion. It doesn't break up the whole thing; no, out there the boundary condition forces you to make the perfect facet. But the only place where you can still do something is in the middle of the bounded piece, and there you create another facet, okay? Now, can we compute, for instance, the shape of that facet? There is work on this, but I don't want to go into detail.
I just want to emphasize that, while you might think I'm telling you something which has been completed, if you really think about it, there are lots of very natural open questions. So what about this slope? Well, I can impose the slope, for instance, by shifting the boundary condition, but then you might ask: maybe, as with the equivalence of ensembles, I can encode the slope directly into my quantum Hamiltonian. What does the slope mean? The slope means that you enforce that these lines, rather than making symmetric random walks, have to go up: they will have some average drift. That's what I'm enforcing. Can I put this directly into my quantum Hamiltonian? Yes, of course I can. What do I have to do? Here I've written down again this quantum Hamiltonian. Here I have the hopping term, and of course you are very much used to making it symmetric; this is what you're told by your teachers, right? Symmetric Hamiltonian, therefore symmetric hopping, okay? But in this statistical mechanics problem, if I want to impose a net drift, I should rather make an asymmetric hopping. So I put here an e to the theta, and here an e to the minus theta. Why I do it in this particular way is a different story, but the main point you should see is that the right-to-left and the left-to-right hops have different weights. Okay, and this is something which is called the asymmetric XXZ model, and I have thrown quantum mechanics out of the door. Why? Because it's no longer a symmetric operator, right? If I take the adjoint of this term, I get that one, but with the other coefficient, so it's not symmetric. This is why it's called asymmetric.
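One natural way to write down the asymmetric hopping the speaker describes (the overall signs and prefactors here are my guess at a common convention, not necessarily the slide's):

```latex
H_\theta \;=\; -\frac{1}{2}\sum_j \Big( e^{\theta}\,\sigma_j^{+}\sigma_{j+1}^{-}
\;+\; e^{-\theta}\,\sigma_j^{-}\sigma_{j+1}^{+} \Big)
\;+\; \frac{\Delta}{4}\sum_j \sigma_j^{z}\,\sigma_{j+1}^{z}.
```

For real \(\theta \neq 0\) one has \(H_\theta^\dagger = H_{-\theta} \neq H_\theta\), so the operator is indeed not self-adjoint; replacing \(\theta\) by \(i\theta\) restores self-adjointness, which is exactly the magnetic flux interpretation.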
Now think a little bit about what you learned in quantum mechanics. You have an XXZ chain; you can think of putting it on a ring, and then you put a magnetic flux line through, and you know that you will get something like this. However, that will be self-adjoint. So if I make an analytic continuation in theta, putting an e to the i theta here and an e to the minus i theta there, then I'm back in business: that's a self-adjoint operator, and it corresponds physically to putting this flux line through your ring. So if you want, I could study the magnetic problem and then do the analytic continuation, which of course would be a completely crazy idea; that will never work, right? But I just want to say that the connection between 1D quantum systems and 2D statistical mechanics is somewhat richer than what I indicated at the beginning. Okay, so I think this brings me back to the conclusions. Let me see, yeah, okay. So this is a summary of the first connection. And the idea was that we make a short break, and then I continue with something, well, not orthogonal, but somewhat separate. I just want to emphasize again: what I was doing here was using the connection between the statistical mechanics of 2D line ensembles and quantum spin chains. And the object which we looked at was something very, very specific, namely a facet in the statistical mechanics model and the fluctuations of the facet edge. And the conclusion was that the statistics of the facet edge are governed by an equation which was written down for completely different purposes, but which happens to model also these very particular fluctuations. So if there are questions: thank you very much. Yes? (Inaudible question.) I don't know offhand. I would have to, okay, I think I cannot really immediately answer that.
I would have to do a little bit of thinking. But if you catch me later, maybe I can answer. So, I mean, we mostly investigated that very particular connection. It could very well be that something similar holds if you look at a somewhat more general model. But you see, in order to get the connection, you somehow have to connect it to some sort of facet edge, right? There's no connection if you look at any bulk property or so; that just doesn't make any sense. The whole point of this lecture is that if, in the statistical mechanics version of this model, you create something which has a facet edge, then you're in business, right? So you first have to create a facet edge. Yes. Let's go back to this. (Question: can you get rid of the magnetic field?) Well, it's a magnetic flux; there will be a current which is induced in the XXZ ring. So no, I don't think you can get rid of it that way. You see, it has periodic boundary conditions, so it depends on the boundary conditions. If you make other boundary conditions, you might, but in this particular case I don't think you can get rid of it. Well, okay, so let's make a little break and continue then. (Comment from the audience: there are also connections because people now study dynamics in what are called random circuit models.) That's correct, I know, yes. That's another direction, but it's not something I have really worked on myself, so I would not be able to say much. (No, but you've mentioned it in the school before.) Oh, okay, yeah. So in this sense there are additional connections.
No, but the point is that this entanglement entropy, that's the main point: they establish that the entanglement entropy satisfies KPZ-type dynamics. That's correct, yeah. And once you're at that stage, more or less obviously, it should be okay. Good. (Break.) All right. Okay, so this now is, as I said already, a totally different talk. In case the first part was slightly hard to understand: this part is completely standalone. Of course, what is kept is the connection to KPZ, but otherwise it's about a very different kind of physics. Okay, so here I'm now really starting with quantum spin chains, and from a statistical mechanics point of view there's a very particular way of probing the dynamics, namely time-dependent properties very close to global thermal equilibrium, and the usual terminology for these objects is equilibrium time correlations, in our case for quantum spin chains, okay? So, of course, some of you know this, but just for the sake of being on the same level, let me talk a little bit about equilibrium time correlations in general, and then come to the more specific things which I would like to explain. Okay, so I'm going to look at a generic spin chain, which is translation invariant, and there's no disorder. I'm not looking at many-body localization or anything of that sort.
So it's just one of the standard spin chains, and the crucial point for spin chains is that when you look at the energy, there is an energy density, which is here the little h sub j, and the full Hamiltonian is simply the sum over the local densities. The other important point, which depends on the model, is that this density is actually local; maybe it could be quasi-local, but let's say it's local. So let's just look at the XXZ chain to have an example. I first look at the operator h at lattice site zero, which is simply this nearest-neighbor coupling, sigma zero sigma one, with the delta on the zz component; that's the Hamiltonian which I've written down before. And if you now want to write the full Hamiltonian, I take this operator and simply shift it: if I want the density at site j, I shift it by j. This is what is meant by h_j: it's the same operator as h at zero, just shifted by j lattice sites. And if you shift it, this index becomes j and that one j plus one, and if you sum, you see the same operator I told you before. Okay, so these are the kinds of things you want to know, and then of course there's the thermal equilibrium state, which is given by e to the minus beta H, and the average is usually denoted by a subscript beta. I imagine that the system is very large; since we are in one dimension, you can easily think of the infinite-volume limit. That's not so important here.
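A minimal sketch of this construction, H equals the sum of shifted local densities (my own code and naming, using the sigma-sigma convention just stated, with open boundaries for simplicity):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else id2)
    return out

def local_density(j, L, delta):
    """h_j: the energy density on bond (j, j+1), i.e. h_0 shifted by j."""
    return (site_op(sx, j, L) @ site_op(sx, j + 1, L)
            + site_op(sy, j, L) @ site_op(sy, j + 1, L)
            + delta * site_op(sz, j, L) @ site_op(sz, j + 1, L))

def xxz_hamiltonian(L, delta):
    """H = sum_j h_j (open boundary conditions, for simplicity)."""
    return sum(local_density(j, L, delta) for j in range(L - 1))
```

As a sanity check, the resulting H is Hermitian and commutes with the total magnetization, the most familiar conserved charge of the XXZ chain.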
So now I want to look at some local observable a. Again, local just means that it depends on a few lattice sites close to the point of reference, let's say here zero, and a_j is the same operator shifted by j. And now I want to look at the time correlation. This is the usual definition; you might take the Kubo product, but let me not make things unnecessarily complicated. So here I look at this operator at lattice site j, evolved to time t, and the same one at site zero, and I take the thermal average. That's why it's called equilibrium; it's "time" because I take the time-displaced correlation; and I put the c because I subtract the product of the averages, so the c stands for connected, okay? Now I just remind you that there's a Lieb-Robinson bound. Once I've fixed all these things, I just have a function of j and t, and the Lieb-Robinson bound tells you that if I fix the time t and make j large, then eventually this object becomes essentially uncorrelated. And the way it becomes uncorrelated is linear in time: there is this light cone, bounded by the Lieb-Robinson velocity. That velocity is of course model dependent, and if I'm outside the light cone, these correlations are essentially zero, okay? So that's a very, very generic structure. Now, there's another way of looking at these things, which is perfectly equivalent: I can put this operator a into my initial state.
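In formulas, the objects just described are (standard definitions, written in my own notation, which need not match the slides):

```latex
S(j,t) \;=\; \big\langle a_j(t)\, a_0(0) \big\rangle_\beta
\;-\; \big\langle a_0 \big\rangle_\beta^{\,2},
\qquad a_j(t) \;=\; e^{iHt}\, a_j\, e^{-iHt},
\qquad
\big\| [\, a_j(t),\, a_0 \,] \big\|
\;\le\; C\, e^{-\kappa\,(|j| \,-\, v_{\mathrm{LR}}\,|t|)} .
```

The last inequality is the generic form of the Lieb-Robinson bound: outside the cone \(|j| \lesssim v_{\mathrm{LR}}\, t\), the connected correlation \(S(j,t)\) is exponentially small.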
Maybe I want to make it a real density matrix, so I put rho to the one half on either side, and then I can think of the physically same thing: I have my system prepared in equilibrium, I put a perturbation somewhere close to the origin, and I ask myself how this perturbation moves in time. What Lieb-Robinson tells you is that it can grow at most linearly — maybe less, but certainly at most linearly, right? Now, of course, there's still a lot of arbitrariness in what kind of operator to take, and for the purpose of this talk I want to restrict myself to operators which come from local conservation laws. This means that if I'm thinking of the energy, the energy has a density — these are the h_j's — and then I would look at the local energy-energy correlation, time-displaced, right? Of course you can do other things. At this summer school you have presumably seen the out-of-time-ordered correlators, which are commutators squared and then expectations, and there are lots of other things you could do, but for my purpose I just want the standard time-displaced correlation, okay? Now, if I want some theoretical understanding of these things, it's actually a useful idea to first make a list of all the conserved quantities. I call them Q here — they are very often called charges, so think of them as charges. Q_n is the extensive quantity with a label n, and of course it's assumed that they have local densities, so I can sum them, just as for the energy, like this, okay?
And once they are local, if I look at the time change of the local density, it must satisfy a conservation law — that's just the fact that the global quantity commutes with the Hamiltonian. So this time derivative can be written as a difference of currents, telling you how much is coming into the site and how much is moving out, and if I sum over all sites, I have a telescoping sum which is equal to zero, right? This is well known, but I just wanted to remind you. Of course, H will be part of the list — depending on what model you're looking at, H has label two or label four — anyway, H is part of the list. And I just want to point out that in general there's no reason why these conserved charges should commute amongst themselves. Typically it depends on the model, and of course there are models where they simply don't commute: if you think of a Heisenberg model and the three total spin components, they certainly will not commute, right?
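The local conservation law just described, in formulas (my notation):

```latex
\frac{d}{dt}\, q_n(j,t) \;=\; i\,[H,\, q_n(j,t)] \;=\; \mathcal{J}_n(j,t) - \mathcal{J}_n(j+1,t),
```

```latex
\frac{d}{dt}\, Q_n \;=\; \sum_j \big( \mathcal{J}_n(j) - \mathcal{J}_n(j+1) \big) \;=\; 0,
```

where the second sum telescopes to zero (on a ring, or in the infinite volume with decaying currents), expressing [H, Q_n] = 0.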
Okay, so this is general, and now I have two cases. One case is where this list of conservation laws never stops — basically you have a number of conservation laws proportional to the size of the system, so the list goes on forever. That is the very exceptional case of an integrable quantum chain, which I do not want to discuss. We just had a four-day conference at SISSA with lots of talks about integrable quantum chains, but here I don't want to discuss them. And integrability is easy to break: take the XXZ model, which is integrable; if you want to make it non-integrable, you put in either higher spin or maybe a next-nearest-neighbor coupling, and it immediately becomes non-integrable. So I just want to look at the non-integrable case. And now comes a real riddle, which to me is a basic question, but it's such an obvious question, and so obviously difficult, that one doesn't know what to do. Empirically, you study all kinds of one-dimensional quantum spin chains, and you discover that the number of conserved quantities — say spin and energy or something like this — goes one, two, three, four, and then it stops. Of course you can somehow artificially construct more, but if you take a truly interacting model, you will never find anything which has 57 conservation laws. So either it's very few or it's infinity. Why? Anyway, there's this dichotomy: either you have a few conservation laws and the model is non-integrable, or it's immediately integrable.
Anyway, I'm assuming that the number n of conserved quantities is fixed and small — typically you should think of two or three, maybe four, okay? Once you have several conservation laws, you have to expand your equilibrium states a little bit, because then there will be chemical potentials, so you get an equilibrium state with this particular structure, right? Okay, so let me repeat the question. I fix the parameters of my equilibrium state, which means I fix the chemical potentials. In this equilibrium state I want to look at time-displaced correlations. Since it's a correlator and I have n conserved quantities, this will be an n by n matrix as a function of space-time, and I just want to know how it depends on space and time. That's it, okay? A rather simple and straightforward question. But maybe it's not so easy to answer, and so, in order to build up some intuition and to make the connection with the KPZ equation, let me go back to a case which is much easier and much better understood, namely a classical chain. So for a little while I will just talk about the classical chain, and then the analogy will be quite obvious, okay? So here's my classical chain, called an anharmonic chain. I have my lattice label, and at every lattice point I have a position and a momentum — so think of it really as a lattice field theory. Then there's an energy: the kinetic energy plus an interaction energy, which I assume to depend on the differences of the positions through some nonlinear potential, right? There are famous examples: the FPU chain is of this form, with a quartic polynomial as interaction, or you can take a hard core, which is, say, zero outside diameter a and infinity otherwise.
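The anharmonic chain just described can be written compactly (my notation; the FPU coefficients alpha, beta are the ones that reappear in a question later):

```latex
H \;=\; \sum_j \Big( \tfrac{1}{2}\, p_j^2 \;+\; V(q_{j+1} - q_j) \Big),
\qquad
V_{\mathrm{FPU}}(x) \;=\; \tfrac{1}{2} x^2 + \tfrac{\alpha}{3} x^3 + \tfrac{\beta}{4} x^4,
```

and the hard-core alternative is V(x) = 0 for |x| > a, V(x) = +infinity for |x| < a.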
Now, the hard-core model of course is integrable, because in a one-dimensional collision equal-mass particles just exchange velocities — if you relabel, the particles simply pass through each other. So if I want to make this model non-integrable, I have to break the integrability, and the way this is usually done is to put alternating masses: mass one, mass three, mass one, mass three, and this immediately breaks integrability, right? Another standard example: the Toda lattice is integrable. Anyway, now I want to follow my prescription. First, list the conserved quantities — I'm always simply assuming that the chain is not integrable. Okay, well, the first conserved quantity is actually the stretch, which is the distance between neighboring particles, so it's just a positional difference. Then the momentum is a conserved quantity, and the local energy. The local energy, because it's written like this, is also written in terms of the stretch, and so you can think of your evolution equations in terms of the stretches and the momenta. That's slightly unfamiliar, but it's a convenient way to look at it. Well, let's first do the equilibrium. I take e to the minus beta H — you see, it's essentially a product. Here I put already the r_j, and if I rewrite this in the exponential, you see that I have the kinetic energy, which might be displaced, so I have a mean velocity u replacing this object. Then of course there's the V of r_j, but you see you also have to fix the stretches, which you usually do microcanonically by pinning the chain at either end. If I go to the canonical picture, this induces simply a linear extra potential, and if you look at the formula carefully, you see that this is nothing else but the pressure, or the tension, inside the chain, right?
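Written out, the three conserved fields and their evolution equations — each already in conservation-law form — together with the product equilibrium measure, read (my notation, unit masses):

```latex
r_j = q_{j+1} - q_j, \qquad e_j = \tfrac{1}{2} p_j^2 + V(r_j),
```

```latex
\dot r_j = p_{j+1} - p_j, \qquad
\dot p_j = V'(r_j) - V'(r_{j-1}), \qquad
\dot e_j = p_{j+1} V'(r_j) - p_j V'(r_{j-1}),
```

```latex
Z^{-1} \prod_j \exp\!\Big[ -\beta \Big( \tfrac{1}{2}(p_j - u)^2 + V(r_j) + P\, r_j \Big) \Big]\, dp_j\, dr_j,
```

with the three parameters beta, u, P: inverse temperature, mean velocity, and pressure (tension).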
The main point of this little formula is that, as promised, I have three conserved quantities and three parameters. Later on I will put u equal to zero, because I can get rid of it by Galilean invariance. And you see that the equilibrium measure is particularly simple — it's just a product. So equilibrium is totally trivial, right? All right, so this I can do. Now I can look at just the quantity which I discussed, but within this classical context: I have this evolution equation, and as promised there are no q's appearing anymore, so it looks like this. You see the coupling of lattice site j to the right neighbor, and the coupling of lattice site j to the left neighbor. It looks more asymmetric than what you are used to from a quantum spin chain, but that's simply because I have packed the stretch and the momentum at each site into one vector, and of course they have very different properties, okay? Now I can look at this time correlation — g is just the abbreviation for the three conserved fields R, P and E as functions of j and t. As promised, this equilibrium time correlator is a three by three matrix: alpha and alpha prime label the conserved quantities, and then it's a function of space and time. So what are you going to see? Well, you are always going to see the following picture: three peaks, all the time. Now, the area under these peaks depends on which particular linear combination I take here — think of this as a matrix, and I'm looking at particular matrix elements, and it depends on which matrix element I'm taking.
So what I'm saying is: even if you allow something arbitrary — multiply from the left and from the right with some vectors, so you get a scalar quantity — you will always see this same picture, up to weights, because it's self-similar. Maybe the area under one peak happens to be very small or even zero, so you only see the right-moving peak, but you will always see the same structure, right? So you will have three peaks: the sound peaks moving with plus and minus the velocity of sound, and then another peak, called the heat peak, sitting in the middle, which has its own particular shape, okay? The emphasis here is that while at first sight you might think you see something very complicated, the structure is in fact, at least in first approximation, very simple: three peaks, and these peaks are broadening in time. Of course they're broadening in time — they don't stay like this. And in order to convince you, I brought along a numerical computation. What has been done here is a simulation of the Fermi-Pasta-Ulam chain, and the color code is three different times. Here is the early time, so here you see the central peak and here the two sound peaks. Now, the peaks look very funny, very unphysical. Of course they look unphysical because we have normalized the area under each one of these peaks to one — in actual fact they are normalized by particular susceptibilities, and therefore it would look different. But in order to see the principle, it's useful to normalize them to one, and so you see what happens in the course of time — this is 800, this is 1,300, this is 2,700.
You see this peak moving over here — over here you can hardly see the green peak, but out there is the green peak. Of course this motion is linear, but the peaks are broadening with a particular exponent, and you see also the peak in the middle. One thing I wanted to point out is that the middle peak seems to behave somewhat differently from the others: it has rather long tails, which almost merge into the noisy region, and there's a deeper reason for that. So there is fine structure in the way they broaden, but I want to emphasize that the basic structure is three very well-defined peaks. They broaden sub-linearly: they separate linearly in time, but their width grows sub-linearly. Yes, question? Sorry — the same difficulty as Uli: at that distance, and with the poor acoustics, I cannot hear. Ah, okay, that's a good question. So no, it's not independent of alpha and beta, but the point is that the three peaks are always there, whatever I do with alpha and beta. The question was how much this depends on the particular potential I'm using — alpha and beta are the coefficients in front of the x cubed and the x to the power of four, right? What was that? Excuse me? It's a three by three matrix, yes? Oh, you're talking about — sorry, I missed the question. Okay, so let me write this down more clearly. I want a scalar quantity, so let's say I write S alpha alpha prime of j and t, and then I have some vector: psi of alpha, psi of alpha prime, summed over alpha and alpha prime, okay? And now I can ask myself: as I change this vector, does the picture change, right?
And the answer is: yes, it changes, but the only way it changes is self-similarly, in the sense that the area under each peak depends on how I twist the vector. I can twist it in such a way that I see no left-moving peak, or in such a way that I see no central peak. But the point is that it will always be a linear operation, so whatever I do, I will always see qualitatively this picture, except for the weights of the various contributions, right? Well, that's going to come, yes. Okay, so this is what you see. And now, since you asked already: what is the broadening? There is a lot to say here — three years ago I actually lectured in exactly the same room, I had six hours, and I explained all the details of this beautiful picture. But today I'm just interested in one fact, namely: when you look at the broadening of the sound peak, the claim is that it will be KPZ-like, okay? The broadening of the sound peak, generically, will be as you would compute from the KPZ equation. This I'm going to explain on the next slide. Yes, okay, so now I want to explain the connection to the stationary KPZ equation. Let's have a look. Here I've written down for you again this equation, and you see, this equation describes a growing surface and is therefore non-stationary: the surface keeps growing and growing, it doesn't settle to anything. So you might wonder, how can I do stationary? Well, it's a simple trick which you know very well. Think of a random walk. It's non-stationary, because the mean square displacement grows indefinitely. So what is stationary in a random walk? Can I talk about a stationary random walk? Yes, of course I can — but stationary are the increments.
You see, the steps which I'm taking can have stationary statistics. So rather than looking at the walk, I look at its increments — it's the velocity which is stationary. And the same thing over here: if I take a spatial gradient, so height differences, this will be stationary. Now, taking gradients is easy. Here I take the spatial derivative, and it just goes through everything: u is the derivative, and now I have everywhere a derivative which I can pull out. And I get a nice, again very generic, equation. You see, it has the form of a conservation law, which I would like to have: d_t u equals d_x of some current. Now this current has a linear piece, the one which you would naturally write down. There is intrinsic noise from the system, and there is a fluctuation-dissipation theorem which tells you that once you have dissipation — this is the friction term — you should have noise. So this is very well known. And then there's a nonlinear piece of the current, just saying that in general there's no reason to expect the current to be a linear function, and therefore I put in the lowest nonlinearity. And again — I emphasized this before — the claim is that if I simply drop the nonlinearity, I will get Gaussian behavior, which is qualitatively wrong. In order to explain the phenomena, I really have to keep the nonlinearity. Anyway, you look at this equation, you do a little bit of an exercise, and you find that it indeed has a stationary solution. The stationary solution is a u whose statistics does not change in time, but I have to tell you what that statistics is.
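The passage from KPZ to the stochastic Burgers equation just sketched, in formulas (my notation for the coefficients):

```latex
\partial_t h \;=\; \tfrac{\lambda}{2}\, (\partial_x h)^2 \;+\; \nu\, \partial_x^2 h \;+\; \sqrt{D}\,\xi,
\qquad u = \partial_x h
```

```latex
\Longrightarrow\quad
\partial_t u \;=\; \partial_x \Big( \tfrac{\lambda}{2}\, u^2 \;+\; \nu\, \partial_x u \;+\; \sqrt{D}\,\xi \Big),
```

with xi space-time white noise; taking d/dx of the KPZ equation turns it into a conservation law for the slope field u.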
And the statistics is simply that this function u of x is white noise in space: it's Gaussian, and it has this particular delta correlation. That's really a stationary solution of this equation. Well, now that I know the stationary solution, I can write down my time correlation: u of x and t with u of zero, zero. And for this, from the KPZ equation — this is a somewhat difficult proof; it's actually a theorem, and not only for KPZ, which in fact came latest, after corresponding results for other models, but in any case it's a difficult and rather intricate theorem. The assertion is: there's a normalization in front, which is just the integral over x. The integral over x is conserved, because it comes from a conservation law, and it's just a static susceptibility. The integral over this function is normalized to one — this is why I need this factor; actually there should be a minus here, so that when I do the integral over x it equals one — I just missed that sign. And the crucial point is: this tells you that these sound peaks broaden like t to the two-thirds — that was your question — but we also know the exact scaling function. The scaling function is a quantity which I compute on the basis of this stochastic Burgers equation, and it is a particular function. Maybe I was lazy when I wrote this transparency: I missed here the index KPZ — it's that function. I'm not going to say very much about it: it's normalized to one, it's positive, it's an even function, and it has tails which decay like exp of minus a constant times x cubed. It's given essentially in terms of some infinite-dimensional determinant, which I'm not going to write down.
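The stationary measure and the scaling result just stated, written out (my notation; Gamma is a model-dependent coefficient built from lambda and the susceptibility):

```latex
\langle u(x)\, u(x')\rangle \;=\; \frac{D}{2\nu}\, \delta(x - x'),
```

```latex
S(x,t) \;=\; \langle u(x,t)\, u(0,0)\rangle
\;=\; \frac{D}{2\nu}\, (\Gamma |t|)^{-2/3}\, f_{\mathrm{KPZ}}\!\Big( (\Gamma |t|)^{-2/3}\, x \Big),
```

with f_KPZ even, positive, normalized to one, and decaying like exp(-c|x|^3) for large |x|.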
So my main point at this stage is that on the classical level we know very well — with numerically and, to some extent, theoretically well-established evidence — that these moving sound peaks have a very specific scaling form. They broaden like t to the two-thirds, and their shape is an explicitly computed scaling function, the asymptotic shape. Maybe I should say this here: try to look at the green line. It's still not completely perfect — it should be a symmetric function, but it is not really symmetric yet, so here we are still a little bit away from the scaling regime. But if you overlay simulations for even longer times — which we have — you can see that it becomes more symmetric. Okay, now a short word on how this works; let me make it very short. It's based on a nonlinear version of fluctuating hydrodynamics, of which the stochastic Burgers equation is one particular case, and the argument is that if we wait long enough, these peaks decouple, and the sound peak by itself can be described by this Burgers equation. But basically, forget my explanation — it's too short anyhow, and it's not so important for what I'm trying to say. What is more important is that these time correlations obviously have a ballistic component: they are moving outwards. I'm not sure whether I'm making the connection to the quantum systems very well here, but for some of you: usually the presence of a ballistic component is detected by the so-called Drude weight. Let me introduce this terminology, because it's very widely used in the quantum community, and let me explain it in this context.
One way to define the Drude weight is the following. Before, I looked at the conserved quantities; now I look at the current-current correlation. This is the current across the origin at time zero, the initial time, and this is the current across lattice site j at time t. I want the total current, so I sum over all j — this is a well-defined sum, because by Lieb-Robinson the correlation actually decays exponentially in j outside the light cone. Okay, so this is the total current-current correlation as a function of time. Now, ballistic transport is reflected by this function not decaying to zero, okay? If I look at the long-time limit of this function and discover that it's not zero, then there must be some ballistic component which is moving out. Which ballistic component, one doesn't know — it's a sum rule, so it doesn't give you much detail, but it tells you that there must be some ballistic component. Okay, so I just want to say that one conclusion of all these discussions is that the Drude weight here is definitely non-zero, and it comes precisely from the two sound peaks, right? One can compute the Drude weight explicitly, but that's not very important — I just want to emphasize that there is a probe which is very useful and which the quantum community mostly uses. All right. Okay, so now I want to come to the quantum chains, and I just want to ask the same question: I take a quantum chain, I have some conserved quantities, and I want to compute the time correlations. And now I discover something which still puzzles me — I have asked many people, including Uli and many others — but the net result is that we simply don't know. I want the model to be non-integrable.
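The Drude-weight definition just given, as a formula (my notation for the current operators):

```latex
D \;=\; \lim_{t\to\infty}\; \sum_j \big\langle \mathcal{J}(j,t)\, \mathcal{J}(0,0) \big\rangle_\beta^{c},
```

so D is the long-time limit of the total current-current correlation; D different from zero is the sum-rule signature of a ballistic component.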
You see, I want to have these moving sound peaks, so I want a non-zero Drude weight. And it seems that as soon as I impose non-integrability, the Drude weight is always zero. Maybe we are just not clever enough to find a model, or maybe it's some basic fact about quantum spin chains. So it seems that when I write down a non-integrable quantum spin chain, I never have any ballistic component. That's of course not very good news, because then I'm not going to see the KPZ behavior, right? But let's dwell a little on this. Once the Drude weight is zero, the most natural thing to say is: well, all these complicated nonlinearities you can just forget; I look at the linear equation, and the linear equation predicts simply a Gaussian peak broadening like the square root of t, just as you would get from the fundamental solution of the heat equation. So maybe that's what it is — maybe in quantum chains we only find diffusive behavior. Now, I have a coworker who is very good at these numerical things — he knows how to do the CTMRG and all that kind of stuff, which I wouldn't know how to even touch. He's very good at this, we have had a lot of discussions, and I asked him whether he could compute for me the density-density correlation in a simple system. I guess here we put hardcore bosons — you can also think of a Bose-Hubbard chain — and we took three states per site, say zero, one, two. The system size is roughly 100, in some simulations 150, and we work at infinite temperature, which is the easiest case. So now you see the 2018 version of the pictures which Ulrich showed you, which were from much earlier. And this is what you find.
These are just almost perfect Gaussian peaks. So the picture is: when I take a quantum chain — at zero temperature things might be different, but if I look at some finite, not too low temperature — there are no sound peaks; you see diffusive peaks, and they broaden like the square root of t. Of course there are numerical limitations — you cannot go to much longer times, and all these kinds of things — but I think the numerical evidence is very clear, and everybody who works in the area, even without these pictures, is very much convinced that this is what happens. Now, of course, you might have several conserved quantities, but the story is still the same: with two quantities you have a two by two matrix, but each matrix element will spread like a Gaussian, right? All right — so, bad news, okay? No propagating waves, no KPZ, nothing. Okay, so the last thing I want to explain to you is that life is actually a little more complicated, and in fact there is a somewhat subtle way to escape this. Unfortunately it is a little subtle, and whether one can really implement it numerically, we don't know — experiments seem to be even further away — but the mechanism is of general interest, and so here I want to show you this picture. This is the version of what was shown before, just a little bit later.
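The diffusive form invoked in the last two paragraphs is just the heat kernel; in my notation, for each conserved field alpha:

```latex
S_{\alpha\alpha}(j,t) \;\simeq\; \chi_\alpha\, (4\pi D t)^{-1/2}\, \exp\!\Big( -\frac{j^2}{4 D t} \Big),
```

a single Gaussian peak centered at the origin, with width growing like the square root of t and total area fixed by the static susceptibility chi_alpha.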
Okay, so now I want to come to this last point. Again, I don't want to make it too specialized, so I just want to emphasize that there is some general feature which I think is interesting, which one should look out for, and which I think is much more general than the specific example I'm telling you about — and it's that case which we have studied. Actually we have studied two things. We have studied classical spins, where we see the same phenomena — but I want to keep the material short — and we have studied, in equal detail, the discrete nonlinear Schrödinger equation, so the two slides which I'm going to show you are for the DNLS, okay? You should think of this as something like the semiclassical limit of a bosonic field theory — that's basically what it is — and it lives on the lattice. If I put this model on the continuum, it becomes integrable and has totally different behavior, okay? So what do you do? Here I've written down the Hamiltonian: psi sub j is a complex-valued field — your wave field, wave function if you want. At each lattice site I have one of these variables, this is the usual kinetic energy, and then there's a nonlinearity, which is psi to the power of four, right? And the coupling g is positive, which is usually called the defocusing case. If I put nontrivial commutators, I get a quantum field theory, but here these are classical variables, so it's like the classical limit of that theory.
Now, the canonically conjugate variables are psi and psi star, and if you write down the equations of motion according to this prescription, you find simply: there's the i, there's the lattice Laplacian, and then there's the nonlinearity you would expect — the derivative of the quartic term, which is psi j absolute value squared times psi j. So this is the evolution equation for this nonlinear lattice equation. Okay, now I follow again the prescription: list the conserved quantities. There are two, energy and density, and I can write down the e to the minus beta H. In this case they are a little bit coupled, so you have to work a little more on the phase diagram and things like this, but that's not very hard. However, written like this you don't gain any intuition, so the good way is actually to go to polar coordinates. Here I put the square root, because then this transformation is really canonical, so I have amplitudes and phases, and I can rewrite the Hamiltonian in terms of amplitudes and phases. But remember that once I have done this coordinate transformation, there are boundary conditions: I must make sure that the phi is really sitting on the circle, and there's a boundary condition at zero amplitude, so when I write down the equations of motion, there are really boundary conditions. Anyway, you do this transformation, and what you find is something quite simple: you find a coupling which is now a cosine — a nearest-neighbor coupling between neighboring phases — and then something like an on-site potential. I smuggled in this chemical potential, and you see that if I put the chemical potential in the right way — if I make it, with this sign, very large — then the potential is actually a Mexican hat.
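Going back to the equation of motion for psi_j stated above, here is a minimal numerical sketch. The sign conventions, the coupling g = 1, the small ring of 32 sites, and the fourth-order Runge-Kutta integrator are all my assumptions for illustration; the check is that the two conserved quantities of the talk — density and energy — stay constant along the trajectory.

```python
import numpy as np

def dnls_rhs(psi, g=1.0):
    """i d(psi_j)/dt = -(psi_{j+1} + psi_{j-1} - 2 psi_j) + 2 g |psi_j|^2 psi_j."""
    lap = np.roll(psi, -1) + np.roll(psi, 1) - 2.0 * psi   # lattice Laplacian, periodic ring
    return -1j * (-lap + 2.0 * g * np.abs(psi) ** 2 * psi)

def rk4_step(psi, dt, g=1.0):
    # classical fourth-order Runge-Kutta step
    k1 = dnls_rhs(psi, g)
    k2 = dnls_rhs(psi + 0.5 * dt * k1, g)
    k3 = dnls_rhs(psi + 0.5 * dt * k2, g)
    k4 = dnls_rhs(psi + dt * k3, g)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(psi, g=1.0):
    # H = sum_j |psi_{j+1} - psi_j|^2 + g |psi_j|^4
    return np.sum(np.abs(np.roll(psi, -1) - psi) ** 2 + g * np.abs(psi) ** 4)

rng = np.random.default_rng(1)
psi = 0.3 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
n0, e0 = np.sum(np.abs(psi) ** 2), energy(psi)   # conserved density and energy
for _ in range(1000):
    psi = rk4_step(psi, 1e-3)
# relative drift of the conserved density; should be tiny for this step size
print(abs(np.sum(np.abs(psi) ** 2) - n0) / n0)
```

The almost-conserved phase differences discussed next are invisible to this naive list: they only emerge at low temperature, when umklapp events become rare.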
Now, of course, we are in one dimension, so there will be no phase transition, but dynamically you will see this. Let me show what I mean. I'm still looking at this equilibrium distribution, and you see: at high temperatures I'm very far above this Mexican hat, so I get just a uniform distribution of phases and completely disordered behavior. But if I now make the temperature very, very small, and impose the constraint that the average length of these fields should be equal to one, then the typical configuration will sit at the minimum of the Mexican hat. So if I look at one typical equilibrium configuration, this will be one value, that's the next value, and from site to site it changes very, very little, so it makes something like a diffusion inside the brim of the Mexican hat. But to actually tunnel outside the Mexican hat, or over the top from here to here, is extremely unlikely. Now, dynamically this is reflected by what people call umklapp processes. This basically means: I look at these phase differences, and I look at when a phase difference goes over the barrier — maybe I should show this. You see, the phases have this cosine potential, so the umklapp means — it's like an inverted cosine — that when I take this phase difference to be plus or minus pi, I'm going over the barrier in the Mexican hat. Okay, so I have these umklapp processes, which, if you define them, mean that the phase difference crosses plus or minus pi, and of course nothing drastic is happening there.
I mean, the motion just continues on the other side, but the point is that this event at small temperatures is extremely unlikely. And so what you discover is that when I look at phase differences at small temperatures, they will be almost conserved. So you see, the naive computation of conservation laws was certainly correct, but it completely missed this feature. The point is that the phase diagram is slightly more interesting: if I go, maybe not to zero temperature, but to some intermediate temperature, then I discover that the phase differences are actually almost conserved. Of course they do decay, but the rate is exponentially small in the inverse temperature, so small that it is way, way beyond what you can reach numerically; a phase difference really behaves just like a conserved quantity. Now, if I want to make it truly conserved, then rather than continuing periodically I would simply reflect at the two barriers of my cosine potential. You see what I would do: I have here this inverted cosine, something like this; of course the umklapp goes across, but instead I can put an infinite barrier there and simply let it reflect. If I do this, then I get a model which has truly three conserved quantities, and it is this model which correctly describes the low-temperature behavior, I mean not very low, but low temperature, okay? So now I am in good shape again, because that's exactly the type of anharmonic chain which I discussed before, of course with slightly different variables, but the abstract structure is very similar. In particular, what I find is that there will be a non-zero sound velocity, okay?
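The umklapp bookkeeping just described, watching wrapped phase differences jump across ±pi, can be sketched in code. This is my own illustration, not from the lecture; the function names and the sampling convention (umklapps counted as apparent jumps larger than pi between consecutive, sufficiently dense samples) are assumptions:

```python
import numpy as np

def wrap(theta):
    """Wrap angles to the interval (-pi, pi]."""
    return np.angle(np.exp(1j * theta))

def count_umklapps(phi_t):
    """Count umklapp events in a time series of phases.

    phi_t: array of shape (T, N), phases phi_j at T successive sample times.
    A wrapped phase difference r_j = wrap(phi_{j+1} - phi_j) performs an
    umklapp when it crosses +/- pi between two samples, which shows up as
    an apparent jump of its wrapped value larger than pi."""
    r = wrap(np.diff(phi_t, axis=1))            # wrapped phase differences
    jumps = np.abs(np.diff(r, axis=0))          # change between sample times
    return int(np.sum(jumps > np.pi))
```

At low temperature such events become exponentially rare, which is exactly the sense in which the phase differences act as almost conserved quantities.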
So now the prediction is that when I look at this discrete nonlinear Schrödinger equation at high temperatures, I will see the diffusive peaks, just as we have seen in the Bose-Hubbard model. But when I go to low temperatures, I mean not very low but low temperatures, then suddenly there is like an extra conserved quantity, these phase differences are conserved, and they produce for you sound peaks which are propagating, okay? And I think I now have, yes, here a numerical simulation. The origin is sitting somewhere over here, and here you see the sound peak propagating. Of course, this is just one particular simulation; one can analyze more precisely how this peak broadens, and, as you would expect from the theory, it broadens like t^{2/3}. And if you look at the shape, you find that it is very well approximated by what I called f_KPZ, this scaling function which has a decay somewhat faster than a Gaussian, okay? So the predictions we made for the classical system are also verified in this particular system. Okay, so anyway, I am a few minutes early; maybe I didn't explain all the details, but let me repeat a little bit what I was trying to say. What we would like to understand are equilibrium time correlations in non-integrable quantum spin chains, and there is no question that when I look at high temperatures, in particular at infinite temperatures, then I will just see the Gaussian peak.
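The t^{2/3} broadening mentioned above can in principle be checked by fitting the measured peak width against time on a log-log scale. A minimal sketch, with my own (hypothetical) function name, assuming widths have already been extracted from the simulated correlation peaks:

```python
import numpy as np

def broadening_exponent(times, widths):
    """Least-squares slope of log(width) versus log(time).

    KPZ scaling predicts a slope of 2/3; ordinary diffusion would give 1/2."""
    slope, _intercept = np.polyfit(np.log(times), np.log(widths), 1)
    return float(slope)
```

Comparing the rescaled peak shape against the tabulated KPZ scaling function f_KPZ would be the corresponding shape test.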
However, there is sort of a way out. For instance, one model which I think would be a good candidate: if I look at the XXZ chain with spin S different from one half, then it becomes non-integrable; or maybe I can take the spin-one-half chain but break integrability in a somewhat different way, say by introducing a next-nearest-neighbor coupling. Then of course I still have only the naive two conservation laws, but when I choose the parameters of the XXZ model such that I am in the easy-plane case, so the coupling is made in such a way that the spins basically want to lie in the plane orthogonal to the 3-direction, then this would be the candidate for having an extra, almost conserved quantity. Then KPZ theory should apply, and what you should see are propagating sound peaks, and if you are lucky, they should have the KPZ behavior. Now, Christian actually did simulations, and of course he does find structures which look like sound peaks, they are expanding in time, but they are not really sharply peaked. Maybe I should explain what we currently see in the numerical simulations. We do have here a reasonably well-defined cutoff which moves ballistically, so between -ct and +ct; I am plotting something like a density-density correlation, so out here it is zero, and then we have some peaks over here, but then we have, whatever, Friedel oscillations or some oscillations in the middle, and it looks like this. So we do have something which is ballistic, that's the first step, there is definitely a ballistic component, but this ballistic component so far cannot really be connected to a sharp sound peak, even less to a two-thirds exponent, and even less to the KPZ scaling function. So thank you very much.