OK, so I want to comment on three things that, as you can appreciate, I am not a string theorist, but that are related to other kinds of physics, and I think there are some points that will at least justify why I'm interested in the subject. So, number one: I wanted to find a way to understand simply the bound to chaos. As you know, classically there is no bound to chaos, so one would ask how h-bar enters into this game. So one way to introduce a measure of chaos that is quite traditional among quantum people is to do a Loschmidt echo experiment. What you do is you start somewhere, you evolve, and then if you came back you would come back to the same place, both quantum mechanically and classically. But if you go, then give a little kick and come back, the two trajectories, the outgoing and the incoming, are going to differ by a little bit, which will be amplified if there is chaos. This is what is called the Loschmidt echo setup. So you go, small kick, and come back. Now take any operator, measure it, and you want to see the difference between having done this circuit and not having done this circuit. So this is the trace of A squared and the trace of A times A transformed. This gives you a measure of how different the quantum or classical situations are before and after. And it has to do with chaos, and indeed you expect it to grow exponentially with time, independently of what the operators are, which enter only in the prefactor, and independently of the nature of the kick. This is already a nontrivial statement, the fact that you get an exponential that doesn't depend on anything except the system itself. This is something that has been known for, I don't know, some 15 years or so. And what is nice about it is that there are plenty of simulations and experiments, so we know that this regime, because of course it happens only in a window of time, we know that it exists. Yeah? How can it be completely independent of the operators? Oh, I can tell you why. Because the operator you're using, or let's say the operator with which you do the kick, is going to give you a kick in some direction, and then this is going to be amplified. Now, one thing we know about Lyapunov instability is that whatever the initial perturbation, in the end, because the system is chaotic, it samples every possible direction and it amplifies, at least if you only care about the exponent, in the same way. This is just the existence of the Lyapunov exponent, Pesin's theorem. What if you take the operator to be the identity operator? Definitely, it does depend on the operator; the regime in which it is independent of the operator, I mean the times at which this regime happens, will depend on the operator. But there is such a regime. That's the important point. I mean, if the system is chaotic, there is one. We're restricted, right? Oh, you will get the restriction in a second, I promise. So, okay, now let me expand this difference. Well, as usual, you expand: the B that was in the exponent comes down, this was a kick, blah, blah, blah. And then it takes a line to get to this formula, which is the Larkin-Ovchinnikov formula, the famous four-point function, okay? Yeah, on the previous slide. With the Loschmidt echo, actually, you get exponential decays, not exponential growths. No, no, no, here, yes. Oh, sorry, yeah. No, here what happens is that this is zero at equal times, the way I defined it. I agree with you.
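(For concreteness, here is a minimal reconstruction of the quantity under discussion and its one-line expansion, in my own notation and not the slide's; the kick strength ε and the kick operator B stand for whatever appeared there.)

```latex
% Go forward, give a small kick, come back; this transforms the operator A:
\tilde A = W^{\dagger} A\, W, \qquad
W = e^{iHt/\hbar}\, e^{i\epsilon B/\hbar}\, e^{-iHt/\hbar} = e^{i\epsilon B(t)/\hbar},
\qquad B(t) \equiv e^{iHt/\hbar} B\, e^{-iHt/\hbar} .
% Expanding to second order in \epsilon, the linear term cancels under the trace and
% the difference is the Larkin-Ovchinnikov squared commutator, which in a chaotic
% system grows exponentially:
\operatorname{Tr}\big(A^{2}\big) - \operatorname{Tr}\big(A\,\tilde A\big)
= -\frac{\epsilon^{2}}{2\hbar^{2}} \operatorname{Tr}\big([A, B(t)]^{2}\big) + O(\epsilon^{3})
\;\sim\; e^{2\lambda t} .
```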
In the Loschmidt echo calculation of Jalabert and Pastawski, you get a decay. But in this calculation, you get a growth. And look, when the times are equal, A transformed is A, and this is zero. But you do get an exponential growth, and you will see it in a minute. But I agree with you: the usual calculation of the Loschmidt echo is that you have a lump, you go, you come back, and then the overlap goes down. But the way I do it here, in any case, it takes a line to get from what I wrote to this, your old friend, okay? So, as was remarked, you want this to be localized. You cannot take the trace of just anything. So one way to do it, the most traditional one, is that the operator A is a lump, something that has support in a bounded region of phase space. The other way is to choose this: you take an operator A, anything smooth, but you multiply it left and right, because you want it to stay Hermitian, by this quarter power of the Boltzmann factor. And this is the choice that Maldacena, Shenker, and Stanford used. So if you plug this in here, essentially, you get their four-point function, up to two-point functions which are not interesting because they just subtract constants away, okay? So here you see the connection; I don't know, this has been implicit, as far as I remember, in Kitaev's talks. He mentions it, so I'm not being original here, but I think it's a nice way to see it. Sorry, can we go back to one of the precise connections with Loschmidt? I don't see it. So you take an operator A and transform it according to this transformation; you see what you're doing: you're going, evolving with the Hamiltonian, giving it a little kick, an evolution with B, and then coming back with minus the Hamiltonian. So if t is zero, and of course B is not there, or even if B is there but very small, A is almost the same as A transformed. If B and t are zero, clearly they are the same, so this subtraction is going to be zero. As t grows, this transformation, which is the Loschmidt transformation, is going to modify this operator A. And now I'm measuring the difference; this is just a kind of scalar product. I'm measuring the overlap between the untransformed thing and the transformed one. It's unconventional from the point of view of the Loschmidt echo, but it gives you directly Larkin-Ovchinnikov. I guess that their inspiration must have been something like that. Okay, so the bound, as you know, is that this Lyapunov exponent has to be smaller than or equal to this quantity. Okay, so as you know better than I, it's nice because it's a symptom, and a symptom of something: black holes have this symptom, the SYK model has this symptom. So that's the reason, I think, why it is interesting for you. Okay, this I don't need to tell you. And then the proof, the one that they gave, is a proof that you can follow. It's not hard, it's three or four lines, it uses elementary stuff. Still, my personal feeling when I read it is that I understand each line, but it doesn't appeal to me in a concrete way; but that's me. So I wanted to have a model where this could be understood, and to do semi-classics. So I know that I can have the Lyapunov exponent I like if I do classical mechanics. Okay, what happens when I turn on h-bar? So, okay, I want h-bar small, I want to do semi-classics. But if I make h-bar small, as you see from the bound, the quantity that matters is beta times h-bar.
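(Again for concreteness, a sketch of the regularized choice and of the bound, in the standard Maldacena-Shenker-Stanford conventions; my notation, with the thermal density matrix written explicitly.)

```latex
% The thermally smeared operator and the regularized four-point function it leads to:
A \;\longrightarrow\; \rho^{1/4} A\, \rho^{1/4}, \qquad \rho = \frac{e^{-\beta H}}{Z},
\qquad
F(t) = \operatorname{Tr}\big(\rho^{1/4} A\, \rho^{1/4} B(t)\, \rho^{1/4} A\, \rho^{1/4} B(t)\big) .
% The chaos bound on the growth rate of the associated squared commutator:
\lambda_{L} \;\le\; \frac{2\pi k_{B} T}{\hbar} = \frac{2\pi}{\beta \hbar} .
```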
So, evidently, the temperature has to be very low if I want to get something interesting. So I'm forced, if I want to do semi-classics, to go to very low temperature. Now, what happens when you go to very low temperature in a normal system? Well, typically there is a ground state, and then you're sitting at the bottom in the ground state and you're not doing much. So, of course, you don't have chaos, but it's not interesting. It's like a crystal; I mean, you crystallize your system. It just vibrates, you cannot do much more than that. So, you realize already that a potential like this is not going to be interesting. It's not going to violate the bound, but it's going to satisfy it in a trivial way at very low temperatures. So, what you need is a potential that is chaotic. This was mentioned by Dima a minute ago: you need a potential that has a flat bottom, so that you can still move and be chaotic, or have a chance of being chaotic, at very low temperatures. Potentials whose bottom is a flat manifold. This is traditional for solid state people, because when you want to confine electrons in a quantum wire, you do exactly that. You make a potential that forces you to be on the wire, and then along the wire you move freely, okay? So, this is very, very traditional, and the natural thing to do is to parametrize the bottom of the potential and then care only about that bottom and forget about the rest. So you write, in appropriate coordinates, the manifold, whatever it is, which we will suppose has more than two dimensions. And in the appropriate coordinates you get a Riemannian metric, and then you get a thing like this. This is just ordinary c-number quantum mechanics on a curved space. So, this is going to be our model. This has some chance of being interesting because even at very low temperature you can still move here, and hence you can go away and be chaotic. Otherwise... okay, so now you quantize this. This has a history, amongst other things: one of the most beautiful examples of quantum chaos is a two-dimensional hyperboloid with a particle moving freely on it. It has a very, very long tradition, starting with Gutzwiller. Okay, so how does it look, classically? Well, classically, as you probably know, when you have a surface like this and you have free particles moving on the surface, they move along geodesics. So, they take the geodesics. If you fire the same particle with twice the speed, it will move along the same geodesic, but faster. Okay, so this is what it is. And then the question immediately is: okay, fine, and now I can fire it as fast as I want, and then I can get the Lyapunov exponent I like, because trajectories, if I move faster, will separate faster. These two trajectories in space, they are drawn on the space; I can traverse them at different velocities. Okay, if I do it very quickly, then this separation is going to grow very fast. And this is the Lyapunov exponent. So, what stops me from going very fast and having the Lyapunov exponent I like? Classically, nothing. So, here goes the semi-classical calculation I did. You want to calculate this four-point function, the one that came from the Loschmidt echo if you like. Well, you want to calculate it in the semi-classical approximation. So, what you do is you plug in the van Vleck formula for the propagators. It turns out that these pieces, the ones that contain the temperature, are diffusion, simple diffusion on this surface.
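(The model being quantized, written out; my reconstruction in standard notation.)

```latex
% A particle moving freely on an N-dimensional curved manifold with metric g_{ij}(x):
% classically it follows geodesics; quantum mechanically it is governed by the
% Laplace-Beltrami operator,
H = -\frac{\hbar^{2}}{2m}\,\Delta_{g}
  = -\frac{\hbar^{2}}{2m}\,\frac{1}{\sqrt{g}}\,\partial_{i}\big(\sqrt{g}\, g^{ij} \partial_{j}\big),
% and the thermal factors e^{-\beta H/4} in the regularized correlator are short
% imaginary-time kernels of this same operator, i.e. brief episodes of diffusion on
% the manifold.
```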
You can convince yourself that this operator, the free Laplacian on a surface, is just diffusion, but these are very short episodes of diffusion, the way it is scaled, and this is what you get: four times the same trajectory, two forward and two back, punctuated by four short diffusion episodes. You do the calculation; it's tedious but not hard. And what you get is what I'm going to describe without the calculation. So, without the calculation, you can see what you're going to get very easily. The geodesic deviation written in space, the separation of two trajectories, is given by an equation like this. It's the separation. This I took from a textbook on gravitation. Anyway, the separation is a function of Riemann's tensor. Is that from the textbook of that name? No, I pirated it from the web, so I cannot tell you exactly where it comes from. Yeah, okay, so this is Riemann's tensor, it doesn't matter. What you get is that the separation between the trajectories grows exponentially. So there are, if you want, railways drawn on this manifold that separate exponentially with a characteristic length. What I want to say here, and it is important, is that if you do this in n dimensions, this length scales with the square root of n. You can convince yourself, but the reason is very simple. A Lyapunov exponent is an intensive quantity, not an extensive quantity. Each coordinate expands, and it expands in the same way, and what you're taking, essentially, is the maximum of the expansion over all of them. So, because the separation is divided by the original one, the fact that there are n dimensions doesn't change anything. What does change is that the distance you move to get the same separation is the combined distance in all the coordinates. This is where you get the other square root of n. Believe me, it's true; you have to draw a picture and then you convince yourself. But don't you need negative curvature for this? Oh, yeah, yeah, yeah. If you want to have a Lyapunov exponent, I should have said this, you do need that somewhere in your space there are bumps with negative curvature. Yes, absolutely. If not, it's going to be true but boring. Okay, so if you want the Lyapunov exponent, you have to take the speed and divide it by this separation length. You're taking the two railways with the faster moving trains. And then it's easy to see that this is what your Lyapunov exponent is. It scales correctly with size. And then you begin to compare. Now, this is what enters the semi-classical calculation: you want to compare this length with the de Broglie length in this phase space. Now, the de Broglie length is given for one particle, so if I want it for n particles, I have to use Pythagoras's theorem. And this is where this square root of n comes from. The fact is, let me compare these two quantities, just to see when quantum effects kick in. So I compare, I compare, I compare; I just put the things where they are. All the two pi's I'm losing on my way, so I have nothing to say about the two pi's. But when you get to the end, this is the comparison that counts. So for this system; and if you look at the semi-classical calculation, you see this properly. But from the physics point of view, there is not much more than what I'm telling you now: it is this comparison. And the meaning of the Lyapunov exponent is then very clear.
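(My rough bookkeeping of the comparison, with all factors of two pi and other order-one constants dropped, as in the talk.)

```latex
% Geodesic deviation (the Jacobi equation from the gravitation textbook) gives an
% exponential separation with a characteristic length L_0 set by the negative
% curvature, and L_0 grows like sqrt(N) in N dimensions:
\frac{D^{2}\xi^{\mu}}{ds^{2}} = -R^{\mu}{}_{\alpha\nu\beta}\, u^{\alpha} \xi^{\nu} u^{\beta}
\quad\Rightarrow\quad
|\xi(s)| \sim e^{s/L_{0}}, \qquad \lambda \sim \frac{v}{L_{0}}, \qquad L_{0}\propto\sqrt{N} .
% The de Broglie length of the N coordinates combined (Pythagoras) carries the same
% sqrt(N), so the bounded quantity is the intensive ratio
\beta\hbar\lambda \;\sim\; \frac{\lambda_{\mathrm{dB}}}{L_{0}} ,
% i.e. wavelength over scattering length: the bound says a wave front cannot be
% scattered over a distance much shorter than its wavelength.
```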
You have trajectories written in this space that separate with a characteristic distance given by this L0, which typically has something to do with the negative curvature that you have; it's that kind of scale. And you have to compare it with the de Broglie length in this phase space. And the meaning of the bound is simply that you cannot scatter a wave front over a distance that is much shorter than the wavelength, which to my mind sounds perfectly reasonable. So at least, I don't know, I'm not saying that this wasn't implicit in the literature. It was, but I think that it's a good way to see it, a nice way to see it. So can you clarify better the square root of n? Yeah. Of course, say I take a Sinai billiard, not in two dimensions, but in three or four or five hundred dimensions. Then I would say that my collision time, the typical collision at which I start to deviate, will still be the size of the box divided by the speed; the dimension will not enter there. Put it this way. You want to convince yourself that the Lyapunov exponent, the maximum one, we're talking about the maximum. There's also a notion of the whole spectrum of exponents. No, no, no, no, no. You mean the sum of all of them? I know this, but no, I want the maximum. So the maximum: you want to say that it scales like one, not like n, which is something that has not been proven, but we all believe it. Well, there are cases where it's not even true, but mostly we all believe it. In order for that to be compatible with trajectories being drawn in space and traversed at a given speed, you can convince yourself that in space this length has to scale with the square root of n. What's the assumption on how the curvature scales? I mean, you're changing from one manifold to another. Sure, sure, sure. But my curvature, I keep it as a parameter, and you will see it in a second. I'm not saying anything for the moment about how it scales. Is it the Riemann tensor or the scalar curvature? If you trace, of course, you're going to get the scalar curvature. I keep it of order one per degree of freedom. The curvature length, I keep it of order one per degree of freedom. Yeah, so it's the scale of the Riemann tensor. Yeah. So what is it at the end of the day? What does this mean? Well, this is the semi-classical computation. I put units into this. So on this side, I put the quantity that is bounded: h-bar beta times the Lyapunov exponent. On this side, I put 1 over the de Broglie length. So here I'm very classical, and I'm getting more quantum as I move to here. And this line, which goes like 1 over the de Broglie length, is just saying that your quantum Lyapunov exponent is equal to the classical Lyapunov exponent, which is fine in the semi-classical limit. This is what my semi-classical calculation confirms; it's not a surprise. Then you go up, and then typically this breaks down the moment that this length becomes comparable with either the curvature length or the typical Lyapunov length; they are proportional. So once you get here, your wavelength is of the order of the features of your potential, and then you can no longer use the semi-classical approximation. It's no longer valid. And what happens? Well, I don't know exactly what happens. One of the things that was pointed out to me by Levitov is that localization effects appear, so the image of a wave front no longer makes any sense. I am not even sure that here the notion of a Lyapunov exponent exists; it's not that the Lyapunov exponent is small, maybe it doesn't even exist. For a Lyapunov exponent to exist, I need an exponential in time.
And I cannot even, I don't even know whether there is one. My feeling is, and this is what happens in SYK, that to get something interesting down to zero temperature, you need your system to have, hierarchically, all the possible lengths; in other words, it has to be a critical system. You can think of critical opalescence: when the system is critical, it scatters light at all frequencies, and this is why the liquid looks white. So I think, but this is just blah, blah, that you need to have a complete hierarchy of lengths, and hence be critical, in order to take advantage of all the possible lengths as you go to zero temperature. But that's OK. Yeah. So does that contradict the idea that if you have a non-extremal black hole, you still expect to have this bound, right? Sorry, on the black hole side I cannot tell you anything. I think the bound for chaos works. The bound for chaos is true always, and it's saturated for any black hole, so you don't need to have an extremal black hole. Whereas the non-trivial infrared conformal behavior, sort of, only works in the near-extremal case. OK. OK. So what you're saying, if I translate it into my language, is that maybe criticality is not necessary. I don't know what happens here, to tell you the truth. The only thing is that this curve, which comes up very nicely like 1 over T, has to stop; it might also turn around. But one thing that is interesting is that it can become of order 1 without reaching the bound, which means that even if in a model or in a system you find something of order 1, it is not necessarily the bound. Don't be so happy about it; it's sort of normal, to my mind. OK. Let me now pass to another thing, if you want to ask some other question... I'm going to make a slight intermezzo on glass theory, and then I'm going to come back to something like this. So please stop me if needed. So, a little bit of prehistory, very little, just to be able to justify what I'm going to say at the end. So this family of models, you recognize: this is a Gaussian random coupling. These are bosons now; these are c-numbers. And because they are c-numbers, you have to constrain them somehow, otherwise this diverges. So one traditional way is to make these variables be plus or minus 1, constrain them to that. You could add a quartic term or some power to keep things bounded. Or, more nicely, you can make them live on a sphere. Putting them on a sphere is something that takes you very, very close to the study of tensor models, because it's a recipe whereby you do not break rotational invariance. So it's interesting. But this tensor, unlike the ones we heard about yesterday, is symmetric; and if it were not, only the symmetric part would count in any case. And the quantum version, well, you see, it's no big deal, has also been studied. This is the 80s. Why is it interesting? Because, surprisingly enough, these models are excellent metaphors for what glasses are. They give you a kind of mean-field theory of glasses. Glasses are made of particles; you don't see any particles here, and there are no particles here. But believe me, and it took twenty-something years to be really convinced, these are baby glasses, which you can beat to death. And we know a lot about them. OK, what do they do? The quantum one, this is the phase diagram, h-bar versus T. So there is what I would call a liquid phase; you would probably call it a metallic phase. And there is a glass phase. What do I mean by glass phase? I mean that if I start in a random configuration and put the system in contact with a thermal bath at a temperature here, it will never equilibrate.
It will take an infinite time to equilibrate; infinite, in the thermodynamic limit. This is the definition of a glass. In the quantum case, how do you bound the variables? Because there was no constraint there. In the quantum case, you still put them on a big sphere. OK, but the Hamiltonian was not on the sphere. Sorry, sorry, I should have added that. Yeah, or of course you can add a potential term that bounds you there, the constraint squared. Yeah, yeah, sure, sure, you need it. OK, how do I know that it never equilibrates? Because I write a correlation function, and the correlation function, instead of tending to something, I plot it with different starting times, and as time passes the correlation time gets longer and longer, and it never stabilizes. So if I start the correlation after an hour, it takes an hour to decay. If I start it after 10 hours, it takes 10 hours to decay, and so on and so forth. This is called aging. It is just telling you that the system is getting more and more correlated with time, in a never-ending process. This is this region. There are many states which are almost degenerate; you will have plenty of that in a few minutes. But what I want to say about this is that when we solve this, we could solve this part easily because there is a fast part, just like in SYK. And then this slow part has the famous pseudo-reparametrization invariance. It is as pseudo as SYK's is pseudo. And the difference is that the parameter is not the temperature; the parameter is the age: at what time have you started your experiment? Very quickly, just to tell you that this for us was an enemy and not a friend, because we were never able to do the matching, to find which is the good reparametrization that matches the other regime. It is a formidably hard problem. I am reading your papers because I want to see if I can steal some techniques from you to do it. So, the way I understand it, Sachdev wants to have not a spin glass... I'm coming to this in a second. I am coming to this in a second. Yeah, indeed, this is bad for you and bad for Sachdev and bad for Antoine Georges and so on. These people wanted to have something that was still a liquid at zero temperature, a metal at zero temperature. And so this is bad; you don't want this, because in order to profit more from the quantum nature, you want this phase not to be there. So this is not a model that will interest you. I'm telling you this because, from the prehistory point of view, it explains in part why I'm here. We know a lot of things about this reparametrization. And it's not exactly the same: this one is in real time, yours is in imaginary time; it has dimension zero, for example, not 1 over q or whatever. But we know some things about this one that you don't see in your literature, and we do not know very many other things. And it's interesting for us. A sigma model of the reparametrizations was attempted. Now that I have read the Schwarzian part of your literature, I believe that it was incomplete. But it was a very nice try, because this reparametrization has a physical meaning and plays a physical role here. OK, this is very quick, just so that you know, so that you keep it in mind; maybe sometime you will have more time for it. Now, a model. As you see, this model is not nice, for the reason that was pointed out. So how do I make a model? Can I make a model with these glassy things that is at least a cousin of SYK, a bosonic cousin of SYK? So here is my model. You have to construct it by levels.
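(For reference, the bosonic "Hamiltonian from a minute ago" in standard p-spin notation; this is my reconstruction, and the normalization may differ from the slides.)

```latex
% Spherical p-spin model: real (c-number) spins, Gaussian random couplings, and a
% spherical constraint playing the role of the bounding potential.
H_{J}[s] = \sum_{i_{1}<\dots<i_{p}} J_{i_{1}\dots i_{p}}\, s_{i_{1}} \cdots s_{i_{p}},
\qquad
\overline{J_{i_{1}\dots i_{p}}^{2}} = \frac{p!}{2 N^{p-1}},
\qquad
\sum_{i=1}^{N} s_{i}^{2} = N .
```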
So you start with the Hamiltonian I wrote a minute ago; let's put it on the sphere to make it easy. And I consider stochastic dynamics for this. Stochastic dynamics means that I do gradient descent plus a little bit of noise. If the system were not a glass, or where it is not a glass, this allows you to go to the Gibbs-Boltzmann measure. And this has been studied ad nauseam, because this is the dynamics we use when we want to show that the system is a glass in this region. Solving this dynamics analytically for this model, the classical version in this case, though you can do the quantum one too, shows you that inside this region the system never thermalizes. Never. So that's, for the moment, not what we want. So, because you have a Langevin equation, you know that the probability, this is classical but noisy, is a probability cloud that evolves in phase space. But please do not confuse this with a quantum wave function. This has nothing to do with wave functions. This is just your ordinary Fokker-Planck equation for a probability distribution evolving in phase space. For the moment, I haven't got the model I want to use. Is there a symplectic form in front of the...? Not necessarily. I didn't put the p squared; you could have put it, and then you would have that. Just for the Hamiltonian evolution, is this really gradient descent? This is really gradient descent. You could do it the other way too; you could do both. But yeah, in this case both have been studied. Both have been studied. Yeah, it would change things, but for what I am doing, it's better to keep it first order; you will see why. OK, so this is the evolution. Now, one thing that perhaps you've seen, it's completely standard, but you may have to take my word for it: this is a differential equation, a Fokker-Planck equation, which gives you the evolution. This and this are completely equivalent. This L is the operator that makes your probability evolve according to the ensemble of these trajectories. So if I write P... sorry, if I write... now the question is, what is the form of L? Well, the form of L is what it is; I can write it for you, no problem. But what you should know now, maybe if you haven't seen it, it's not immediately obvious, is that if you change basis with this transformation, this operator can be written like that, Hermitian. And this is the Hamiltonian, the Hamiltonian that I define, by fiat, as my quantum Hamiltonian, which has nothing to do with the quantum version of this model. It is taking the Fokker-Planck operator as a quantum Hamiltonian by decree. Why do I do this? Because I will try to convince you that it has a few properties that are what we are looking for, OK? But the thing is... And the conjugate variables? The conjugate variables are eta and the derivative with respect to eta. So this is... Yes. You've got this vector field, which is your gradient flow. Yeah. Is your L just a Lie derivative along the gradient? I guess so, yeah, but there is noise; I don't know that the Lie derivative has that. There is some noise that is added in. Added in with the noise, yes. But you're just preserving probability. You're just preserving probability, indeed. So when you change basis, you get this Hermitian thing. And now notice the temperature; why do I call the temperature T_s? Because it's not going to be the temperature of my model. This is now a mere auxiliary parameter.
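(Written out, the chain just described; a sketch in the usual Fokker-Planck conventions, which may differ from the slides by signs and factors.)

```latex
% Gradient descent plus noise at the auxiliary temperature T_s:
\dot s_{i} = -\,\partial_{i} H_{J}[s] + \eta_{i}(t), \qquad
\langle \eta_{i}(t)\,\eta_{j}(t') \rangle = 2 T_{s}\, \delta_{ij}\, \delta(t - t') .
% Fokker-Planck evolution of the probability cloud, and the change of basis
% P = e^{-H_J/2T_s} \psi that makes the generator Hermitian:
\partial_{t} P = \sum_{i} \partial_{i}\big( T_{s}\, \partial_{i} P + P\, \partial_{i} H_{J} \big),
\qquad
\partial_{t} \psi = -\hat H\, \psi, \qquad
\hat H = \sum_{i} \Big[ -T_{s}\, \partial_{i}^{2}
  + \frac{(\partial_{i} H_{J})^{2}}{4 T_{s}} - \frac{1}{2}\, \partial_{i}^{2} H_{J} \Big] .
% \hat H is non-negative, its zero mode psi_0 ~ e^{-H_J/2T_s} is the Boltzmann state,
% and T_s multiplies the Laplacian exactly where hbar^2 would sit.
```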
But this mere auxiliary parameter, as you can see from here, plays the role of h-bar. There is no h-bar in this problem, of course; it's an artificial, completely artificial thing, OK? And this is something you will recognize: if I had put two fermions here, a-dagger-i a-j, you would recognize this as the supersymmetric quantum mechanics of the beginning of the 80s. It's a zero-dimensional N = 2 supersymmetry; it is a well-known fact that stochastic equations are directly related to supersymmetric quantum mechanics. There's nothing new here. So now bear with me. Let us consider this model with the H that is here. So this is not the quantum version of that model; this is a very artificial thing I did. And the evolution under this law of this model is something like the imaginary-time evolution with this Hamiltonian. As for the real-time evolution with this Hamiltonian, I have no connection to anything. I could call it, perhaps, diffusion with imaginary noise, but that doesn't sound very good. Are you restricted to the sphere? No, I'm not restricted, in principle. The variables are restricted because H contains some term that keeps you on the sphere, because of the constraint. Yeah, let's say that you have a term that is strong and that keeps you stuck on the sphere, and you include it in H. OK, so now, what about this model? This model, we know everything about. That's the nice thing. Just to be sure: this is positive? This is positive definite indeed; well, it has a zero eigenvalue and is otherwise non-negative. It has a zero. It has a normalizable ground state. Indeed, because the ground state is the Boltzmann distribution for the original model. So it is the lowest you can get; it means that it doesn't evolve anymore, it's zero. Which is an old statement: when you map diffusion onto supersymmetric quantum mechanics, to say that the system equilibrates and has an equilibrium state is the same as saying that the supersymmetry is unbroken. OK, so now, what do I know? Well, my original model, the one with the J's, and it would be very, very nice if we could do with it the things that you do, but we know a lot about it. It has a landscape that is as follows. It has exponentially many minima of the energy; you can count them. The lower you are in energy, the more stable they are, which is sort of reasonable; they are a little more rounded. As you go up, you meet on your way more and more and more minima. This growth is exponential; they proliferate exponentially up to a threshold level, at which they are all marginal. And above that level, there are no longer any minima. There are saddle points, but no minima. Almost all of the levels are very close to the threshold level, because, I guess you know this, when something grows exponentially, the crust takes essentially all the volume. So, I don't know, I say I guess you know this because whenever I hear about the surface of a black hole I think of this, but I don't know if it has any significance. OK, now, what you know from the theory of stochastic processes is that when I consider my quantum, within quotation marks, Hamiltonian, there is a one-to-one correspondence between the states, now quantum states, the eigenstates of the artificial quantum Hamiltonian, and the minima, with their fluctuations, of the original landscape. In particular, the lowest of the lowest is the equilibrium one. And all these eigenvectors are related to the phases, the many phases, of the original system.
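(As an aside, a minimal numerical sketch, my own illustration and not the speaker's code, of the classical side of this story: gradient descent plus noise on the sphere for a p = 3 model. The coupling normalization is only schematic and a hard projection stands in for the spherical constraint; at small T_s the energy per spin keeps drifting slowly downward instead of equilibrating, which is the trapping in this landscape described above.)

```python
import itertools
import math
import numpy as np

# Langevin (gradient descent + noise) dynamics of a spherical p = 3 spin model
#     H_J[s] = sum_{ijk} J_ijk s_i s_j s_k,   sum_i s_i^2 = N.
# The einsum strings below assume p = 3.

rng = np.random.default_rng(0)
N, p, T_s, dt, n_steps = 32, 3, 0.1, 0.005, 10000

# Gaussian couplings of order 1/N^{(p-1)/2} (so the low-lying energy is extensive),
# fully symmetrized; exact p-spin prefactors are not important for this sketch.
J = rng.normal(0.0, 1.0 / N ** ((p - 1) / 2), size=(N,) * p)
J = sum(np.transpose(J, perm) for perm in itertools.permutations(range(p))) / math.factorial(p)

def energy(s):
    return np.einsum("ijk,i,j,k->", J, s, s, s)

def gradient(s):
    # dH/ds_i = p * sum_{jk} J_ijk s_j s_k for a fully symmetric tensor
    return p * np.einsum("ijk,j,k->i", J, s, s)

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)            # random starting point on the sphere

for step in range(n_steps + 1):
    if step % 1000 == 0:
        print(f"t = {step * dt:7.2f}   energy per spin = {energy(s) / N:+.4f}")
    s = s - gradient(s) * dt + np.sqrt(2.0 * T_s * dt) * rng.normal(size=N)
    s *= np.sqrt(N) / np.linalg.norm(s)        # project back onto the sphere
```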
Now, one thing I didn't tell you is that between these states the barriers are exponentially large, except for the states that are very near the threshold, which have lower ones. You can think that you have to go up to the threshold and down again to go from one to the other. And so the eigenvalues here, you can prove, are one over the lifetimes of these phases. Because these phases are stable in the thermodynamic limit, their barriers are of order n, it means that there is ground-state degeneracy: there are many, many, many eigenvalues that are extremely close to zero, because their lifetime is exponential in n and the eigenvalue is one over the lifetime. So you have eigenvalues, and this reminds me a lot of the last transparency of Dima's talk: you have a lot of things that collapse. Although there is no true degeneracy, the spectrum is made of levels that are, to first order in the thermodynamic limit, degenerate. And what about this? There is a degeneracy: every state not at zero has a partner, at least one. What does that degeneracy mean? Ah, sorry, I should have said. In the full supersymmetric picture, what you say is absolutely true, and this degeneracy gives you Witten's construction of Morse theory. But here I am confined to the zero-fermion subspace, because this is what relates to the diffusion. But you are free, given that I'm taking the liberty of calling this quantum, you are perfectly free to take all the fermion subspaces. That I'm not describing now, but, ah... What information sits there? The entire Morse theory. So to go from one state to its partner is to go from a minimum to a saddle, and then you get all the Morse inequalities and everything. It's very good. The full supersymmetric quantum mechanics; but there are no fermions in this theory. Here, there are no fermions because I remain in the zero-fermion subspace, but you could put them in. You have them because...? No, no, no: because the total number of fermions is conserved, I'm free to look at the matrix block that contains zero fermions. That's the one that is stochastic. In a previous life, I studied the stochastic dynamics in the other fermion subspaces to obtain a method to find barriers, but that's another story. OK. So now I think I can tell you why I say I know a lot, but really a lot, about this spectrum. Because I know almost everything about the distribution of minima and barriers here. I can tell you how many of these go like n to the something, how many of these go like e to the minus n, just what Dima was saying a while ago. I can also tell you how much they hybridize, because the barrier gives you an idea of how you tunnel from one to the other. Do they have random matrix statistics? Not here, surely not. Not unless you go to level separations that are exponentially small in the size. There is this trade-off between the size and the level differences. We didn't actually do the calculation of the level statistics, but I'm sure one can do it, because we know pretty much everything about this. One more thing that we did calculate is now the bad news, or the more or less bad news. So notice that the system doesn't move in this landscape; it moves in a landscape that is more like this one: it's the gradient squared, this is why it's positive, plus this correction term. That's the effective potential of your system. And, okay, so now you can say: okay, let me take this model seriously and calculate the partition function.
The partition function: you can calculate the specific heat, and it gives you a power law, a non-trivial power law, but unfortunately the exponent is not one, the one you would like. The fermionic nature that we don't have here in this version is the one responsible for the exponent one; here you get 1.5. But it is a non-trivial exponent. It has a zero-temperature critical point, in the sense that nothing happens before: when you do real-time dynamics on this, there is no phase transition all the way down to zero, just like in SYK. It has, of course, a ground-state entropy, which is nothing but the log of the number of metastable states that live forever. This is what glass people call the complexity. Because all the metastable states that live forever sit at the bottom of the spectrum, the eigenvalue being one over the lifetime, counting them is counting the everlasting metastable states. That's the zero-temperature entropy. And if you want the entropy at a given temperature, it is the log of the number of metastable states that have a lifetime of the order of the inverse of that temperature. Okay, so we are playing with this artificial construction because we know things about it. Now, we tried the real-time dynamics, with an i. Remember that the relation with the diffusion in the original model came through a Langevin process, and this mapped to a Fokker-Planck equation, which is naturally an imaginary-time thing. If I put an i in the dynamics, this corresponds to nothing physical, but you can calculate it. At least for q equals two, what we get is that there is a time scale that goes like one over the temperature squared, and not one over the temperature. So this time scale goes to infinity as the temperature goes to zero. You know that the time scale of SYK goes like beta; this one goes like beta squared, at least for q equals two. We are in the process of calculating it for other q's, but we haven't done that yet. The equations you get to solve this in real time are very similar to the ones of SYK, except that of course the correlation functions are symmetric, these are bosons, and except that you have three order parameters instead of one. Instead of having one G, you have three G's, which are related through an internal symmetry, which has to do with the supersymmetry of the original problem. We have not yet calculated the Lyapunov exponent, but it's doable and we're doing it, just to see what it gives. So for the moment, the only thing I can say is that you can understand very well the zero-temperature subspace and what its structure is. When we have a Lyapunov exponent, if we're lucky enough that it is as large as it can be, that would be nice, but we will find out. So, just to go back to the original story, you see that the fact that you need a well that is flat at the bottom for bosonic systems is realized in this model, here coming from, let's say, the supersymmetry of the model. And that's all. Okay. So why am I interested in these things? Well, first of all, simple-minded elementary results. I think that many of these bounds are elementary results that should be explainable in an undergraduate course, and I think that they should be written that way. Not the fact of saturating them, but the fact that the bounds exist should be undergraduate stuff, and one has to work to make it such. Then, glassy ideas applied to these field-theoretical systems; I don't know, but maybe something of interest.
These models come from the world of spin glasses. And above all, I want to learn some of the techniques and the things that you use. So thank you very much. Yes, so this T_s, the temperature that entered the Langevin equation: it becomes the h-bar of the new model. It will become h-bar; but, more importantly, what is its relation with the parameters and the distribution of the J's? Nothing. You have your J's and then you apply your Langevin equation at a given temperature. Okay, so the J's are fixed once and for all. Yes, and then, if you want, you can say that you tune their common amplitude, but you can equally say that you just change the temperature. So the original Hamiltonian H that you wrote for the q... does not contain... it's not disordered? It is disordered, the J's are random variables. They are random variables, but their distribution is unrelated to the... It's unrelated. So, we heard yesterday about the melonic models and so on. Could the large-N limit be solved like the melonic Hamiltonians? Well, the melonic case, and this is something that was pointed out, has J's with, I would put it, if you want, three kinds of fields, one talking to each index. It's not the same. So that tensor need not be symmetric, and that's where you can do all this beautiful classification. If I understand correctly, one could aim at doing it for symmetric ones, which is the case relevant for us, but it's harder. There are plenty of reasons to want to know more about these models, because they are also toy models of complexity, of, like, algorithmic complexity. So if we knew more, we would be more than happy, and it would have many applications. The mathematicians know about these things, but doing this with rigor takes, as you know, a long time. And the second question: can you elaborate on the connection between, on the one hand, the flat bottom you showed, and, on the other hand, your discrete set of vacua, which seem to have nothing to do with the flat directions? No, you're right. So what is the connection? Well, yes. Let me put it this way. It's as if your potential were the one I drew: it has many minima near the threshold, a few deeper, and a few deeper still. Now, this was the original landscape. Essentially, the potential of the, let's say, artificial system is something like the gradient squared plus a small correction. So now, for every minimum of the original potential there is a zero, and in between minima it is not zero. However, most of the minima, because of the exponential growth, live very close to the threshold, and there the barriers are small; they get smaller as you go up, so near the threshold they are of order one. So some of these bumps between zero and zero are of order one, which is not the same as being zero, but it means that with some temperature you can move. And this structure is responsible, I believe, for the fact that we don't get beta, we get beta squared, as the relaxation time of the system. Gary? If I understand it, your first construction, of going along this path and then back, together with a kick, was based on a purely classical definition of chaos. Yeah. I mean, what happens is that when you do the semi-classical calculation, you discover that, in the region where the semi-classical approximation works, what you need to compute is four times the same trajectory, back and forth: once to go, once to come back, once to go, once to come back.
And it's the perturbations around this four-fold trajectory that give you the commutator. If you want larger h-bar, then of course... But my question is slightly different. So, for any Hamiltonian system, the construction works? The construction works; what you're not guaranteed is that you do have a Lyapunov exponent. This commutator could give you zero. Now, I like to think of quantum mechanics as classical mechanics: as long as you have a finite-dimensional Hilbert space, you have a Hamiltonian flow on complex projective space, and so you can translate all of the equations into the language of Hamiltonian mechanics. It's a very particular type of Hamiltonian mechanics. So this makes me wonder whether, if I used the purely classical language consistently, I would arrive at the same construction. Possibly, but somewhere h-bar has to creep in and tell you: listen, I'm not allowing you to have the Lyapunov exponent that you want. The h-bar would creep in when you start talking about temperature, I think. Because I'm only interested in the evolution of the states modulo phases; I'm not concerned with their phases. Okay. But, sorry, I'm a bit confused, because I thought that, of course, in quantum mechanics, if you look at the evolution of the state, you never see chaos, obviously, because the evolution equation is the Schrodinger equation, which is linear. Yeah. But I think that this is why, as physicists, the nicest construction is the Loschmidt construction, which you can understand. And then, I'm not sure I understand your point. Well, maybe I'm missing a... Well, that's remarkable, isn't it? It may actually mean that you can't get, using the Hamiltonian language, to this picture of Loschmidt. We will get to that, okay? Come on, Sam. Again? Is it possible to see that... the Lyapunov exponential growth is asymptotic, but is it possible to see in these systems whether one actually gets to a time scale where the Lyapunov growth exists? Or whether one is dominated by situations where the system is complex, but one never gets to this regime in particular. Well, this is something that we should be able to provide one day, I mean, calculating it and seeing. I think there is every reason why you can throw away the time derivative and do part of the same analysis. Only that it's not exactly the same: the correlations are bosonic and so on. But you do have things that relax more and more slowly. And the trick of throwing the time derivative away is, in glass theory, thirty years old, and we know a lot, and we also have terrible gaps in this knowledge. But it is something that is important for us, because it tells us something very physical about glasses, which I can tell you about some other day. Okay, again?