OK. So I think we are ready for the last talk of the day, which is given by Silvia Pappalardi, who is a junior professor at Cologne University. She will go deeper into these free probability issues and explain how this is connected to notions like the eigenstate thermalization hypothesis and equilibration in quantum systems. So please, Silvia.

OK. So first of all, thanks a lot to the organizers for being brave enough to put a quantum session in a machine learning conference, and to our chair, Valentina Ros, who was behind that. Today I'm going to talk about two works that I've been doing with collaborators in Paris, Laura Foini and Jorge Kurchan, and with Felix Fritzsch and Tomaž Prosen in Ljubljana, in which we found that free probability, this beautiful framework that Ludwig just explained to us, applies to the eigenstate thermalization hypothesis.

So what is this strange hypothesis that we are interested in? Well, you can think about it in two ways: either as our way to understand quantum statistical mechanics and equilibration, which is how we like to think about it, or as a statement about structured random matrices, which I believe might resonate more with you. The first part of the talk is going to be more on the physics, and on why structured random matrices actually matter for the eigenstate thermalization hypothesis. I'm going to present a toy model, a hyper-simplified version with full random matrices, which hopefully will be quite clear to you. Then I'm going to tell you the point of my talk, which is that the general version of ETH, the eigenstate thermalization hypothesis, needs correlations. I believe all of you here are quite interested in correlations, and what we found, and were quite surprised and happy about, is that free probability is a framework which simplifies immensely the description of correlations in random matrices and in ETH. So this is basically what I'm going to do.

OK, so let's introduce quantum dynamics. Since it's late, it's going to be a cartoon. The standard problem one is interested in is an out-of-equilibrium problem: one has an out-of-equilibrium initial state, like a drop of ink in a glass of water, which then evolves in isolation from the environment up to a stationary value, described by standard statistical mechanics. In the quantum problem the idea is exactly the same, but the initial state is a pure wave function psi, like some bunch of polarized spins, and the evolution in isolation from the environment is the Schrödinger equation determined by the Hamiltonian of the system, which brings the system out of equilibrium, developing some local correlations, until it reaches a stationary state.

Now, in the past 20 or 30 years it has become clear that to understand this out-of-equilibrium relaxation process, you should focus on local observables, like the density of the blue particles in the example I was showing you before. And when I say local, I mean some local density, some observable, some operator whose support is only a small local part of the system, not the whole Hilbert space.
So the typical protocol one has in mind: one computes the expectation value at time t of this operator in the initial wave function. This starts from its initial value, then undergoes some oscillations and attains a value which we would like to describe with standard statistical mechanics. The eigenstate thermalization hypothesis is a way to describe this process, and this was really the main motivation for introducing it. But we would also like a theory that describes dynamics at equilibrium, after the state has already equilibrated, and that is what this talk is about. So I'm not going to talk about the approach to equilibrium, but about how to describe equilibrium itself in a many-body system.

OK, so this is the goal. Before that, I wanted to say that this issue of studying thermalization is important not only from the fundamental perspective. In recent years the field of quantum technologies and quantum simulation has received an unprecedented boost due to developments in technology, which now allow probing unprecedented timescales of quantum evolution in different sorts of platforms. These developments have made questions like how equilibrium arises from unitary evolution, or how to describe quantum states out of equilibrium, really important and pressing matters, not only for, let's say, fundamental questions.

OK, so to describe this issue, as we were saying, we want to look at some observable at time t, in the Heisenberg picture, so the Hamiltonian selects the eigenbasis in which you look at your observable. As you see, in this Heisenberg picture there are two elements: the oscillations given by the spectrum of the Hamiltonian, and the matrix elements of the observable you are considering in the energy eigenbasis. Throughout this talk, and this is the essence of ETH, the question is how to describe these matrix elements of a given operator in a statistical way, so as to retrieve quantum statistical mechanics.

OK, so this is the object; now let me simplify it and tell you why you can think about it as a random matrix. The idea of this toy model, this oversimplified version, amounts basically to thinking about a diagonal operator, as Sean was saying before, looked at in a random basis. One difference from what Ludwig said: the dimension of my matrix is d, not to be confused with n. So I have a d-by-d matrix, and I look at the matrix elements in the eigenbasis of a random unitary matrix. These indices i and j correspond to positions in this matrix. I can take averages over the random unitary, and for reasons similar to the last slide of Ludwig's, what we find is that the diagonal part of this matrix has an expectation value which is the same for all matrix elements: it depends only on the eigenvalues of the operator, and I'm going to call it kappa one. Then, if I compute the expectation value of a matrix element with two different indices, this typically vanishes. But if I compute the expectation value of a product of two of them around a loop, this has a finite expectation value.
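To collect in formulas the objects just introduced (a reconstruction from the narration; the sign and normalization conventions are my assumptions, not from the slides): the time-dependent expectation value expands in the energy eigenbasis as

$$\langle\psi|A(t)|\psi\rangle=\sum_{i,j}c_i^{*}c_j\,e^{i(E_i-E_j)t}A_{ij},\qquad c_i=\langle E_i|\psi\rangle,\quad A_{ij}=\langle E_i|A|E_j\rangle,$$

and for the toy model, where the basis is the eigenbasis of a Haar-random unitary, the first two moments of the matrix elements read

$$\overline{A_{ii}}=\kappa_1,\qquad\overline{A_{ij}}=0\ \ (i\neq j),\qquad\overline{A_{ij}A_{ji}}=\frac{\kappa_2}{d}.$$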
And this expectation value, this kappa two over d, is given basically by the variance of the operator divided by the size of the matrix. OK, so this is a first approximation, and you can combine this information into a toy ansatz, which tells you that you can rewrite your matrix as a diagonal part, given by this kappa one, plus an off-diagonal fluctuating part with zero average, such that if I compute the average for i different from j I obtain zero, and with unit variance, such that I retrieve that the leading fluctuations of these matrix elements are exactly kappa two over d. This is the kind of ansatz we're going to use when describing ETH; it's just a way to encode this kind of information, the first and second moments of the matrix elements.

But now you see that by writing it... yes? Yes, in this toy model, yes, because here I'm taking a random basis, so there is no structure. Also, kappa two is basically the variance of the random matrix, so it's just the expectation value of A squared minus the square of the expectation value of A. So in this case they depend only on the eigenvalues of the matrix.

OK, so at this point there is no information about correlations, but actually this toy model hosts correlations too. Let me represent pictorially what I just said, in the following way. I put the indices of my matrix elements on a loop, indicated by dots, and the matrix elements live on the edges connecting the dots. Clearly, in the case where I have only one dot, I'm taking just the diagonal expectation value of A_ii; where I have two dots, I have A_ij and A_ji. When I draw them in blue, I mean that the indices are different. What I said up to now is just that the first loop gives kappa one, and the second loop, with two indices, gives kappa two over d.

If I continue this exercise, I discover that I can consider expectation values of products of many different matrix elements, and these simple loops contribute a combination of moments, exactly like kappa two and kappa one but of higher order: a kappa n divided by the size of the matrix to the power n minus one. This is just the standard scaling, and it can also be proved that these connected correlation functions, these kappas, are actually the free cumulants. I'm going to come back to this; it was actually proved in a paper about inference, just to say that all these questions are quite interconnected. Everything I'm going to say is in the limit of large d. Yes, that's right.

So basically, what I wanted to say is that with full random matrices we can describe the expectation values of the matrix elements in a random basis, and they also account for correlations; all this can be summarized in an ansatz involving only the matrix A_ij. (A quick numerical check of the first two moments follows below.) Now, starting from full random matrices, which are quite standard, we want to introduce information about the model we are studying, in such a way as to account for equilibrium statistical mechanics, and this is what goes under the name of ETH. We want to study many-body systems described by a Hamiltonian, whose eigenvalues and eigenvectors are the ones over which we compute the expectation values. The typical example we have in mind is a many-body problem with many degrees of freedom, like a spin chain.
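Here is the numerical check promised above. It is a minimal sketch, not from the talk: the dimension, the choice of spectrum (eigenvalues of plus or minus one), and the seed are all illustrative assumptions. It builds a diagonal operator, conjugates it with a Haar-random unitary, and compares the first two moments of the matrix elements with kappa one and kappa two over d.

```python
import numpy as np

d = 512                                   # toy Hilbert-space dimension
rng = np.random.default_rng(0)

# A fixed operator: a diagonal matrix with some chosen spectrum.
a = np.where(rng.random(d) < 0.5, -1.0, 1.0)   # eigenvalues +/-1, say

# A Haar-random unitary via QR of a complex Gaussian matrix,
# fixing the phases of R's diagonal to get the exact Haar measure.
z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
q, r = np.linalg.qr(z)
u = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

A = u.conj().T @ np.diag(a) @ u           # the operator seen in the random basis

kappa1 = a.mean()                         # first moment of the spectrum
kappa2 = (a ** 2).mean() - kappa1 ** 2    # variance of the spectrum

diag = np.diagonal(A).real
off = A[~np.eye(d, dtype=bool)]           # all off-diagonal elements
print("mean A_ii      :", diag.mean(), "   vs kappa1 =", kappa1)
print("d * <|A_ij|^2> :", d * (np.abs(off) ** 2).mean(), "   vs kappa2 =", kappa2)
```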
We saw the Ising model before; here you can consider a generic interacting quantum system, in the absence of conserved quantities, such that the eigenvalues are typically non-degenerate. A very big part of quantum chaos, starting from the 1950s, was recognizing that the eigenvalues of such Hamiltonians actually share the same statistical properties as the eigenvalues of random matrices. What we want to look at now is the expectation value of operators, so it is not information about the eigenvalues; it is rather information about the statistics of the eigenvectors.

Since the theme here is high dimensions, let me tell you what my high dimensions are, just to justify why I'm here. We have a system with a large number of degrees of freedom: L, my number of sites, has to be large. Then there is the size of the matrix: the Hilbert space is large in the sense that it scales exponentially with the size of the system, with the number of degrees of freedom. And careful when I say exponentially: exponentially large in the system size is only polynomially large in the matrix size; this is often a misunderstanding between different communities. Then, very important for us, we have a density of states at energy E. This quantity, the number of states at a certain energy, defines, let's say, the entropy of the system, and at finite energy density it also scales exponentially with the system size, but with a rate which is not flat, as it is for full random matrices: here it depends on the energy. So these are all the ways in which we are in high dimensions: the dimension of the Hilbert space, but also a very large density of states. These are very big matrices.

So the idea of the eigenstate thermalization hypothesis is to substitute this full random matrix with a structured random matrix which takes into account the energy, the density of states, and the energy differences. This was fully established by Srednicki in 1999, in a paper which is very beautiful to read if you're interested in the topic, and still accurate although it is from some years ago. The idea, exactly in the same spirit as the toy model I showed you before, is that my matrix elements are now given by a smooth function that depends on the average energy between two points in the matrix; this E plus is always the sum of the two energies divided by two. But now I also have another scale, the difference between the two energies, because in this Hamiltonian case my spectrum is ordered. And exactly in the same spirit as before, I have a fluctuating pseudo-random number. Now there is no real source of disorder any more, because once I have my Hamiltonian I diagonalize it and there is no external source of randomness, but these matrix elements behave like pseudo-random numbers, which I can take to have zero average and unit variance.
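Written out, the ansatz just described is Srednicki's ETH ansatz. The formula below is its standard form; the notation is reconstructed from the narration and standard references rather than taken from the slides:

$$A_{ij}=A(E^{+})\,\delta_{ij}+e^{-S(E^{+})/2}\,f(E^{+},\omega)\,R_{ij},\qquad E^{+}=\frac{E_i+E_j}{2},\quad\omega=E_i-E_j,$$

where $A(E^{+})$ is a smooth function of the mean energy, $e^{S(E)}$ is the density of states, $f(E^{+},\omega)$ is a smooth envelope decaying at large $\omega$, and $R_{ij}$ is the pseudo-random number with zero mean and unit variance.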
Then the diagonal part corresponds to the microcanonical expectation value, which is a function of the average energy, and there is an off-diagonal part whose scale, remember, in the toy case before was 1 over d, while in this case it is 1 over the density of states, so it is exponentially small in the entropy of the system at energy E. And then there is this smooth function f of omega, which depends both on the energy and on the frequency. This is to say that the matrix typically has a banded structure: you can imagine that matrix elements between energies which are very different should be small, so this f function should decay at large frequencies.

OK, so the eigenstate thermalization hypothesis has proven extremely successful; let me just show you two examples. The diagonal part encodes the statistical averages of observables. If we want to compute the thermal average of my observable at inverse temperature beta, then, given the equivalence of ensembles, this is the microcanonical expectation value at the corresponding energy density, which in turn is encoded in a single eigenstate, due to the structure of the eigenstate thermalization hypothesis. This is why we usually say that all thermodynamics is in an eigenstate, and this actually holds for single local expectation values. What's nice is that you can check this in examples like the one I showed you before: if in that Ising spin chain I take a sum of local magnetizations, put it on my computer and diagonalize the model, I see that this expectation value approaches a smooth function of the energy density as I increase the system size. Here L is the system size, and you see that the fluctuations become smaller and smaller. So this is ETH for the diagonal part.

But the off-diagonal part is very important too, and it encodes all two-point dynamical correlation functions. Imagine that I am at equilibrium and I want to compute the connected correlation between an observable at time 0 and the observable at time t. If you do the calculation with ETH, you find that this is given exactly by the smooth function appearing in the ETH ansatz. Also in this case you can retrieve it numerically, by simply diagonalizing this big matrix and looking at the off-diagonal matrix elements: you see that they indeed follow a smooth function, and, as I was saying, they decay at large frequencies. This is very important: this kappa two of omega, differently from the full random matrix example, now decays with frequency, so it has structure, and this is also important because at small frequencies these quantities encode all the transport behavior of your physical system. You also see that there is not only the smooth function; there is also a thermal factor, which encodes the fluctuation-dissipation theorem, for those who are passionate about it. So it encodes basically all of statistical mechanics up to two-point functions.

OK, so this is already quite nice: what we saw is that by including the dependence on the energy density and on the frequency in the ansatz, we were able to describe expectation values and dynamical correlation functions.
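As a worked version of the two-point statement (this is the standard ETH result; the precise prefactor conventions below are assumptions on my part):

$$\langle A(t)A(0)\rangle_{\beta}^{c}=\int_{-\infty}^{\infty}\mathrm{d}\omega\;e^{-i\omega t}\,e^{-\beta\omega/2}\,\big|f(E_{\beta},\omega)\big|^{2},$$

with $E_{\beta}$ the energy corresponding to inverse temperature $\beta$; the factor $e^{-\beta\omega/2}$ is exactly the thermal factor that encodes the fluctuation-dissipation theorem mentioned above.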
Now we would like to know more, in particular about multi-point correlation functions. I guess this question is not important only for us: in general, once you have a data set and you want to know how correlated your data are, it is important to be able to go beyond two-point functions. In the many-body community this question was motivated a lot by the suggestion to look at a specific form of correlation function. I'm going to present it just because I will then show you numerical examples of it; the details are not very important. Basically, from high-energy physics came the suggestion to look at correlation functions with a strange time ordering as a probe of many-body chaos, because they encode a Lyapunov exponent. And so people started wondering: how can I describe these OTOCs, these out-of-time-ordered correlators, via the eigenstate thermalization hypothesis?

The first thing to realize is that if I simply take the ansatz I showed you before, which only describes up to two-point functions, I am doing something wrong and I cannot get all the physics. It would be like assuming the absence of correlations between my matrix elements, so that all higher-order moments would be determined by the two-point function, which is quite too much to ask: in physics it would be asking the world to be Gaussian all the time. Indeed, if you compute this four-point function as a function of time in the spin chain I showed you before and compare it, in blue, with the Gaussian result, you see that there is a huge discrepancy which does not improve with increasing system size. So something is missing.

The missing ingredient was introduced in a paper by Laura Foini and Jorge Kurchan in 2019, and it is really in the same spirit of including higher-order correlations, as in the toy model I described before. The idea is to consider correlations between matrix elements which lie on a loop of this form. Exactly as in the toy case, where we had the size of the matrix at the denominator with the power n minus one, here we have the density of states; and, importantly, the function that before was flat and depended only on the eigenvalues of the matrix now depends also on all the energy differences in the system. This is what was introduced by Foini and Kurchan, and it is the prediction when all the matrix elements are different: remembering those loops, when the dots are blue and separated, the indices are all different. When indices repeat, based on entropic arguments, the expectation values have to factorize. So this is the Foini and Kurchan recipe, with the goal in mind that what you want to describe in the end are correlation functions between observables at different times.

In fact, why do those correlations on loops matter? Because in the end, if you want to compute an expectation value, what you have to do is sum over all the possible indices of the matrix elements on a loop: this is what comes out when you insert the energy eigenbasis, you are just rewriting a product of matrices. Here I'm representing time pictorially on the edges connecting matrix elements. There is another subtlety: the expectation value we are taking is a thermal one, so I have to decide where to put the temperature; but this is not very important here.
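Schematically, the higher-order ansatz just described can be written as follows (a reconstruction in the spirit of the Foini and Kurchan paper; the precise symbols are my choice):

$$\overline{A_{i_1 i_2}A_{i_2 i_3}\cdots A_{i_n i_1}}\;\simeq\;e^{-(n-1)S(E^{+})}\,F^{(n)}_{E^{+}}(\omega_{1},\ldots,\omega_{n-1}),\qquad i_1\neq i_2\neq\cdots\neq i_n,$$

with $\omega_k=E_{i_k}-E_{i_{k+1}}$ the energy differences along the loop and $E^{+}$ the mean energy of the states involved; when indices repeat, the expectation value factorizes. Compared with the toy-model scaling $\kappa_n/d^{\,n-1}$, the density of states $e^{S(E^{+})}$ replaces $d$, and the smooth function now depends on all the energy differences.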
So this is the computation I have to do: I have to sum over all possible loops, and once I have this formula, I have to start considering all the different ways these indices can be related to each other, so all the possible contractions. When I do that, I end up in the following situation. There is the situation in which all the indices are different: these are the blue dots. Then there are situations in which, for instance, the index i is equal to the index k, and I put them in the same block; situations in which three of the indices coincide, and I put them in the same block; and so on and so forth, up to the case in which all the indices are the same. And then I realize, as Ludwig showed, that from n equal to four onwards I can also have contractions where, after I arrange the indices on the loop, the blocks cross. So from this you realize that there are three types of diagrams coming from contracting matrix elements on a loop: simple loops, where all the indices are different; non-crossing diagrams, where I can group indices into blocks that do not overlap; and crossing diagrams. At this point there is no ETH yet; this is just index bookkeeping, pictorial index bookkeeping, let's say.

But now I want to associate to each diagram its ETH content, so the smooth function appearing in ETH and the entropic counting. For instance, as we said, the simple loop contributes the smooth function of order four; in another case I have A_ii, so I put the diagonal part, and then the off-diagonal part for A_ij A_ji, and so on and so forth. So this is the way to associate the ETH content to these different index structures, and then I can use ETH. ETH tells me that there is a concentration of measure, so once I sum over these matrix elements I can substitute what I would have obtained by averaging. Once I do that and plug in the two ansätze I showed you before, I have two important outcomes. The first is that crossing diagrams, the ones corresponding to the last diagram I showed you, are exponentially suppressed in the system size (in the toy model, suppressed in the size of the matrix), so they basically do not count. The second is that non-crossing partitions factorize. Going back to the earlier question about the identification with free cumulants: in our case it is important that you actually have the factorization of non-crossing partitions, and this comes from solving these kinds of problems through saddle points, or in large-N models.

These two conditions are things you can also check on your computer, so they are not just random matrix estimates. If you take the expectation value of a magnetization and look at how the crossing partitions scale, you find that they obey our predictions: here in red, in the toy model I showed you before, is a crossing partition, which decays with the inverse of the Hilbert space size; and this other curve is the ratio between the crossing diagram and the factorized one, whose difference from one clearly goes to zero. This is just to say that in physical models these assumptions hold quite well.
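To make the index bookkeeping concrete, here is a small sketch (illustrative, not from the talk) that enumerates the partitions of n loop indices and separates crossing from non-crossing ones. The number of non-crossing partitions is the Catalan number, and crossings first appear at n = 4, as stated above.

```python
from itertools import combinations

def partitions(elements):
    """Recursively generate all set partitions of a list."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        # put `first` into each existing block, or into a new block of its own
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_crossing(part):
    """True if two blocks cross: a < b < c < d with a,c in one block, b,d in another."""
    for b1, b2 in combinations(part, 2):
        for a, c in combinations(sorted(b1), 2):
            for b, d in combinations(sorted(b2), 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

for n in range(1, 7):
    parts = list(partitions(list(range(n))))
    nc = sum(not is_crossing(p) for p in parts)
    # prints Bell numbers (all partitions) vs Catalan numbers (non-crossing)
    print(f"n={n}: {len(parts)} partitions, {nc} non-crossing")
```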
How are we doing with time? OK, good. So, since Ludwig did such a great job introducing free probability, I should probably skip this part. Just to say: if you want to know more, there is a blog, aimed at "freeing probability from its commutative chains", which has all the possible resources you could be interested in: lecture notes, books, et cetera. What we really need for what I'm going to say is the concept of non-crossing partitions that Ludwig introduced, which are simply partitions whose blocks do not cross; they show up from n equal to four onwards. And what's important for us are the free cumulants, which are defined implicitly by the moment-cumulant formula. Ludwig already introduced it, so I can skip it, but you should think of it as a way to define connected correlation functions implicitly in terms of moments of equal and lower order. It is constructed in such a way that, for instance, for Gaussian random matrices the free cumulants vanish: exactly as the standard cumulants of a Gaussian random variable vanish for orders greater than two, the free cumulants of Gaussian random matrices vanish for orders greater than two.

So, as you may have realized by now, these ETH diagrams are exactly related to the non-crossing partitions arising in free probability. Once we want to compute our moments, we don't have to redo all the combinatorics: all these calculations have already been done in free probability. This leads us to define a thermal moment to free-cumulant expansion, that is, to define thermal free cumulants. Again, I want to stress that this is just a definition, a combination of moments; but in ETH we can add one more piece of information. What ETH says about free cumulants is that they are given just by these simple loops, just by the summation over distinct indices. And this allows us to re-express the free cumulants directly in terms of the Fourier transform of the smooth functions appearing in the ETH ansatz, where now there is a generalization of the fluctuation-dissipation theorem, which comes from the fact that the measure is not trace-invariant: there is a point at which you put in the temperature, going back to the earlier question of where the beta goes.

So this is our identification: you want to compute higher-order correlations, and you realize that the contributions that matter for these higher-order correlations have exactly the same diagrams that pop up in free probability. The natural objects to study are then these free cumulants: they are the proper cumulants, with the proper scaling, and in our case they are given by summations over distinct indices, by these simple loops. In the example I was showing you before (remember, the orange line was the fourth moment at different times, the OTOC, and you had the comparison with the Gaussian result in blue), if you now add the free cumulant, you see that the sum of the two gives you the full result. Here I am computing the ETH free cumulant, really the summation over distinct indices only, and in this case we retrieve the full result. So we can identify and pinpoint the role of correlations exactly in this free cumulant.
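For reference, the moment to free-cumulant relation mentioned above (standard in free probability, as in Ludwig's talk) is

$$m_n=\sum_{\pi\,\in\,NC(n)}\;\prod_{B\,\in\,\pi}\kappa_{|B|},$$

where the sum runs over non-crossing partitions of n elements and $|B|$ is the block size. Here is a quick numerical illustration of the Gaussian statement, that free cumulants of a Wigner matrix vanish beyond order two; it uses the fact that for a centered spectrum $\kappa_4=m_4-2m_2^2$. A short sketch, with a GOE normalization chosen by me:

```python
import numpy as np

d = 4000
rng = np.random.default_rng(1)

# A GOE (Wigner) matrix normalized so its spectrum converges to the semicircle.
x = rng.standard_normal((d, d))
a = (x + x.T) / np.sqrt(2 * d)

ev = np.linalg.eigvalsh(a)
m2, m4 = np.mean(ev ** 2), np.mean(ev ** 4)

# Moment -> free-cumulant relations for a centered spectrum (m1 = m3 = 0):
#   m2 = kappa2,   m4 = kappa4 + 2 * kappa2^2   (2 non-crossing pairings of 4 points)
print("kappa2 =", m2)                # -> 1 (the semicircle variance)
print("kappa4 =", m4 - 2 * m2 ** 2)  # -> 0 up to finite-d corrections
```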
There is something else I would like to say, to contrast the toy model from before with what happens in physical systems. What's very important, and what we find numerically, is that these free cumulants have a very strong dependence on frequency. If you compute them on your computer, also at higher orders and not only at second order, these free cumulants decay to zero at large frequencies, almost exponentially, and at small frequencies they show all sorts of different behaviors. As I was saying, this has to be contrasted with the toy model of flat random matrices. For those, you can compute the free cumulant in frequency, which is given by the free cumulants of the matrix alone (depending, as we said, only on its eigenvalues) times a quantity which is a four-point spectral correlator; and this spectral correlator is essentially flat for frequencies of order one. So the actual result from the toy models, which are in the end the rotationally invariant random matrix models, is that free cumulants are flat as a function of frequency, while in ETH all the physics lies in the fact that the free cumulants actually depend on the differences between the energies we are considering.

Free probability also allows us to say many other interesting things, which maybe are not so interesting to you, but it's just to say that free probability comes with a set of tools, generating functions and different sorts of transforms, that help you compute distributions of eigenvalues, moments, et cetera. So also in our case, where we want to compute thermal expectation values of dynamical correlation functions, free probability gives us tools that can be used to compute these things.

With this I am almost done, so let me conclude. I hope I convinced you that if you want to study quantum statistical mechanics, what you can put in your mind is a structured random matrix, where all the physics is inside the fat diagonal of the random matrix (so not really a diagonal), inside these correlations; and that you need correlations to describe physics beyond two-point functions. This is obvious, but it matters every time you go beyond linear response, and to do that it is much simpler to use free cumulants and free probability.

Now there is a series of open questions. Some of these regard the physics encoded here: whether there is a hierarchy among these functions, and how this picture changes in different systems. There is also the very interesting question asked before about subleading corrections: we have been trying to think a little bit about it, and we are not sure that this picture is the simplest one for describing lower-order corrections, but it would be interesting to study. And then, when we were finishing this work, we discovered by chance that Ludwig and Denis were working on free probability in many-body systems, so it was discovered basically independently that it really simplifies things. The question that is really important to us is: how general is the use of free probability for many-body physics? We are both working on several different applications. At this level it seems that the math is the correct one, because we are dealing with non-commutative objects, but it is not yet clear what new things we are going to learn. So far this is a mathematical framework, and it is not yet clear what physics we are reaching with it.
With this, I thank you for the attention up to 4 pm.

Now, on the question about conserved quantities: we still haven't looked at it, but usually, if you have one conservation law, everything holds inside the sector of that conservation law. In that case, though, because, as I said, for us ETH is saying that these free cumulants are summations over different energies, in integrable systems energy is not the only conserved quantity; you have all the other conserved charges.

But why do the matrix elements matter at all? Because basically the first thing is that you want to describe this relaxation process, and to do it, once you rewrite it in the energy eigenbasis, you have these matrix elements popping out. It's difficult to give a general answer, so let me give a physical intuition on the matrix elements. Once you expand in the energy basis, what you have is a summation over i and j of the overlap of the initial state with eigenvector i, the overlap with eigenvector j, then the matrix element, and the oscillating phase. And what you know is that these observables obey this picture: they have some oscillations and then they saturate to some value. This must mean something about the fluctuations of these matrix elements: basically, it says they are very well peaked around the stationary value identified by the initial energy. So the first thing you do is split the sum. When i equals j, you have the diagonal expectation values, which are independent of time: this is the stationary value you attain. Then you have the oscillating part, which vanishes when you take the time average, since the energies are non-degenerate. So the diagonal part gives the stationary value. Making assumptions on how these matrix elements behave is therefore saying something about how things approach equilibrium: the idea is that the diagonal part is much bigger than the off-diagonal part, and the off-diagonal part matters when you compute dynamical correlations.

Yes, at this level this theory is, in a sense, very general. I'm not saying anything about the structure of these functions as functions of energy. The smooth function, the microcanonical expectation value: I'm only saying that it has to be a smooth function of the energy density (of the energy density, in fact, not just of the extensive energy). It could be whatever it wants to be; that depends on the microscopic properties of the Hamiltonian. ETH is just, let's say, a structure: the structure and the scaling, with the system size or with the density of states, that these matrices shall have; and an off-diagonal function that encodes all dynamical correlators, about which I am also not saying anything specific. Then the shape of this function in frequency will depend on whether the system is diffusive, whether it has conservation laws, et cetera. All the physics is encoded in the specific structure of these correlations, and ETH is just specifying the statistical structure of the matrix that encodes them. And yes, the function f is very hard to determine analytically: I'm just giving the building blocks of the theory, I'm not telling you how to compute them.
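The time-averaging intuition in the answer above, written out (a standard reconstruction, with the same notation as the earlier formulas):

$$\langle\psi|A(t)|\psi\rangle=\underbrace{\sum_i |c_i|^2 A_{ii}}_{\text{stationary (diagonal) part}}+\underbrace{\sum_{i\neq j} c_i^{*}c_j\,e^{i(E_i-E_j)t}A_{ij}}_{\text{oscillating part, time-averages to zero}},$$

where the second sum dephases under time averaging provided the spectrum is non-degenerate, so the stationary value is fixed by the diagonal matrix elements alone.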
Sorry, just to come back to this. For instance, Ludwig has a different problem, because he computes things not as a function of different times but in space: what for me is an OTOC at different times is for him position one, position two. But his model is a free model, so you can do the calculations analytically, and they are able to write explicitly the structure of these functions in frequency. For us, I'm just saying that these functions will matter for your dynamics, and I'm not saying anything about how they look. Understanding them analytically: you can do it in some solvable models, and that's definitely interesting, but there is no general recipe.