Okay. So thank you, Florian. And of course, thank you also to Jean and Matteo for setting up the online version of this conference. What I'd like to briefly talk about in these 15 minutes is work that I've been doing in collaboration with Giulio Biroli and Chiara Cammarota. It has to do with understanding dynamics in a high-dimensional setting, as in several of the previous talks, but here the perspective that we take is a little bit different, and at the moment it is more physically motivated, if you want, in the sense that we are interested in the so-called activated regime of the dynamics that I will introduce in a minute. So let me specify the setting, which is sketched in the figure. The idea is that we have a landscape, or a functional E, defined at each point s of a configuration space that is very high-dimensional. I will call capital N the dimension of this space, and this is the large parameter of whatever calculation we refer to. This functional is associated to a system whose time evolution we can picture as the motion of a point along the surface of the functional, which attempts to optimize it, so to reach an equilibrium configuration identified, for instance, with the global minimum of the landscape. In particular, in the following I will consider Langevin dynamics. This is essentially gradient descent, which is biased towards configurations where the value of the landscape is smaller and smaller, plus some noise that I will assume to be weak. And of course, the very general question is how this dynamics allows you to explore such a functional. This is a question of relevance in several contexts: we may think of applications in inference and optimization, where we want to minimize cost or loss functionals.
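To make the setup concrete, here is a minimal sketch of the kind of Langevin dynamics described above, gradient descent plus weak noise, discretized with an Euler-Maruyama step. This is not the speaker's code; the function names and the toy quadratic landscape are my own illustration.

```python
import numpy as np

def langevin_step(s, grad_E, dt, T, rng):
    """One Euler-Maruyama step of Langevin dynamics,
    ds/dt = -grad E(s) + sqrt(2T) xi(t), with xi unit white noise."""
    noise = rng.standard_normal(s.shape)
    return s - dt * grad_E(s) + np.sqrt(2 * T * dt) * noise

# Toy convex landscape E(s) = |s|^2 / 2, so grad E(s) = s; at weak
# temperature T the dynamics relaxes close to the minimum at the origin.
rng = np.random.default_rng(0)
s = np.ones(100)                      # N = 100 dimensional configuration
for _ in range(2000):
    s = langevin_step(s, lambda x: x, dt=0.01, T=1e-4, rng=rng)
```

In the landscapes discussed in the talk the gradient is that of a random non-convex functional, and the interesting regime is exactly the one where this weak noise occasionally pushes the system over large barriers.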
But we can also think about complex systems in physics, in particular glassy systems, or problems in biology where you want to optimize fitness functions, or even problems in quantum computing. In all of these settings, this high dimensionality emerges quite naturally. So from now on, what I will do is assume that this complicated landscape can be modeled in terms of a random function with some prescribed statistics, for instance Gaussian, as was done in some previous talks. I will stick to the terminology of glasses, so I will refer to this as an energy landscape, and I will refer to the strength of the noise simply as the temperature of the system, which, as I mentioned, is assumed to be small. So what is it that makes this type of dynamics particularly challenging to analyze as we go to high dimension? Well, I would say that the main reason is that as we increase the dimension of the underlying configuration space, we find that typically these random functionals tend to become increasingly complex themselves, in the sense that they tend to be very non-convex: they have a huge number of local minima, maxima, and saddles, which are stationary points where the gradient term is exactly equal to zero. This number scales exponentially in the dimension of configuration space. So you are trying to optimize a landscape which has an exponentially large number of local minima, which are suboptimal configurations with respect to the global minimum, of course, but which are locally stable and therefore attractors for this type of Langevin dynamics. What you need to do is to be able to classify all of these local minima and also the saddles in their vicinity, because these are the configurations through which the system can escape from a given minimum and jump somewhere else in the underlying configuration space.
And typically what you find is that these saddles sit at values of the functional which are extensively higher than those of the minima. This is to say that the barriers you typically have to cross in such random settings scale extensively with the dimension of configuration space. So in this type of setting, the dynamics you expect is of the activated type: you expect that the dynamics relaxes to some given local minimum, remains trapped there for very large times, and only very rarely is able to escape from this local minimum, crossing these very large energy barriers, to go somewhere else in configuration space. These jump processes are extremely rare: they require time scales that, when temperatures are small, are essentially Arrhenius-like, so they are exponentially large in the barrier you have to cross and therefore in the dimension of configuration space itself. This is an ingredient that makes the problem particularly challenging to analyze, of course if you want to simulate these dynamics, because you have to wait a lot of time, but also in the analytics, because it rules out one limit that is very natural to take, the standard mean-field one, if you want, where you take the dimension of configuration space to infinity before looking at the large-time limit. If you do this here, you would miss these transition processes, because you are simply sending barriers to infinity, and so you are trapped in a local minimum forever. Now, there is a third source of difficulty that I wanted to mention, which is related to the fact that to understand these jumps between local minima you really need good control of the local geometry of your random functional. For instance, if you are trapped in a particular local minimum of the landscape, you need to understand which ones among the exponentially many stationary points matter.
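The Arrhenius scaling mentioned here can be written in one line; a minimal sketch, where the function name and the numbers are illustrative, not from the talk:

```python
import numpy as np

def arrhenius_time(barrier, T, tau0=1.0):
    """Arrhenius estimate of the escape time over an energy barrier at
    temperature T: tau ~ tau0 * exp(barrier / T)."""
    return tau0 * np.exp(barrier / T)

# If the barrier is extensive, Delta E = N * delta_e, the escape time is
# exponential in the dimension N of configuration space:
delta_e, T = 0.1, 0.5
times = {N: arrhenius_time(N * delta_e, T) for N in (10, 50, 100)}
```

This exponential growth with N is exactly what forbids taking the N to infinity limit before the large-time limit.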
So, for instance, among the exponentially many saddles distributed in your functional, which ones are actually close to this particular minimum and connected to it? By which I mean that at least some of the directions of negative curvature of the landscape, which give you the instability of the saddles, have to somehow point towards this minimum; this tells you that the system can use these saddles to escape from this particular minimum. Now, it turns out that if you select a minimum and ask where the majority of all the other stationary points are, you typically find that they are at very large distance and not connected to it. This is just a feature of the fact that your space is very high-dimensional: if you increase the linear distance from this particular point, the portion of configuration space associated to that distance typically grows exponentially in the dimension of configuration space. So most of the volume is very far away, and that is where you find most of the other saddles. To find those which are actually connected to this particular minimum, you have to explicitly enforce being in this region of configuration space, which technically means that you have to do large deviation calculations in which you impose constraints on the geometry of your landscape and on some locality features. Now let me just add a further comment on this idea of activated dynamics, which also connects to the previous talk by Pierfrancesco. The comment is that of course we know how to describe dynamics in high dimension, and we know how to do this through dynamical mean-field theory. Typically this goes through writing down a partition function for your dynamics, which contains an action, or a weight, which gives you the probability of a certain set of dynamical trajectories.
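The statement that most of the volume sits far from any chosen point can be made quantitative on the sphere. A sketch of the leading-order estimate, my own illustration, with q the overlap with the chosen reference point:

```python
import numpy as np

def log_sphere_fraction(q, N):
    """Leading-order (large-N) log of the fraction of the sphere whose
    overlap with a fixed reference point exceeds q:
    (1/N) log Vol ~ (1/2) log(1 - q^2), so this fraction is
    exponentially small in N for any fixed q > 0."""
    return 0.5 * N * np.log(1 - q**2)
```

For example, at overlap q = 0.5 the fraction decays like exp(-0.144 N): at N = 100 only a fraction of order e^-14 of the sphere lies that close to the reference point, which is why typical stationary points are found far away, and why finding the nearby, connected ones requires a conditioned large-deviation calculation.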
This action depends on the order parameters of the dynamics, which in this case are usually two-point functions, such as correlation functions or response functions. Then what you do is impose that the action is stationary, and this gives you equations for these two-point functions which describe the typical dynamical trajectories. Indeed, if you plug the solution back into the action, you find that the action is zero, and therefore the probability of these trajectories is of order one in N. This means that if your noise is weak, the trajectories you are describing are essentially given by gradient descent: relaxation paths in the landscape where you go downhill following the gradient of the function. These are indeed the good equations to describe the dynamics over times which can actually be very large, but which do not scale extensively with the other large parameter you have here, the dimension of configuration space. Instead, what we want to do when we describe activated dynamics is precisely to go to much larger time scales. Indeed, we want to describe trajectories that are rare: realizations of the noise which are not typical, but so atypical and large that they allow you to beat gradient descent and go uphill in the landscape, from a minimum up to some given saddle at extensively higher energy. And getting generic equations for this type of fluctuation paths, which go uphill in the landscape, is actually very challenging. This is also a big open problem in the field of glasses: to understand activation and to go beyond the mean-field description of the dynamics. So what is the strategy that we want to follow? I think this is hinted at in the subtitle of the talk.
And it is to use what we know how to describe, namely these descending relaxation paths in the landscape, to gain information on the activation paths that you need to describe these jump processes. This is a strategy that I believe is best understood if we go back for a minute to a setting in which we know how to characterize activated dynamics, which is the limit of very low dimension. So assume that you are, for instance, in one dimension: you have Langevin dynamics with very small temperature and a potential that is non-monotonic, with a metastable local minimum and the equilibrium minimum separated by a local maximum. In this setting, we know how to describe instantons, which are the most probable trajectories that bring you from the metastable state to the equilibrium one. In particular, we know that the fluctuation path, the red one, which goes uphill, can be obtained simply by flipping your potential: it is a solution of the dynamical equation in the reversed potential, where you map maxima to minima. And if you use the time-reversal properties of Langevin dynamics, you can actually show that this path is nothing but the time reverse of the relaxation path which goes from the local maximum down to the metastable minimum. So in order to build these instantons, what you can do is compute two relaxation paths from this particular local maximum, then time-reverse the first one and attach to it the relaxation to the equilibrium minimum. What we want to do here is something similar, but we want to export this to the high-dimensional setting, and here you immediately understand the difficulty that comes from the complexity of the landscape.
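The one-dimensional construction can be sketched in a few lines. The tilted double-well potential and all parameters are my own toy example, not the one on the slide: compute two zero-temperature relaxation paths starting just off the local maximum, time-reverse the one that falls into the metastable minimum, and glue it to the one that falls into the equilibrium minimum.

```python
import numpy as np

def grad_V(x):
    """Gradient of the tilted double well V(x) = x^4/4 - x^2/2 + 0.1 x:
    metastable minimum near x ~ +0.95, equilibrium minimum near
    x ~ -1.05, local maximum (the barrier top) near x ~ 0.10."""
    return x**3 - x + 0.1

def relax(x0, dt=1e-2, n_steps=2000):
    """Zero-temperature relaxation (plain gradient descent) from x0."""
    path = [x0]
    for _ in range(n_steps):
        path.append(path[-1] - dt * grad_V(path[-1]))
    return np.array(path)

x_max = 0.101                              # barrier top (approximate)
down_to_metastable = relax(x_max + 1e-3)   # relaxes to x ~ +0.95
down_to_equilibrium = relax(x_max - 1e-3)  # relaxes to x ~ -1.05

# Instanton from the metastable to the equilibrium minimum: time-reverse
# the first relaxation (the uphill fluctuation leg) and attach the
# second relaxation (the downhill leg).
instanton = np.concatenate([down_to_metastable[::-1], down_to_equilibrium])
```

The time-reversed leg is exactly the "fluctuation path obtained by flipping the potential" described above; in the high-dimensional case the same gluing is done at the level of the order parameters rather than of a single trajectory.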
It is like trying to solve a double-well problem, but now you have exponentially many possible wells and exponentially many possible saddles connecting the different wells, and you need to understand which local minima are connected to each other in configuration space, by which saddles, what the distribution of the corresponding barriers is, and so on. So you somehow have to understand what dynamical paths are available to your system in such a random landscape. This is the program, and we carry it out for a simple model that has already been introduced in some previous talks, the spherical p-spin model. This is a model in which the configuration space is chosen to be a sphere in very large dimension, so configurations are points on the surface of the sphere, the distance between them is measured in terms of the overlap, or scalar product, between the vectors, and the landscape is a Gaussian random polynomial: it contains couplings J which are independent Gaussian variables, and this means that the functional has zero average and a given covariance, as you see given here.
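For p = 3, the landscape and its covariance can be checked numerically. Here is a minimal sketch of my own, using a full-tensor convention with coupling variance 1/(2N^2), so that on the sphere |s|^2 = N the covariance is E[E(s)E(s')] = (N/2)(s.s'/N)^3; normalizations vary between papers, and this is just one consistent choice.

```python
import numpy as np

def pspin_energy(J, s):
    """Spherical 3-spin energy E(s) = -sum_{ijk} J_ijk s_i s_j s_k."""
    return -np.einsum('ijk,i,j,k->', J, s, s, s)

rng = np.random.default_rng(1)
N, p, q = 8, 3, 0.5
# Two configurations on the sphere |s|^2 = N with overlap s1.s2/N = q.
s1 = np.zeros(N); s1[0] = np.sqrt(N)
s2 = np.zeros(N); s2[0] = q * np.sqrt(N); s2[1] = np.sqrt(N * (1 - q**2))

# Average over the Gaussian couplings: the empirical covariance of the
# two energies should approach (N/2) * q**p.
prods = []
for _ in range(5000):
    J = rng.standard_normal((N, N, N)) * np.sqrt(1.0 / (2 * N**2))
    prods.append(pspin_energy(J, s1) * pspin_energy(J, s2))
covariance = np.mean(prods)
```

The key structural fact is visible in the formula itself: the covariance depends on the two configurations only through their overlap, which is what makes the model analytically tractable.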
So what we want to do for this landscape is essentially what I sketched before. The first step is to characterize the local geometry: we select one arbitrary local minimum of the functional, among the exponentially many that we have, and we compute the distribution of the saddles that are connected to it, meaning those which have, as I said, a downhill direction going down to the local minimum. We compute how many they are, their distribution in height, or energy, and also their geometry, so how distant they are from this particular minimum. Once we have this information, we can write down dynamical equations conditioned to start from one of these saddles as an initial condition. By solving these equations we get the relaxation paths which go down to this particular minimum, and which go on the other side, if you want, to some other local minimum of the landscape. Then we use time reversal of these relaxation paths to build the instantonic solutions of the dynamics.
So just in the final two minutes let me flash a few details on each of these points, starting from the geometry. As I said, we have some reference local minimum, which here I call s1, and then we scan the landscape as a function of the distance to the minimum, or the overlap: if we decrease this overlap, we are essentially increasing the distance in configuration space. We ask how many stationary points we find at a given distance, what their energies are, and what their stability is, and this corresponds to the plot here: distance versus energy, where the colored regions correspond to the spectra of these stationary points, and the different colors encode the stability, which is given by the statistics of the Hessian matrices of the landscape at the stationary points. What you see is that there is a region here, sufficiently close to the minimum, where the typical stationary points are saddles: they have a spectrum that is positively supported, but with one single eigenvalue that is negative, so one unstable direction in configuration space. This gives the distribution of the escape configurations we were looking for. Once we know how to pinpoint them in the landscape, we can write down dynamical equations which are a modified version of the usual dynamical mean-field equations, modified in the sense that we now have to enforce the dynamics to start exactly from one of these saddles, at fixed distance from the reference minimum, as an initial condition. This turns into a bunch of, let's say, boundary terms which encode the information on the initial condition: in the first line you may recognize the usual equation for the correlation function of the p-spin, and then we have all these extra terms that give the boundary conditions. We solve these equations and, as I said, we find two solutions, one which goes back to the
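The classification by stability mentioned here is just the index of the Hessian, the number of negative eigenvalues. A minimal sketch on a toy two-dimensional stationary point, my own example rather than the p-spin Hessian, whose spectrum in the talk comes from random matrix statistics:

```python
import numpy as np

def saddle_index(hessian, tol=1e-8):
    """Index of a stationary point: the number of negative Hessian
    eigenvalues. 0 -> local minimum; 1 -> index-1 saddle, with a single
    unstable direction through which the system can escape."""
    return int(np.sum(np.linalg.eigvalsh(hessian) < -tol))

# f(x, y) = y^2 - x^2 has a stationary point at the origin with Hessian
# diag(-2, 2): one negative eigenvalue, i.e. an index-1 saddle.
H_saddle = np.diag([-2.0, 2.0])
H_minimum = np.diag([1.0, 3.0])
indices = (saddle_index(H_saddle), saddle_index(H_minimum))
```

The "positively supported spectrum plus one isolated negative eigenvalue" described above is exactly the index-1 case, which is why those stationary points act as escape gates out of the reference minimum.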
reference minimum and one which reaches some other local minimum in the landscape. Then we use time reversal: we time-reverse the solution which relaxes to the reference minimum, and in this way we get a fluctuation path which goes uphill to the saddle; we then join to it the relaxation to the second minimum, and this gives us an example of an instantonic solution, which now is given not in terms of the trajectory itself but in terms of the global order parameters of this high-dimensional dynamics, for instance the two-point functions. Here is an example: you may see that there are various plateaus. This one, for instance, corresponds to being in the first reference minimum; then you escape and converge to the saddle, you stay there for a while, and eventually you escape from the saddle and go to the other minimum, which is connected to the first one. Okay, so this is basically the idea of what I wanted to present. The general framework is that you are looking at Langevin dynamics in high-dimensional random functionals; these functionals are typically very non-convex, with all these isolated local minima separated by very large barriers, and this induces a sharp separation of time scales in the dynamics: at short times you converge to local minima and make small fluctuations around them, and only at very large times do you jump from minimum to minimum and eventually explore the landscape. You want to describe these jump processes, and this you can do in terms of instantons, using this DMFT and time reversal. The crucial ingredient you need here is control of the local geometry of the landscape, so basically of where the saddles are with respect to the local minima. The perspective now goes in the direction of generalizing the particular solution I was showing in the previous slide in several directions. Okay, I think that's it on my side.