Thank you very much. Does it work? Yes, I think so. First, I would like to thank the organizers for inviting me here. It was a long trip, but coming back to Trieste is always a pleasure for me. I want to start this talk by thanking my collaborators: Erik Aurell, who is also invited here; Gino Del Ferraro, who was his student at the time, when we started this in Stockholm; David Machado, who is now my PhD student and for whom this was part of his degree thesis; and Eduardo Domínguez, who was my PhD student at the time and is now in the Netherlands.

All right, so this is more or less the outline of the talk. I will make an introduction to the general problem we want to address. Then I will give a sketch of the derivation of what we like to call the cavity master equation, just because it's a fancy name; you can choose another one. Then I want to present three different applications of this cavity master equation. First, a model with two-spin interactions, a simple physical model whose dynamics was chosen to show that the equation works. Then we will look at more interesting problems: systems with multi-spin interactions, and then the old discussion about focused Metropolis search applied to the 3-SAT problem. Finally, we will present some very general conclusions about the kind of results we obtain.

So this is the kind of problem we want to solve. As you can imagine, this is one of the most famous equations in physics; it is known as the master equation. What we want is a fast and interpretable way to solve this kind of problem, in which the probability of a vector of variables evolves in time following this kind of rule. R here is the transition rate and P is the probability, and what is interesting is that all the information about the dynamics enters through this rate R. That's the kind of problem we want to solve.
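To make the object concrete: for a handful of spins the master equation can be integrated exactly by enumerating all 2^N configurations. Below is a minimal sketch (Python; the three-spin ferromagnetic chain, the Glauber rates, and all parameter values are illustrative choices of mine, not taken from the talk) that integrates dP(s)/dt = sum_i [ r_i(F_i s) P(F_i s) - r_i(s) P(s) ], where F_i flips spin i, and checks relaxation to the Gibbs distribution:

```python
import itertools
import math

# toy instance: a three-spin ferromagnetic chain (sizes and parameters illustrative)
N, J, beta = 3, 1.0, 1.0
states = list(itertools.product([-1, 1], repeat=N))

def energy(s):
    # E(s) = -J * sum_i s_i s_{i+1} (open chain)
    return -J * sum(s[i] * s[i + 1] for i in range(N - 1))

def rate(s, i):
    # Glauber rate for flipping spin i: (1/2)(1 - s_i tanh(beta h_i)),
    # which satisfies detailed balance with respect to the Gibbs measure
    h = J * ((s[i - 1] if i > 0 else 0) + (s[i + 1] if i < N - 1 else 0))
    return 0.5 * (1.0 - s[i] * math.tanh(beta * h))

def flip(s, i):
    t = list(s)
    t[i] = -t[i]
    return tuple(t)

# Euler integration of dP(s)/dt = sum_i [ r_i(F_i s) P(F_i s) - r_i(s) P(s) ]
P = {s: 1.0 / len(states) for s in states}      # uniform initial condition
dt, T = 0.01, 300.0
for _ in range(int(T / dt)):
    dP = {}
    for s in states:
        gain = sum(rate(flip(s, i), i) * P[flip(s, i)] for i in range(N))
        loss = sum(rate(s, i) for i in range(N)) * P[s]
        dP[s] = gain - loss
    for s in states:
        P[s] += dt * dP[s]

# the dynamics should relax to the Gibbs distribution
Z = sum(math.exp(-beta * energy(s)) for s in states)
err = max(abs(P[s] - math.exp(-beta * energy(s)) / Z) for s in states)
```

For N beyond a couple of dozen spins this enumeration explodes as 2^N, which is exactly why one needs the kind of closure discussed in this talk.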
To be more concrete: these sigmas are discrete variables; since we are physicists, we take sigma equal to plus or minus one. And this is a spin-inversion operator, which is very important because it means that at every moment you flip just one spin, locally. We are going to treat this problem essentially on random graphs, very dilute graphs. Why dilute graphs? Because essentially we do not yet have very good approximations for this kind of continuous-time dynamics of discrete spins; that's what we want to build.

Now I will sketch the derivation of this cavity master equation, which starts very easily, by first writing down what is known as the local master equation. That is a simple textbook exercise in which you trace over N minus 1 variables, and instead of that huge, beautiful equation for the dynamics of the whole set of variables, you get the dynamics of just one variable, sigma_i. Now you have many equations, one for each variable, which is not a big deal. The big problem is that this is not a closed equation, because it contains the joint probability distribution of sigma_i and its neighbors. The whole problem is how you treat this joint distribution: how to propose a closure that is meaningful, interpretable, and gives reasonable results. That is essentially the big problem in the field. So this is our approximation, the way we are going to work: we take this joint probability distribution and write it as a conditional probability multiplied by a one-point probability; up to there, everything is exact. And then we make the approximation: we say that this conditional probability can be factorized.
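In formulas, the local master equation and the closure just described read schematically as follows (my transcription of the step, not the slide itself; here ∂i denotes the neighbors of i):

```latex
% local master equation: the full master equation traced over all spins but sigma_i
\frac{\partial p(\sigma_i,t)}{\partial t}
  = -\sum_{\sigma_{\partial i}} r_i(\sigma_i,\sigma_{\partial i})\, p(\sigma_i,\sigma_{\partial i},t)
    +\sum_{\sigma_{\partial i}} r_i(-\sigma_i,\sigma_{\partial i})\, p(-\sigma_i,\sigma_{\partial i},t)

% exact conditional split, followed by the factorization approximation:
p(\sigma_i,\sigma_{\partial i},t)
  = p(\sigma_{\partial i}\mid\sigma_i,t)\, p(\sigma_i,t)
  \approx p(\sigma_i,t)\prod_{j\in\partial i} p(\sigma_j\mid\sigma_i,t)
```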
The intuition at this point is more or less the following. You have spin i here and all its neighbors here, and what you say is that the conditional probability of each neighbor, given that spin i is fixed, is independent of what happens at the other neighbors: all the conditional probabilities become independent once spin i is fixed. This is, of course, not necessarily true for the dynamics, but let's take it as an approximation for now. If you do this, it is very easy to get this equation here; you just plug this in there. But you still haven't solved anything, because you still have the problem of how to estimate this conditional probability distribution, and that is what we do next.

On top of that, there was already work, around 2014 I think, that tried to solve the dynamics of similar problems using a kind of generalization of the belief propagation equations. The idea is that instead of taking the value of a spin at a given time, you consider the whole trajectory of the system. Imagine you have a graph and the whole trajectory of the dynamics of each spin: x_1 is spin 1, up and then down; x_2 is spin 2, down, then up for some time, then down again, et cetera. What you get, and you can demonstrate this quite convincingly, is a sort of belief propagation equation for these trajectories. This is by now established science. The point is that these beliefs connect the history of spin j with the condition on the history of spin i; this is more or less well known. Inspired by that, what you can do, and prove rigorously by physics standards, is that this equation implies this equation for a dynamical problem in continuous time.
So from here, under very general assumptions that I would call rigorous, you can derive this equation, in which you have a sort of cavity probability for spin i given the whole history of the variable j. And still we haven't solved the problem, because we wanted something in which spin i is connected to spin j, not to the whole history of spin j up to a given time. On the other hand, you have this probability here, which is again the joint probability of spin i and its neighbors given the history of x_j. If you accept this, then you make another approximation, and this is the stronger one, the place where we can improve in the future, although not in this work. Similar to the intuition before, you rewrite this expression without the j, and then you say that this piece here can be factorized; this is what you get. Then you argue that this factor should not depend on x_j: if you go back to the picture, this is i, this is j, and this is k, and what happens at k, once i is fixed, should not depend on the history of j, so this dependence goes away. That is not an additional approximation once you accept the factorization. The last step is to say: instead of keeping the full trajectories, I make the approximation that only the last state of the variables is relevant. Then you get the equation I am showing you here, which is very nice, because now you can close the set of equations: you have only conditional probabilities, and this is something you can in principle solve numerically. So let me summarize.
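Putting the pieces together, the closed equation for the cavity conditionals has schematically the following form (a sketch from memory, consistent with the factorization described above; ∂i∖j are the neighbors of i other than j):

```latex
\frac{d\, p(\sigma_i \mid \sigma_j, t)}{dt}
  = -\sum_{\sigma_{\partial i\setminus j}} r_i(\sigma_i,\sigma_{\partial i})
      \prod_{k\in\partial i\setminus j} p(\sigma_k\mid\sigma_i,t)\; p(\sigma_i\mid\sigma_j,t)
    +\sum_{\sigma_{\partial i\setminus j}} r_i(-\sigma_i,\sigma_{\partial i})
      \prod_{k\in\partial i\setminus j} p(\sigma_k\mid-\sigma_i,t)\; p(-\sigma_i\mid\sigma_j,t)
```

One such equation per directed edge (i, j) gives a closed set that can be integrated numerically.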
You have this equation, which we call the cavity master equation, that you can derive up to a certain point from rigorous observations; to continue, you have to make some approximations that we more or less know how to improve in the future. Once you have solved it, the only thing you need to do to compute the actual probability you are usually interested in is to plug this solution into this equation, which is the local master equation under this approximation, and solve it.

All right, now that we have this closure, we want to see how it works, and the intuition was: let's try it on the simplest possible model. The simplest possible model has two-spin interactions; it is the physical model you want to understand, with this kind of Hamiltonian here. So what we do is run a Monte Carlo simulation of the dynamics and compare how well our equation describes it. We first tried a ferromagnetic model, and then we went to a Viana-Bray model, which is of course more complex, as we will see immediately. This is the kind of experiment I will show you a couple of times, so let's take a few seconds on it. We start with a sample in a very magnetized state, then we quench to a given temperature and watch how it relaxes. If you quench to a very low temperature, the system follows this kind of curve and stays at the magnetization corresponding to the temperature you are working at. If instead you quench to a very high temperature, you expect the system to go to zero magnetization, and indeed this is what happens. And of course, near the critical temperature, the system more or less goes crazy.
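The quench protocol just described is easy to reproduce on a small scale. The following sketch (Python; system size, mean degree, temperatures, and number of sweeps are illustrative choices of mine, not the values from the talk) runs single-spin-flip Glauber dynamics on a dilute random graph, starting fully magnetized, once below and once above the critical temperature:

```python
import math
import random

random.seed(1)
N, c = 500, 3.0                        # dilute Erdos-Renyi-like graph, mean degree c
neigh = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < c / N:
            neigh[i].append(j)
            neigh[j].append(i)

def quench(beta, sweeps):
    # start from the fully magnetized state and relax at inverse temperature beta;
    # one sweep = N attempted single-spin flips, as in the talk only one spin
    # changes per elementary move
    s = [1] * N
    for _ in range(sweeps * N):
        i = random.randrange(N)
        h = sum(s[k] for k in neigh[i])
        # Glauber flip probability (1/2)(1 - s_i tanh(beta h_i))
        if random.random() < 0.5 * (1.0 - s[i] * math.tanh(beta * h)):
            s[i] = -s[i]
    return sum(s) / N                  # final magnetization

m_cold = quench(beta=2.0, sweeps=200)  # deep in the ferromagnetic phase
m_hot = quench(beta=0.2, sweeps=200)   # above the critical temperature
```

Below the transition the magnetization stays close to its ferromagnetic value; above it, it relaxes to zero, which is the pair of curves the plots in the talk compare against the cavity master equation.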
We have here three colors, one per temperature; points come from the Monte Carlo simulation of the magnetization, and the lines are the cavity master equation. As you see, the picture is quite reasonable, but if you look at the error in the local magnetization, the error averaged over spins, this is the kind of plot you get. At very high temperature the error is very small, with a small maximum at short times; at very low temperature you again get this kind of curve; and of course near the critical temperature the error is bigger, and then it relaxes at large times. If you now take these peaks for different temperatures, this is the picture you get. Forget for now about the green curve; the red curve is the important one, the maximum over time of the error in the magnetization. What you get is that at high temperature the error is small, at low temperature the error is small, and at intermediate temperatures the error is large, which is what you expect for a system with a second-order phase transition: the dynamics near the transition is more complex and of course we cannot describe it very well. This is more or less the picture. Then we repeated the same experiment, but instead of the ferromagnetic model we used a Viana-Bray model, which is essentially the same network, except that on the links, instead of ferromagnetic couplings, you have plus or minus one interactions. The model is much more complex, and this is the kind of picture you get.
Again, this is the same dynamics: at high temperature you find very pretty results for the error, and of course as soon as you enter the glassy phase everything is a mess, so it doesn't work anymore. But almost nothing works in the glassy phase; if you want to study the dynamics there, almost nothing works. So we said: all right, the glassy phase is hard; let's move to a model that we know is ferromagnetic but still has a glassy phase, and that is interesting by itself. This is the multi-spin interaction model, the p-spin model; here in particular we use p equal to three. I think Lenka more or less talked about this model; for those who are not physicists, let me remind you of the picture. This is the general phase diagram of the model, from a paper by Federico about twenty years ago; we are getting old. The picture is more or less the following. Forget about the diagram for now; we are going to talk about essentially four temperatures. This is a spinodal temperature: if you come down from very high temperature, where you have a paramagnetic system, at some point the ferromagnetic solution appears, and at a second temperature it becomes stable. Then there is a kind of dynamical temperature below which metastable states dominate the dynamics of the system. And there is a fourth temperature, called the Kauzmann temperature, where we expect the glassy phase to dominate also the thermodynamics of the system.

Now let me describe what is in the plot. The lower branch of the curve is obtained by increasing the temperature. You start from the ferromagnetic solution and increase the temperature slowly, and what you see is that you stay in the ferromagnetic state all the time: here the ferromagnetic state is stable, and in this zone it is no longer stable, but you are going so slowly that the system still stays in that minimum. Then at some point, which is this spinodal transition, you jump to the paramagnetic state. Now you decrease the temperature instead; this is a Monte Carlo simulation, if you like, of that picture. You start from the paramagnetic state and stay on the paramagnetic branch, because although the ferromagnetic state is already there, it is not yet stable. You would expect a transition here, and you do not find it: metastable states start to dominate the dynamics, and at some point you find that no matter how slowly you cool the system, it never reaches the ferromagnetic state; it gets trapped in the glassy state. So although the system has a ferromagnetic ground state that is very nice and very simple to prove, the dynamics gets trapped in the glassy state, and for many years it was considered, I mean, it is still considered, a kind of paradigm of a system with a very nice ground state that you cannot reach using local stochastic algorithms.

So let's see what happens if we apply our dynamics, our set of equations, compared to the dynamical, stochastic equations for this kind of model. Of course the model is now a bit more complex, because instead of pair interactions we have a factor-graph interaction, and we need to re-derive our set of equations; but that is, if you want, a mathematical exercise, once you know how to do it properly. This is the equivalent set of cavity master equations for a system in which the interactions are more general than pairwise. It's a mess; don't worry about it. The dynamics is defined by this transition rate: for Glauber dynamics you put a hyperbolic tangent; for Metropolis you put the minimum between one and an exponential. The dynamics is just defined there.

And this is the plot. Again, points describe the Monte Carlo simulation and lines describe the dynamics of the cavity master equation, with more or less the same protocol as before: we start from a system in the ferromagnetic state and then quench to different temperatures. If you quench to a very low temperature, you stay in the ferromagnetic state, and on this purple line it is essentially impossible to distinguish between the cavity master equation and the Monte Carlo simulation; in fact, if you look at the error, you barely see it, there is something purple here that I didn't even remember when I was preparing the talk, and that is the error. Then, of course, as you go to higher temperatures you start to see differences, and what is interesting is that this difference shows up essentially at the spinodal transition temperature, where of course the error can be large. Our dynamics, the cavity master equation, is faster than the actual dynamics of the model above this spinodal transition. But what is more interesting, what surprised us, is that independently of that, both reach the same equilibrium solution: the dynamics may be a bit different, but in the long run they arrive at the same equilibrium.

All right, let's see whether we can describe, with this kind of dynamics, the equilibrium phase diagram we saw before. These are the solutions: again, points are simulations and lines are the solution of the cavity master equation. The intuition is the same: you start from a ferromagnetic solution and slowly increase the temperature, very slowly, for the cavity master equation, meaning that you run the dynamics slower and slower, and the curve is this one; as you see, it matches essentially perfectly the Monte Carlo simulation.
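As an aside, the two transition rates just mentioned, Glauber versus Metropolis, fit in a few lines, and both satisfy detailed balance, r(dE)/r(-dE) = exp(-beta dE), which is what connects either dynamics to the same Gibbs equilibrium (a generic sketch, not code from the talk):

```python
import math

def glauber_rate(dE, beta):
    # Glauber: r = 1 / (1 + e^{beta dE}) = (1/2)(1 - tanh(beta dE / 2)),
    # the hyperbolic-tangent form mentioned in the talk
    return 1.0 / (1.0 + math.exp(beta * dE))

def metropolis_rate(dE, beta):
    # Metropolis: r = min(1, e^{-beta dE})
    return min(1.0, math.exp(-beta * dE))
```

Here dE is the energy change of the proposed single-spin flip; plugging one rate or the other into the same set of equations selects the dynamics.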
And then you do another thing: you go to the ferromagnetic solution and start to cool the system down very slowly. Remember that when we started, we expected to reproduce the stochastic dynamics, so when you cool down you expect the equation to get stuck. It follows all these solutions down to here, but at some point it does not get stuck in the glassy solution anymore: it keeps going down, and the slower you go, the further down it goes. So what you find is that this dynamics is wrong, it does not describe what it is supposed to describe, but it is wrong in the right way, in the sense that now you have a dynamics that allows you to go deep into the glassy phase. That surprised us, again in the good way, because now we can sell a new product to the community: we have something that goes in the right direction.

It is a bit more subtle than that, because what the equation is actually finding here is not the ferromagnetic state, and this is a non-trivial point. Why does the p-spin model have this glassy phase? The intuition is that, to minimize the energy, each plaquette needs the product of its three spins to be plus one: either all three spins up, or two spins down and one spin up. The energy of these configurations is exactly the same, because only the product of the three spins matters. When you run a local Monte Carlo, the intuition is that all these low-energy configurations essentially block one another, so the system does not know which configuration to choose. What we are doing here instead is a dynamics that gives, macroscopically, the same energy, but averages over all these solutions: if you look at the probabilities, a plaquette has the same probability of being three spins up as of having two spins down and one spin up. It actually sits on the ground state of the system; it is not that it finds the ferromagnetic state, but rather that it averages over all the possible solutions in the ground state. And that is something we will see later is again an advantage of our algorithms.

At this point we said: very nice, we understand what is happening; let's explore it anyway. So we made the following experiment: we sit at very low temperature and fix a growing fraction of the spins in the ferromagnetic direction, and the question we want to answer is whether our algorithm is better than the usual Monte Carlo at finding the ground state. These are the results: on the right you have Monte Carlo for different sizes, and here you have our kinetic equations. The idea is the following: at very low temperature you should in principle be in the ferromagnetic state, but if you let the dynamics run without fixing anything, it stays away from the ferromagnetic state, even at a temperature where the ferromagnetic state is the ground state. So let's fix one spin and see what happens: nothing, the system is still not in the ferromagnetic state. Then you fix more and more spins, and this is the kind of curve you get from Monte Carlo: essentially, when you fix around 30% of the spins, the system is able to reach the ferromagnetic state. If instead you use our dynamics, with half the fraction needed by Monte Carlo you already reach the right ground state of the system. So again: it is a wrong dynamics in the glassy phase, but wrong in the right way if you want to study what is happening there.

Now that we have something that works this way in the glassy phase, let's look at a more complex problem, and we move to the dynamical algorithms that were used in the past to study NP-hard combinatorial optimization problems, in particular 3-SAT. Let me remind you of the problem. This is a 3-SAT formula: in principle you have N variables, here N is five, but in general N is as large as your computer or your algorithm can accept, and you have M clauses, which are these formulas within brackets. This kind of problem can easily be translated into an Ising-like problem by a change of variables, and this in turn translates into a Hamiltonian, a p-spin-like Hamiltonian with terms up to this order; the problem essentially becomes a spin model.

Then there is the thermodynamic picture that for many years was, within the community, considered more or less the correct one. If you are in the regime with many variables and few constraints, the problem can be solved easily: you have just one pure state in the system. If you have many clauses and few variables, so alpha is very large, the system is unsatisfiable. Both these regimes are simple. What is interesting is that the prediction for many years was that there is this dynamical transition, where many states appear, and that this is the place where algorithms get blocked. Later the picture was refined, and in between there are now, I think, two or three more transitions, something like that, but the general picture is that in this zone, or perhaps a bit further, you have the hard instances. That was the intuition, and it was in contradiction with some dynamical results, with algorithms that do not obey detailed balance. In particular there is focused Metropolis search, which is essentially a Metropolis algorithm in which, instead of picking variables at random among all of them, you pick variables only from the clauses that are unsatisfied; that is why it is called focused. Instead of a plain Metropolis, where you flip a randomly chosen variable with a given rate, you say: I will look only at the unsatisfied clauses, choose a variable there, and that is the one I flip. Below this red line it is essentially a Metropolis algorithm, and there is a parameter eta; I don't know why the authors called it eta at the time, but it is essentially e to the minus beta, a kind of measure of the temperature of the system. This is the kind of picture you get when you run focused Metropolis search on this problem. Here alpha_d and alpha_c mark where the system should in principle have hard instances: the intuition at the time was that no local algorithm would work in this zone. What happens in particular is that focused Metropolis search follows more or less this pattern: at high temperature it works up to here, then it fails, and at low temperature it fails here; but in a particularly well-tuned range of temperatures it essentially beats the dynamical transition. For many years there was a debate, and perhaps there still is, about whether these are finite-size effects. The proponents of focused Metropolis search said: no, the point is that we are out of equilibrium, we do not obey detailed balance, so in principle we do not have to respect the results you get from an equilibrium analysis of the problem. What is interesting is that now we have a new set of dynamical equations that can describe this kind of algorithm semi-analytically, so let's see what it gives us. This is a typical dynamics of the cavity master equation.
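The focused Metropolis rule just described can be sketched in a few lines. In this minimal version (Python; the clause encoding, function names, and the small formula in the example are my own illustrative choices, not the implementation from the original papers), a clause is a list of (variable, sign) pairs with sign +1 for x and -1 for NOT x, a spin sigma = +1 means the variable is true, and the energy is the number of unsatisfied clauses:

```python
import random

def fms(formula, n_vars, eta, max_steps, seed=0):
    """Focused Metropolis search sketch: pick a random unsatisfied clause,
    pick a random variable inside it, and accept the flip with probability
    min(1, eta**dE), where dE is the change in the number of unsatisfied
    clauses and eta plays the role of exp(-beta)."""
    rng = random.Random(seed)
    sigma = [rng.choice([-1, 1]) for _ in range(n_vars)]

    def unsat(c):
        # a clause is unsatisfied iff every literal in it is false
        return all(J * sigma[i] == -1 for (i, J) in c)

    def energy():
        return sum(1 for c in formula if unsat(c))

    for _ in range(max_steps):
        unsat_clauses = [c for c in formula if unsat(c)]
        if not unsat_clauses:
            return sigma                    # all clauses satisfied
        clause = rng.choice(unsat_clauses)  # the "focusing" step
        i, _ = rng.choice(clause)
        e_before = energy()
        sigma[i] = -sigma[i]
        dE = energy() - e_before
        if dE > 0 and rng.random() >= eta ** dE:
            sigma[i] = -sigma[i]            # reject the uphill move
    return None                             # no solution found in time

# tiny satisfiable example (illustrative)
formula = [[(0, +1), (1, +1), (2, +1)],
           [(0, -1), (1, +1), (2, -1)],
           [(1, -1), (2, +1), (0, +1)]]
sol = fms(formula, n_vars=3, eta=0.3, max_steps=100000)
```

The (variable, sign) encoding with sigma = ±1 is the same change of variables that turns a SAT formula into an Ising-like Hamiltonian: a clause contributes energy 1 exactly when all three of its literals are false.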
Essentially, you start at a very high energy, and this is for a particular value of eta. When alpha is small, the algorithm indeed finds solutions, so the energy goes to zero; for a given alpha that is large enough, the energy does not go to zero anymore. So in principle it more or less describes the expected behavior of the system. Then you want to compare this with Monte Carlo simulations, and this is the picture: lines represent the cavity master equation dynamics and points represent focused Metropolis search results. You see that for low alpha it reflects very well the behavior of focused Metropolis search, and of course it does not coincide exactly on where the transition is, at least for this value of eta. Then you repeat the same simulation for different values of eta and different values of alpha, and try to map out the phase diagram I showed you before, to see how well it compares. These are the results: the triangles are results for focused Metropolis search and the circles are results for the cavity master equation. You see that, all right, we do not get exactly the dynamics of focused Metropolis search; we get a transition from SAT to unSAT behavior a bit earlier, but we follow the trend very well. What is interesting is that in the interesting zone, for low values of eta, we describe very well what focused Metropolis search is doing, and with these dynamics we really get very near the SAT transition; in fact, you find here that you can do even better than focused Metropolis search. So what we are effectively trying to say is that this kind of non-equilibrium Metropolis can really give you a clue, a new opportunity, to find solutions of optimization problems that in principle, by equilibrium computations, should not be tractable.

And then I come to the general conclusions. First, the cavity master equation, I think, constitutes a good proxy to describe the local dynamics of stochastic algorithms with discrete variables. It is not the best one can do, and probably in the future we will do better, but at the moment I think it is quite a reasonable starting point to describe this kind of system: physical systems, but also algorithms, in particular algorithms designed to solve problems; I hope it will be useful in the future for the machine learning community. In particular, it is very useful to explore the equilibrium low-energy states inside glassy phases. Although we know that in the glassy phase it is in principle wrong if you want to describe the Metropolis dynamics, it is wrong in the right way, because it allows you to look deep into these glassy states, and that is the good point of this work. And with this, I want to thank everybody. That's it.