Actually, I decided to change the subject of my talk a little bit, to better match what Thierry was explaining this morning. Also because what I'm going to talk about now is, I think, very exciting in this field of non-equilibrium statistical physics. It's also simpler than the super technical KPZ stuff, so for the students who have come here to see the talk, I hope it's going to be a better choice. So what I'm going to talk about today is non-equilibrium fluctuations of one-dimensional particle systems. This is part of the PhD thesis of my PhD student, Octavio Menezes, from IMPA. I'm going to define a very simple model which already exhibits this feature of non-equilibrium fluctuations. It's a system which is very simple but doesn't have explicit invariant measures in the sense of Markov chains, and it's a non-reversible system in which these questions about how to deal with thermodynamics and fluctuations are also present. So maybe it's not the most natural model, but it's the simplest one on which we can already see what is going on. Just to repeat a little bit of the notation of the area, and maybe fix some different conventions: let's start with n, a natural number, which is going to be the scaling parameter of my system. I'm going to consider a family of Markov chains indexed by this parameter n. Let me call Lambda_n the discrete circle with n points, which we can identify with the set {1, ..., n}, and let me call Omega_n the set of binary sequences of length n. From the notation it's more or less clear that I'm going to consider periodic boundary conditions on my system, so I'm thinking about the circle of n points. Now on this finite state space I'm going to define a continuous-time Markov chain, which by now should be more or less familiar to you. So let me first draw these intervals, and, well, maybe crosses here.
So I have particles going around this discrete lattice, and these particles follow what we call the simple exclusion dynamics: they can jump left and right. The model is in continuous time, and the jump rate is going to be n². So I'm already introducing the diffusive scaling that Thierry was mentioning this morning. What happens here is that each particle tries to jump to the left with an exponential rate n², which means that the time scale is of order 1/n², and the same to the other side, independently for each particle. And there is the exclusion rule, which tells me that a jump onto an occupied site is forbidden. This is what we call the symmetric simple exclusion process, periodic; well, this is maybe some part of the interval, and we can see the periodic boundary conditions. On top of this dynamics I'm going to do something else. Because, as Thierry was mentioning this morning, this system has nice product invariant measures, and it's reversible with respect to them. So where is the non-reversible, non-equilibrium feature of the system? We have to do something to it. One thing we could do is attach reservoirs with different densities, but we can also do it in a translation-invariant way, by adding creation and annihilation of particles. So to this part of the dynamics I'm going to add an extra feature, which is the following. If a site x is empty at some time t, then with rate 1 we create a particle there. So the model is no longer conservative. And if the site x is occupied, then the particle is destroyed with a rate which is 1 + b·η(x−1)·η(x+1). The exact form of this factor is not very important, as long as it's different from 0 and it depends on the neighbors of the site x in some way. As this morning, I'm going to call η(x) the occupation number at site x of my Markov chain.
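As a concrete illustration of the model just described (my own sketch, not part of the talk; all names here are invented), a minimal Gillespie-type simulation of the dynamics on the ring: exclusion swaps at rate n², creation at rate 1 on empty sites, annihilation at rate 1 + b·η(x−1)·η(x+1) on occupied sites.

```python
import random

def simulate(n=20, b=1.0, t_max=0.1, seed=0):
    """Gillespie simulation: symmetric simple exclusion on a ring of n sites,
    sped up by n**2, plus creation (rate 1 at empty sites) and annihilation
    (rate 1 + b*eta[x-1]*eta[x+1] at occupied sites)."""
    rng = random.Random(seed)
    eta = [rng.randint(0, 1) for _ in range(n)]
    t = 0.0
    while True:
        # enumerate every possible transition with its rate
        events = []
        for x in range(n):
            r = (x + 1) % n
            if eta[x] != eta[r]:              # exclusion: only a particle-hole pair can swap
                events.append(('swap', x, float(n * n)))
            if eta[x] == 0:
                events.append(('create', x, 1.0))
            else:
                events.append(('kill', x, 1.0 + b * eta[x - 1] * eta[r]))
        total = sum(e[2] for e in events)
        t += rng.expovariate(total)           # exponential waiting time
        if t > t_max:
            return eta
        u = rng.random() * total              # pick an event proportionally to its rate
        for kind, x, rate in events:
            u -= rate
            if u <= 0:
                break
        if kind == 'swap':
            r = (x + 1) % n
            eta[x], eta[r] = eta[r], eta[x]
        elif kind == 'create':
            eta[x] = 1
        else:
            eta[x] = 0
```

Note that Python's negative indexing (`eta[x - 1]` with `x = 0`) implements the periodic boundary condition for free.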
So η(x) is equal to 1 if there is a particle at site x, and equal to 0 if there is a hole there. And you can check that this extra part of the dynamics destroys the invariance of the product measures. Maybe as an Easter egg for people who might be interested: you can check that these rates actually leave invariant the Gibbs measure of the Ising model that you saw this morning. That's not very important; what is important is that the product measure that was invariant for the exclusion process is no longer invariant, because of this factor. So this is going to be my sequence of Markov chains, parameterized by the scaling parameter n. As you can see, the creation and annihilation part is not accelerated in time: there is no factor n² or n in front of it. So this happens at a rate which is lower, in principle, than the rate at which particles jump from site to site. In a sense you can think about it as a perturbation, but it's not really a perturbation: it turns out that these two parts of the dynamics are comparable in size, in any meaningful way, whether you compare eigenvalues or any other statistics you want to look at. In this model you will see that both parts of the dynamics have a non-negligible effect on any observable of the system. So this is the model; I'm going to call it η_t^n, and in a moment I will start to drop the index n from the notation, because otherwise it would be present everywhere: everything depends on this parameter n. This is the Markov chain I want to study. It's irreducible on this finite state space, because now the number of particles is no longer conserved. So it has a unique invariant measure, but this invariant measure is very complicated; as far as I understand, nobody really knows anything about it.
We don't even have a Bethe ansatz framework here; there might be some mapping to a quantum spin system, but in this case it's not really meaningful, because it's not an integrable quantum spin system. So actually I'm coming from a point of view which is exactly the opposite of Malik's: I try to derive methods which are robust to the particular details of the dynamics. We want to do something which does not depend on the particularities of each model. Of course, the results we can get from that point of view are much weaker than the results you can get for integrable systems. But it's a complementary thing, because you have this phenomenon that you want to characterize: from one side you want to say as much as you can about it, and from the other side you want to show that the phenomenon is as universal as you can. We are working on the universality part, and with integrable probability you work on the fine description of the phenomena. So this is my setup: we have this Markov chain, we want to study it, and hopefully we want to prove something about non-equilibrium fluctuation questions. So let me make some definitions. c_x^+ and c_x^- are going to be the rates at which particles are created and annihilated by the dynamics. So c_x^+ is the creation rate: it's the rate 1 I wrote there, but you also need to have a hole at position x, so you have the extra factor: c_x^+(η) = 1 − η(x). And c_x^- is the same thing: the rate is 1 + b·η(x−1)·η(x+1), but you need a particle to destroy, so c_x^-(η) = η(x)·(1 + b·η(x−1)·η(x+1)). Let me also define μ_ρ, which is going to be the Bernoulli product measure with density ρ; if you want an explicit formula, it's something of this sort. And as I already remarked, this measure μ_ρ is not invariant under these dynamics. But in some sense it should be close to invariant, maybe.
[Audience:] Excuse me, I think c_x^+ and c_x^- are switched, probably. [Speaker:] Yeah, you're right, I switched them, so I should correct one of them. In my notes I think I have it like this, so probably it's better to correct them this way. Thank you. So another quantity which is meaningful for this model is the function F(ρ), which is going to be the average reaction rate of the system: the number of particles created, in average, minus the number of particles destroyed, in average, when the system is distributed according to the Bernoulli product measure μ_ρ. So F(ρ) is just the expectation of c_x^+ minus the expectation of c_x^- with respect to μ_ρ, that is, F(ρ) = (1 − ρ) − ρ(1 + bρ²). A quick observation: I wrote down a formula for the zero of F which is totally irrelevant, actually, but sometimes people like formulas. What is important is that there is some non-trivial density for which F(ρ) = 0. This is always true, because no matter what rates I put here and here, F is going to be equal to 1 at ρ = 0 and to −(1 + b) at ρ = 1. So it's positive at 0 and negative at 1, so it has to vanish somewhere in the middle. It might even vanish multiple times; actually, multiple zeros may be interesting for stability questions and things like that, but for the moment all we need to know is that there is such a density. So let me keep going with the definitions. Let me call f_t the density of the law of the process η_t with respect to μ_ρ. From now on I'm going to fix ρ, and ρ is always going to be equal to this number for which F(ρ) = 0.
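The sign-change argument just described can be checked numerically; this is my own sketch (not from the talk), using the rates as written, so F(ρ) = (1 − ρ) − ρ(1 + bρ²), and finding the root by bisection.

```python
def F(rho, b):
    """Mean creation rate minus mean annihilation rate under Bernoulli(rho):
    E[c+] = (1 - rho),  E[c-] = rho * (1 + b * rho**2)."""
    return (1 - rho) - rho * (1 + b * rho ** 2)

def stationary_density(b, tol=1e-12):
    """F(0) = 1 > 0 and F(1) = -(1 + b) < 0, so bisection finds a root in (0, 1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid, b) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For b = 0 the root is exactly 1/2, and for any b ≥ 0 it stays strictly inside (0, 1), which is all the argument needs.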
That particular point is interesting from the point of view of non-equilibrium statistical mechanics, because somehow the average rate of creation and annihilation is 0 there, so at that particular density the effect of the creation and annihilation is probably smallest. So we call f_t the density with respect to μ_ρ, and I'm going to assume that f_0 is equal to 1. That is, I'm going to start my system from the distribution μ_ρ, and I want to see whether the system stays there at later times or not. So I'm going to define a number H_n(t), which is just the relative entropy of the law of my Markov chain at time t with respect to this product measure μ_ρ. Okay, and this talk is going to be a little more mathematical than the previous one, so in particular I'm going to state a theorem. [Audience:] The density, you mean it's a function of x? [Speaker:] It's a function of η; it's a function on Ω_n. So I'm going to state a theorem, and the theorem is the following: for any time T > 0 there exists a finite constant C, depending on T, such that H_n(t) ≤ C for any time t up to T and for any n. So this is the theorem. Actually, this theorem is very useful: it's telling you a lot of things about your system, and it's also very surprising, because you see that Ω_n is a huge space, with 2^n elements, and for the entropy to stay bounded on such a huge space means that you're really close to the product invariant measure we were talking about at the beginning. So although the product measure is not invariant, it's very close to invariant: this is what the theorem is telling you. Moreover, take any statistic on your configuration space for which, under this product measure, you can prove a convergence theorem, a law of large numbers, maybe a central limit theorem.
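To get a feeling for what a uniformly bounded entropy buys on a state space with 2^n points: an O(1) entropy budget is exactly the cost of shifting the density of the Bernoulli product measure by 1/√n, which is the scale of central limit theorem fluctuations. A quick numerical check (my own illustration, not from the talk):

```python
from math import log, sqrt

def kl_bernoulli(p, q):
    """Relative entropy H(Ber(p) | Ber(q)) of a single site."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def tilt_cost(n, rho=0.5):
    """Entropy of the product measure with density rho + 1/sqrt(n),
    relative to the one with density rho, on n independent sites:
    n times the one-site relative entropy."""
    return n * kl_bernoulli(rho + 1 / sqrt(n), rho)
```

The cost converges to 1/(2ρ(1−ρ)) as n grows, e.g. 2 for ρ = 1/2: finite, no matter how large n is.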
This bound is not telling you that you can transport that convergence result to our system, but it's telling you at least that those statistics do have a limit. It's not necessarily the same limit, and actually it's not going to be the same, but they do converge; so it's some sort of relative compactness theorem. [Audience question about which limits coincide.] It depends on whether your statistic depends on time or not, and whether it grows fast enough with the system size. If you just look at a finite window, the limit certainly won't be the same, because finite entropy only tells you that the distribution is absolutely continuous with respect to the reference one. In some cases they will be the same, because of translation invariance or other arguments. But the point is that this is a very strong statement, and now I will try to convince you why such a statement should at least be reasonable. Let me mention some observations. I don't want to enter into the details of this, because it's very technical and there is no easy way to formulate it, but this family of processes has what is called a hydrodynamic limit, given by the following equation. The Laplacian part is not surprising, because it's exactly the same as for the simple exclusion process that Thierry described before, and now you have a reaction term F(u), which is also very reasonable, because F is just the average rate of creation and annihilation. What usually happens in these hydrodynamic limits is that, locally, the system is close to a product measure in any finite but large box: when n goes to infinity, if you fix a box of size, I don't know, 100, or maybe log n, inside that box things look like a product measure. Therefore, when you look at the density as a global object there is an averaging effect, and you get that the density, as a global object, evolves according to this PDE; the F there is the same function that is written above.
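Written out, the hydrodynamic equation being described is, with F the mean reaction rate defined above (the exact form of F depends on the rate convention; this matches the rates as stated):

```latex
\partial_t u \;=\; \Delta u \;+\; F(u),
\qquad
F(u) \;=\; (1-u) \;-\; u\,\bigl(1 + b\,u^{2}\bigr).
```

The diffusive term comes from the exclusion part of the dynamics and the reaction term from the averaged creation and annihilation rates.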
This result can be understood as a law of large numbers, and there is one particular solution of this equation which is interesting for us: u(x, t) ≡ ρ is a stationary solution of the hydrodynamic equation, precisely because F(ρ) = 0. The fact that ρ gives a stationary solution hints that, at least on the time scale n² on which the hydrodynamic equation appears, this product measure shouldn't evolve too much, and for this reason you may expect a result of this sort. Actually, this is something you can check. Say you want to change the global density of your Bernoulli product measure: large deviations theory tells you how to do it, you have this exponential of minus n times a rate function, and so on. What is important is that the large deviation principle tells you exactly that the entropy cost to change the density in a box of size, say, εn, so something that should be observable for the macroscopic system, is of order n. So you need at least order-n entropy to change the macroscopic density of your system. Therefore the fact that ρ is stationary tells you that the entropy should be little o of n, and this is something that can be proved. This is actually Yau's relative entropy method for hydrodynamic limits, which has become a well-developed part of the theory of interacting particle systems, and usually that's what you can prove. It's actually more general, because you don't need to start with a constant product measure: you can let the profile evolve in time. But the only thing I want to stress is that if you are satisfied with a law of large numbers, what you need to prove is that the entropy is little o of n; at the level of laws of large numbers for macroscopic observables, this is all you need. On the other hand, once you have a law of large numbers, a
natural question is about large deviations or central limit theorems. Large deviations, it turns out, is a simpler problem than central limit theorems here, for reasons I'll come back to. And you can check that if I give you a finite amount of entropy, I can actually change the CLT. For example, something I can do with this product measure is to change its density from ρ to ρ + 1/√n. That produces an entropy of order 1/n at each site, so summing over n sites you get an entropy of order one. And this shift by 1/√n allows me to change the mean, and also the variance, of the Gaussian random variable in the limit. So if you want to prove something like a central limit theorem, a bound like this theorem is actually the least you need, because with finite entropy I can already change the variance in the CLT. Therefore, if you believe that some sort of central limit theorem is true for these systems, then you can start to believe that this theorem might be true. And since for these systems large deviation principles have been proved, you are tempted to believe that the CLT is also true; from the heuristic point of view these results should be reasonable. [Audience question about deducing the CLT from the large deviation principle.] No, no, actually that is far from being true for probabilists. Something which is true is that if you are able to prove both, you can recover the variance by looking at the expansion of the large deviation rate function around zero. So taking the large deviation principle for this model and expanding around equilibrium, or around whatever point you are interested in, you will obtain the variance of the Gaussian process that you should obtain. But it's not true in general, and it's actually much more difficult to prove central limit theorems in the context of interacting particle systems than large deviation principles. And the
reason is related somehow to this KPZ business: the objects, the space-time processes that appear in the limit, are non-linear stochastic partial differential equations, which are very delicate and difficult objects. This is why the CLT is actually more difficult than large deviations here, which is counterintuitive, because when we learn probability we learn it the other way around, that large deviations are harder than the CLT. And what I like a lot about this result is that it relies on a general fact about Markov chains which has been somewhat overlooked, although everybody probably knows the underlying inequality. So now I'm going to enter a little bit into the proof of this theorem. From here you can see that I'm really a mathematician, because, okay, who cares about proofs, but I think this particular theorem has a very interesting proof, and we can learn something from the proof. That's what I like about proofs, when you can learn something from them; it's not that the proof by itself is the interesting object. So if you have a Markov chain, you take any reference measure, and you compare the relative entropy of the law of your Markov chain with respect to this reference measure. If you want to prove a bound like the theorem, the usual way to proceed is: take the derivative and prove that the derivative is bounded. This is what I'm going to do. You take the derivative and you start to estimate until you bound it by a constant. What you have, and this is totally general, is the inequality (d/dt) H_n(t) ≤ −D_n(√f_t) + ∫ L_n^* 1 · f_t dμ_ρ. So let me explain what these objects are, starting with the first one. This is what people call the Dirichlet form; in our case it's not really the Dirichlet form, because the measure μ_ρ is not the invariant measure, but it's still a positive quadratic form. For a function h, D_n(h) = n² Σ_x ∫ (∇_{x,x+1} h)² dμ_ρ + Σ_x ∫ (∇_x h)² dμ_ρ. I hope the notation is not too complicated; it shouldn't be. The gradient ∇_{x,x+1} h is a discrete gradient: the difference in the function h when I move a particle from x to x+1 or vice versa, so it's the change of h under one of the jumps of the exclusion part of the dynamics. And ∇_x h is the same but for the reaction part: how much the function h changes when I create or destroy a particle at x. So those are nice quadratic forms, and this turns out to be exactly the Dirichlet form in the case where b is equal to zero. Now I have to explain what L_n^* is, and that is very easy: it's just the adjoint of the generator in L²(μ_ρ). When the measure μ_ρ is invariant, the adjoint of the generator is again the generator of a Markov chain, so when you apply it to the constant function equal to one you get zero. So in the case where μ_ρ is actually invariant, the second term is zero, the first term is non-negative, and you recover something very well known: the relative entropy with respect to the stationary state of a Markov chain decreases in time, and its rate of change is bounded by the Dirichlet form, which you can also call Fisher information, or energy; I like to call it energy. But when μ_ρ is not invariant, you get something that might be increasing, and it actually has to be increasing at some point, because μ_ρ is not the invariant measure, and at the end, when t goes to infinity, you converge to the real stationary state, which is not μ_ρ, and the
entropy is different from zero. Well, here, since we have a finite Markov chain, this adjoint is very easy to compute, because you just have a matrix, and computing the adjoint is basically some sort of weighted transposition. This inequality is general: it's true for any Markov chain and any reference measure you put there, and sometimes it gives you useful information, sometimes not. The name of the game now is to choose as reference measure something as close as possible to what we believe the stationary measure should be. If you succeed in that, then you may get something which is not very big as L_n^* 1. In our case you can go and compute it; it's not very difficult, because everything is explicit, it's just a few lines of computation. You find that L_n^* 1 = Σ_x φ_x, where φ_x is the translation by x of a fixed local function φ. This is easy to understand: the dynamics is translation invariant, so you should get something translation invariant, and φ should be local because the dynamics is local. I have an explicit expression, and you can also write it in a slightly different form which is a little more natural, because it's actually what pops out of the computation, but it doesn't really matter. The point is that this object is what we call a quadratic function. What does that mean? Of course the expectation of this function with respect to the measure μ_ρ is zero. But imagine that I didn't take the right density, that I took another density ρ′ which is not ρ, and I compute the expectation of this function with respect to the product measure with density ρ′. If you do that, you will get, up to a constant that doesn't matter, the difference (ρ′ − ρ) squared. So the deviation, when you plug in something which
is not the right density, in the expectation of this function, is quadratic, and that's the key. Since this deviation is quadratic, we can get a very nice bound on this expectation. If it weren't quadratic, it doesn't work: you would only get something like a little o of n as a bound. And this fact, that the function is quadratic in that sense, indicates that you are really choosing the right reference measure in your computation. So now let me write a lemma, which is another very common move among mathematicians. The expectation there I can write as an integral with respect to f_t, because f_t is the density of the law of η_t. About the function f_t I don't have a lot of information; if I knew what f_t were, I would know everything. So you say, okay, let's forget about what f_t is, and let's see if we can prove something which is true for any density f. So the lemma is the following: for any δ > 0 there exists a finite constant C such that, for any density f, ∫ Σ_x φ_x f dμ_ρ ≤ δ·D_n(√f) + C·(1 + H(f)), where H(f) is the entropy. If you assume that the lemma is true, then the theorem is proved, because first we choose δ small enough to be compensated by the negative Dirichlet form term, and then you get something of the form (d/dt) H_n(t) ≤ C·(1 + H_n(t)). Now you use, for example, Grönwall, or just uniqueness of solutions of the corresponding ODE, and since H_n(0) = 0 you get H_n(t) ≤ e^{Ct} − 1, which is a constant for t up to T. [Audience:] But shouldn't this constant blow up in t, since the true stationary state is really different from μ_ρ? [Speaker:] Actually, yes, so
about the invariant measure: notice that if b is equal to zero, then μ_{1/2} is invariant, and it's not only invariant, it's also reversible. So you are comparing your non-reversible dynamics with a measure which is reversible for some dynamics that looks very close to the real one, but which is reversible. So there will be some observable that takes this non-reversibility into account, that tells you you're not really in a reversible situation, and this observable should be the current. Here there are no currents in the spatial sense, but you can use another notion of current, maybe a heat flow, something like that. Because H_n(t) only controls the distribution of your process at time t, and you expect this law to become singular with respect to the Bernoulli product measure as t goes to infinity, because you have this current that keeps evolving in time and creating the non-reversible features of the model. Actually, you can use this theorem to prove an actual fluctuation theorem, to prove convergence to some stochastic PDE, and for that stochastic PDE you can ask what happens at t equal to infinity, and there you start to see the non-reversibility issues. So this constant should effectively blow up in t. It shouldn't blow up too fast: we expect it to blow up polynomially in t; from Grönwall you just get exponential, but it should be just polynomial. But it does have to blow up in t, because of non-reversibility. And if you interpret this system as a particle system in contact with some chemical reservoir, you can talk about chemical currents, and then everything makes sense: that current is the one that will detect the non-reversibility of the model. Okay, so far so good, so
how much time do I have? Okay. So, if you accept that this lemma is true, then the theorem is proven. I think it's a very nice result, because it's really telling you that, at least at the scales we are interested in, this non-reversibility of the system is a smooth phenomenon: you develop the long-range correlations which are characteristic of non-reversible systems in a smooth way, when you start from something which is close to, let's say, an equilibrium setting. So here we are trying to show how systems go from this well-understood phase to the more complicated non-reversible one. So let me see if I can actually tell you a little bit about the proof of this lemma. What is really going on here, and this is a one-dimensional phenomenon, is that each time you have a local observable of a nice interacting particle system which is quadratic, in the sense I described before, you can show that this local observable is very well approximated by the square of the density of particles in a box of mesoscopic size. This is something we call the second-order Boltzmann-Gibbs principle, which we introduced with Patrícia Gonçalves in the context of the KPZ equation. What we need here is not exactly the same statement, but it's close in spirit, and it can be proved in a very similar way, at least in spirit. It tells you the following; let me see if I am missing some constants somewhere: there should be a square root of ℓ normalizing the density in the box, and factors ρ(1 − ρ), the variance of a single occupation variable, in front of the noise terms. Okay, so if you do this, then
you see what is bounded by what. The main idea, and this is something that happens very generally in one-dimensional systems, is that when you have a quadratic function, in this sense here, then you can approximate it very well by the square of the density of particles, properly normalized, and the price you pay to do that is the Dirichlet form. This is what we call the energy estimate in the work with Patrícia. From here you see that we are almost at the end of the lemma, because once you are here, you just need to prove that the remaining term can be bounded by the entropy, and this is just the entropy inequality, with the extra ingredient that the density of particles in a big box is very close to a Gaussian distribution. So once you have understood that this kind of local function can be approximated by the square of the density of particles, this kind of result is very reasonable. So this is more or less how you can prove such a theorem, but it also hints at how you can obtain more refined results. Because what this inequality is actually telling you is that, at the level of the macroscopic evolution, you can approximate any local function of your system by a combination of a linear and a quadratic function of the density of particles. And let's say that what appears is not purely quadratic, that there is also a linear term: the linear term is fine, because it's already the density of particles. So if you now go back to the beginning and you try to prove some sort of fluctuation result associated to the hydrodynamic limit, what you need to understand is how the evolution of the density of particles behaves as the scale of your system goes to infinity. So, in principle, since we are talking about a huge Markov chain with
2^n states, the density of particles, which is, roughly speaking, of order n variables, because you can imagine that we take blocks of size 100, compute the average density in each block, and keep track of those numbers, will not describe every observable of your system. But once you understand that any local observable of your system can be expressed in terms of the density, you can try to obtain a closed equation for the density of particles. And this closed equation, at the level of fluctuations, in one dimension, will involve two elements, a linear part and a quadratic part, because of these lemmas. And when you have a linear and a quadratic part, there is only a finite set of equations that you can obtain in the limit: you have basically two possibilities, either the linear stochastic heat equation or the KPZ equation. So once you have proved this theorem about the entropy, complemented with the method of proof, you see that you can actually try to tackle the problem of what happens with the relevant macroscopic observables of your Markov chain in the limit as the system size goes to infinity. Okay, well, I think I will stop here. Thank you very much for your attention. Any questions? [Audience:] Can you extract some result on the fluctuating hydrodynamics, now that you control the entropy quite well? [Speaker:] A priori, that depends on the level of rigor you want, because it depends on the level of tightness you have. So let's say that you can prove tightness in some topology which is nice enough for your process. Then, in that case, what you can prove for this system, for example, is that when you do the natural scaling of the density of particles, so you define some fluctuation field X_t^n, and you have to use test functions, because at the level of fluctuations this object has the
bad taste of being a distribution. So this is what it is, you have to do it like that: you test against some smooth function, and you get that in the limit this field is a solution of the following equation. In the case I'm discussing here there will be a linear drift, and then some noise. There is a white noise coming from the exclusion part, with a factor √(ρ(1−ρ)) in front of a gradient, and then there is a second noise, coming from the creation and annihilation, which carries in front of it a constant g(ρ) related to the rates of creation and annihilation. So this is what you can prove for this particular system: the fluctuations evolve in this way. Notice that the equation contains the number F′(ρ), and F′(ρ) can be either positive or negative. At this scale it doesn't matter, because the equation is well posed for any finite time t. But when you send t to infinity, this equation will converge to an equilibrium measure if and only if F′(ρ) is non-positive; if F′(ρ) is positive, the solution will start to blow up exponentially in time. In that sense you can see that you cannot do better than that. On the other hand, when F′(ρ) is negative, you expect to be able to prove that the constant in the entropy bound doesn't blow up too fast in time. So it depends on the situation. And somehow the entropy bound has to be sensitive to what happens when t is big, because it's something that depends on the whole distribution of the Markov chain.
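For the record, the limiting equation for the fluctuation field sketched in this answer has the following shape (a reconstruction on my part: the precise noise coefficients were only partially spelled out; √(2ρ(1−ρ)) is the standard conservative-noise strength for the exclusion part, and g(ρ) collects the creation and annihilation rates):

```latex
dX_t \;=\; \Delta X_t\,dt \;+\; F'(\rho)\,X_t\,dt
\;+\; \sqrt{2\rho(1-\rho)}\;\nabla\, dW_t \;+\; g(\rho)\,d\widetilde{W}_t,
```

where W and W̃ are independent space-time white noises; the gradient noise is conservative (it comes from particle jumps), while the second noise is not, reflecting the non-conservative reaction part.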