So first of all, thanks a lot for the invitation. If I understood correctly, the audience has very diverse backgrounds, from mathematics and from physics; some are at an early stage of their research, some are much more advanced. So I'll try to give an overview, and feel free to ask questions or interrupt. The theme is to explain how the theory of large deviations, which is mainly a mathematical theory, has been used in the context of non-equilibrium statistical mechanics. I will show only a few examples in order to illustrate the connection between the two theories, and these examples will be detailed later in the afternoon by Kirone Mallick and Milton Jara. So let me start by telling you, or reminding you, what large deviations are. In the second part, since I want to talk about non-equilibrium, and non-equilibrium is a relatively vast subject, I'm going to focus first on the case of equilibrium statistical mechanics and remind you of a few aspects of the Ising model. And in the third part, we shall turn to non-equilibrium statistical mechanics and to some specific stochastic dynamics. So this is the plan of the talk, and I'll start with large deviations. Large deviations are about understanding the statistical properties of random variables. You know that if you start with a collection of random variables X_i which are independent and identically distributed — I shall write i.i.d. in the rest of the talk — and you look at the average, the sum over i from 1 to n of X_i divided by n, then by a very famous theorem this converges almost surely to the expectation when n goes to infinity. Let's suppose that this mean is equal to 0, and let's look at a slightly different renormalization, dividing the sum by the square root of n instead. Then again there is a very famous theorem in probability, the central limit theorem, which says that this converges in law to a Gaussian variable with variance given by the expectation of the square. The interpretation is that when you look at the distribution of this random average, you see something peaked close to 0, the mean, and the distribution concentrates closer and closer to 0 as n grows. Of course, n is finite, so you have small fluctuations around the mean, and these fluctuations are of order 1 over the square root of n. That is the content of these two theorems: the law of large numbers and the central limit theorem. The question we'd like to address is: can one quantify the fluctuations which are not close to the mean, not close to 0? What can we say about the probability that this random average goes beyond some value x far from the mean? Observing such a deviation is indeed a large deviation, because the average is absolutely not supposed to land on the segment above x. It's actually very easy to quantify, and I'm going to write out the proof, so you have an idea of the tools used to understand this. This probability is nothing but the probability that the sum over i from 1 to n of X_i, minus n x, is larger than or equal to 0. So far, so good: it's an equality, there's nothing new there.
And I can even multiply it by some value lambda, provided lambda is non-negative; it's still the same event, it's an identity. And the probability that some variable is non-negative is the same as the probability that the exponential of this variable is larger than or equal to 1. So understanding that is just the same as understanding this. And now there's a small inequality which you may or may not know: you can bound the probability that this exponential variable is larger than 1 simply by its expectation. So this is less than the expectation of the exponential of lambda times the sum over i from 1 to n of X_i, times the constant exponential of minus n x lambda. This is the inequality known as Chebyshev's inequality (often also called Markov's inequality). And once you're there, you use the fact that all your variables are independent: the exponential of the sum factorizes. So I write the exponential of minus n x lambda, times the expectation of the exponential of lambda X_1, to the power n — just because I use the i.i.d. property, the fact that they are identically distributed and independent. And now it's time to introduce a notation which we're going to see several times during this talk: let's denote by phi of lambda — in fact a small modification of this quantity — the log of this expectation. I can now rewrite the statement as follows: the probability that 1 over n times the sum over i from 1 to n of X_i is larger than or equal to x is less than or equal to the exponential of minus n times (lambda x minus phi of lambda), just with this new notation. So far so good. I've been able to do that for any lambda, so in particular I can take the optimum, the best value of lambda, in order to have the best inequality. This leads to defining phi star of x as the supremum over lambda positive of lambda x minus phi of lambda. Since the bound is true for any lambda, which was the free parameter over there — and I shall come back later to the role of lambda, because it's a very important parameter — I can optimize over lambda and say that this probability is less than the exponential of minus n phi star of x. So what did we learn from that? Maybe I should plot the graph of what phi star typically looks like. Here the mean of the variables is 0, so phi star will be 0 at 0 — you don't expect anything fancy there — and phi star will be strictly positive outside 0, and possibly not symmetric; I don't know exactly what the shape will be. In particular, if x sits here, phi star of x is strictly positive, which tells me that this probability decays at least exponentially fast when n goes to infinity. So it's very rare to observe these kinds of events, and the probability decays exponentially fast. This is what we call a large deviation. It turns out that we can actually take the supremum over all lambda in R; it doesn't matter, so I will keep that. And that's about it for the proof. Now we can be a little more precise. I just derived the upper bound in a few lines; it's not much harder to derive a lower bound and to prove that the probability that 1 over n times the sum over i from 1 to n of X_i is larger than or equal to x is of the order of the exponential of minus n phi star of x, up to a correction which is small with respect to n in the exponent — I don't want to quantify it.
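As a worked instance of these formulas — an addition, not from the talk — take Rademacher variables, $X_i = \pm 1$ with probability $\tfrac12$ each, so the mean is 0 as assumed above. Then everything is explicit:

$$\varphi(\lambda) = \log\mathbb{E}\big[e^{\lambda X_1}\big] = \log\cosh\lambda, \qquad \varphi^*(x) = \sup_{\lambda}\big\{\lambda x - \log\cosh\lambda\big\} = \frac{1+x}{2}\log(1+x) + \frac{1-x}{2}\log(1-x)$$

for $|x|\le 1$, and $\varphi^*(x) = +\infty$ for $|x| > 1$: the average simply cannot go beyond 1. And here is a minimal numerical check of the statement, a sketch using these hypothetical Rademacher variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(lam):
    # phi(lambda) = log E[exp(lambda X_1)] = log cosh(lambda) for X_1 = +/-1
    return np.log(np.cosh(lam))

def phi_star(x, lams=np.linspace(0.0, 10.0, 10_001)):
    # numerical Legendre transform: sup over lambda of { lambda x - phi(lambda) }
    return np.max(lams * x - phi(lams))

n, x, trials = 100, 0.3, 1_000_000
heads = rng.binomial(n, 0.5, size=trials)       # number of +1 variables among n
empirical = ((2 * heads - n) / n >= x).mean()   # Monte Carlo estimate of P(S_n/n >= x)

print("-(1/n) log P, empirical:", -np.log(empirical) / n)
print("phi_star(x)            :", phi_star(x))
# the two agree up to a subexponential correction that fades as n grows
```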
So, through this upper bound, we got the leading term: the probability vanishes exponentially fast in n, and we actually got the correct rate, which is phi star. Now, phi star could be plus infinity, and phi star could sometimes be 0; I didn't put in many assumptions. For instance, if your random variable is non-negative, you can't reach negative values, and phi star is plus infinity there. Are there any questions? OK, so this equivalence, in terms of large deviations, is usually written in the following way, which means the same thing: we take 1 over n times the log of the probability that 1 over n times the sum over i from 1 to n of X_i is larger than or equal to x, and we take the limit when n goes to infinity; what you get is minus phi star of x. It's just a way to encode that we capture the dominant term at the exponential level: the o(n) correction in the exponent vanishes after dividing by n. In general, we can do much better than computing the probability of being above some value x. We can ask: what is the probability of belonging to a certain set, which may have several pieces — I can be here, I can be there? Let's call this set A, a subset of R. The large deviation statement can be strengthened: the probability of belonging to the set A, not necessarily an interval, behaves in the same way, except that the rate is now the infimum of phi star over all x in the set A. In the previous example it was easy to understand: the probability of being above x was given by the infimum of phi star over that set, and since phi star is convex, the infimum was attained at x itself. It's quite easy to interpret: if you want to see something atypical, it's going to cost a lot, so if you want to be beyond x, there's no point in looking for an event which is even rarer; you just stop at phi star of x, which is the cost of realizing the deviation at x. And in general, for a set A, you have this statement. [Question] Do you need assumptions on the law, or is it perfectly general? It's relatively general, but you might need some assumptions to avoid empty statements — for example, if you don't have exponential moments, the theorem is void; there's not much content. So far, so good. This is the type of statement large deviation theory provides, and it is the most standard statement, for i.i.d. variables; we would like to extend these statements to more general settings. As a practical matter, this kind of prediction, as you can see, tells us that the probability vanishes exponentially fast. So if n is large — if you want to apply this to statistical mechanics, with n of the order of 10 to the power 23 — it's essentially useless: you will never see these kinds of events. So there's not much reason, besides mathematical purposes, to look at large deviations. To have a concrete experiment, since this is a mathematical physics seminar: you can look at this elastic band and think, OK, I'm going to observe a large deviation, and the large deviation will be the fact that it spontaneously stretches.
It may happen that all the atoms in this room, for some reason, decide to have a collective motion, suddenly bump right there, and you observe this elastic band stretching. It turns out that this may occur, but you have to be very patient: the most patient of you, waiting perhaps many ages of the universe, might see it, but otherwise you won't. That is essentially what the probability of these rare events quantifies — it essentially never happens. So the question is: why are we doing this, besides the fact that it's a mathematical statement? One thing I would like to convince you of today is that if I wait for this band to stretch, I may not observe anything; but on the other hand, I can pull it — and if I pull it, it does stretch. So somehow this rare event is related to some force: I can provoke the exceptional event by adding a force and measuring something. And that is more or less where large deviations connect to real problems of non-equilibrium statistical mechanics: maybe there are certain things we can measure. Another point of view on relating the large deviation function phi star to real quantities is the following remark, which we shall use later. At least in this case, one can prove that phi is a convex function — you might need some assumptions — and then phi star is also, by construction, convex, and you have the relation that phi of lambda, the function we started from, can be recovered from phi star by Legendre transform. So in a sense, we may have looked at very rare events, but we have lost no information, at least for these independent random variables where phi star is a very reasonable function: if I have phi star, I can go back to the Laplace transform of the random variables, which we know contains a lot of information. So maybe we are looking at rare events, but this also encodes some important information about the actual system. The next part will be about the Ising model. I want to give this simple example from equilibrium statistical mechanics in order to explain how large deviations have been used, and why they are relevant, in this case; it's just a motivation before using them in a different framework. The Ising model is the following model, which is supposed to model all sorts of things, but in particular the phenomenon of ferromagnetism. Consider a grid with n sites on each side: I'll call it Lambda_n = {1, ..., n}^d. Here d will be at least 2, and equal to 2 on the board, because otherwise it's hard to draw. On each site i, I have a spin sigma_i, which takes the values plus or minus 1, and it is associated to the magnetization of the atom sitting at that site. The point is that the spins interact with each other, and we shall use a very simple form for the energy. Denoting by sigma_{Lambda_n} the whole configuration, all the sigma_i for i in Lambda_n, the energy of the configuration is minus the sum over all the nearest-neighbor pair interactions: a site i interacts only with its neighbors — in d = 2, an interior site sees its four neighbors — and each such pair contributes sigma_i sigma_j.
So the idea is that, with this energy, all the spins want to align in order to reach the lowest possible energy: the minimizers are all spins equal to plus 1, or all spins equal to minus 1. If you now want a measure describing the statistics of such configurations, you define the Gibbs measure, which depends on the domain Lambda_n: the probability of observing a whole configuration sigma_{Lambda_n} is given by the exponential of minus beta times the energy of the configuration written over there. There are a few features to add. One is the parameter beta, which allows you to tune the temperature — beta is the inverse of the temperature; I shall come back to that later. And since you want a probability measure describing your system, you have to normalize, so there is a normalizing factor Z_{beta, Lambda_n}: the sum of the Boltzmann weights over all configurations, so that the Gibbs measure is a probability measure. This depends on the parameter beta, and this is the Gibbs measure. [Question] What about sites at the boundary of the grid? At the boundary a site may have fewer neighbors; one can take periodic boundary conditions, which is even easier, but we won't go too much into details: I take the interaction over all pairs i, j in Lambda_n which are nearest neighbors. Thank you. Now we want to add one more feature, a magnetic field: at each site I add a term minus h sigma_i to the energy. If h is 0, it is the case we had before; if h is positive, the lowest energy is attained when all the spins point in the plus direction, because I have the collective interaction of the spins together, plus an extra external parameter, the magnetic field applied to my system. So the partition function now depends also on the magnetic field — and maybe I should have indexed the Hamiltonian with the magnetic field as well. The point is that the Gibbs theory essentially allows us to describe completely the statistics of the system. Maybe it's hard to study; maybe it's very complicated to get the critical exponents or precise information on the measure; but at least I know how to write the measure: it depends on the energy and on a few parameters which can be tuned, the temperature and the magnetic field h. One easy way to extract information is to define the notion of free energy. What is the free energy? It is simply a rescaling of the normalization constant: over there I have something of size exponential in n^d, the number of sites in my system, and the free energy is the function depending on beta and on h obtained after averaging — f(beta, h) is 1 over n^d, the number of sites, times the log of Z, which depends on beta, on h, and on Lambda_n. This is the quantity which is very much used in physics in order to describe the system.
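To make these objects concrete, here is a brute-force sketch — my own illustration, feasible only for tiny grids — with periodic boundaries and the Boltzmann weight written as exp(beta times the sum of neighboring products, plus h times the total spin), i.e. with the field h kept as a separate parameter, as the speaker does on the board a bit later:

```python
import itertools, math
import numpy as np

def free_energy(n, beta, h):
    """Finite-volume free energy f_n = (1/n^2) log Z on an n x n periodic grid,
    with Boltzmann weight exp(beta * sum_<ij> s_i s_j + h * sum_i s_i)."""
    logZ = -math.inf
    for config in itertools.product([-1, 1], repeat=n * n):
        s = np.array(config).reshape(n, n)
        # each vertical and horizontal bond is counted exactly once via np.roll
        pair = np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1))
        logZ = np.logaddexp(logZ, beta * pair + h * s.sum())
    return logZ / n**2

print(free_energy(3, beta=0.4, h=0.0))   # 2^9 = 512 configurations: instant
```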
What does it look like? If you plot the free energy as a function of the magnetic field h, you can detect phase transitions: the graph of f(beta, h) against h is either smooth, or it has a singularity at h = 0, where it is not differentiable. What is the connection between the two pictures? The smooth case happens when beta is less than some critical value, and the singular one is the typical case when beta is larger than the critical value. So through this free energy you can analyze the system and detect certain things. How can we interpret that? Well, it's not difficult. The important feature — striking, compared with the large deviation functions I've been talking about — is that this function can fail to be differentiable at 0: it can have a cusp. So let's try to differentiate the free energy and see what happens. The free energy is a limit, and I'm not totally sure I know how to differentiate a limit, because at the end of the day I'm a mathematician; but for finite n I can differentiate, so let's proceed as if we were allowed to exchange derivative and limit — that should more or less describe what happens for n very large. When you take the derivative of the log of the partition function with respect to h, what pops out? You see that h multiplies the sum over i of sigma_i in the exponential, so h is conjugate to the total magnetization of the system: the derivative of 1 over n^d times log Z is the expectation, with respect to the Gibbs measure, of 1 over n^d times the sum over i of sigma_i. Now I'm running out of board; I will come back later to this remark. And there's a beta hanging around — ah, I shouldn't have put it there; it's easier to put the beta only in the energy, actually. Sorry. So let's remove the beta over there and keep it only in the energy; it's neater, because I want to take derivatives with respect to the different parameters and not entangle them all. Thank you. [Question] Is beta also in front of the magnetic field term? No — it was, in my earlier notation, but that was not such a good idea; I'd rather keep the different parameters independent. Otherwise it's all the same thing, just messier on the board. [Question] You defined the free energy as a limit, but does the limit exist? Yes — I wanted to come back to that later, but indeed one can show that the limit exists, and also that the derivatives make sense, at least one-sided, from the left and from the right. Actually, there are theorems, known as the Lee-Yang theorems, which ensure that as soon as h is different from 0, the free energy is analytic in h — a very strong statement. So now, what does it mean? If this derivative has a singularity, the free energy is recording some kind of jump, at least in the limit, in the magnetization, or in the expectation of the magnetization. So if I define m_n as the average of the sigma_i — the mean magnetization of my system — I can relate the previous graphs to the following ones. The expectation of m_n, as a function of h, lies between minus 1 and 1, and there are two types of graphs: either it passes smoothly through 0, or, if there is a cusp in the free energy, one observes a jump at h = 0.
When n is finite there is no jump, but the slope there becomes steeper and steeper as n grows; in the limit, that is the graph of the magnetization. So that is more or less how things are usually encoded in physics: through the free energy, because the free energy contains a lot of information about the system, and taking derivatives of the free energy gives you a lot of its description — the magnetization, the correlations, and other interesting parameters. On the other hand, look at what we did before for large deviations: we were interested in averages 1 over n of the X_i, which look very much like this magnetization, except that now there is a huge interaction in the system, the nearest-neighbor interaction between the spins. And what we did there, in order to understand the fluctuations of these random variables, was to introduce the function phi of lambda, the log of the expectation of the exponential of lambda X_1. So let's ask what the natural analogue of phi of lambda is here. The natural analogue is the expectation of the exponential of lambda times the total magnetization — not m_n itself, which is normalized by n^d, but the sum over i in Lambda_n of sigma_i. I want to compute this expectation under the Gibbs measure with h equal to 0, in the domain Lambda_n — no magnetic field — and it turns out that it can be reformulated purely in terms of partition functions: at the bottom you have the normalization Z with h = 0, and when you carry out the sum in the numerator, it is nothing but the partition function with h now equal to lambda:

$$\Big\langle e^{\lambda \sum_{i\in\Lambda_n}\sigma_i}\Big\rangle_{\beta,\,h=0} \;=\; \frac{1}{Z_{\beta,\,h=0,\Lambda_n}}\sum_{\sigma_{\Lambda_n}} e^{-\beta H_0(\sigma_{\Lambda_n})}\, e^{\lambda\sum_i \sigma_i} \;=\; \frac{Z_{\beta,\,h=\lambda,\Lambda_n}}{Z_{\beta,\,h=0,\Lambda_n}},$$

with the convention we just fixed, the field entering the Boltzmann weight as e to the h times the sum of the spins. So computing the Laplace transform simply means looking at the partition function with a different h: the magnetic field is exactly the right parameter for computing the Laplace transform. Once you see this analogy — and if you remember the remark from the beginning, that the Laplace transform is intimately related to large deviations — then, passing to the limit in this computation, there should be a connection between the large deviations for the Gibbs measure and the free energy. So we are simply interested in taking the limit of this as n goes to infinity. Time is running, so I will just give you the statement obtained by redoing exactly the steps written before, with X_i replaced by sigma_i and phi of lambda replaced by the free energy. I look at probabilities under the Gibbs measure with h = 0, and suppose for simplicity that the mean magnetization is 0. Then the probability of observing a deviation x of the magnetization is of the order of the exponential of minus n^d — the number of variables, since now i ranges over Lambda_n — times a function I_beta of x. And this I_beta of x is related to the free energy by the same formula as before, by a Legendre transform: the analogue of the earlier relation is that I_beta of x is the supremum over all h in R of h x minus — not phi of lambda anymore but, as we saw, its natural replacement — the free energy at h.
There is, in fact, a small subtraction of the free energy at 0 — a normalization factor. So let me summarize: the large deviations of the magnetization in the Ising model at h = 0 — a very rare event — can be computed, much as in the case of independent variables, in terms of a function, and this function is nothing but the Legendre transform of something we know is important in physics, the free energy. Let me now draw this function in the two cases. When beta is less than beta critical, and there is no phase transition, the function looks more or less like the smooth rate functions from before. When beta is larger than beta critical, the cusp becomes a flat piece in the function I_beta: it is flat, equal to 0, on a whole segment, and this flat piece is the Legendre image of the cusp. So through the large deviation function I_beta of x, I can again observe the phase transition, in a possibly indirect way: for example, the large deviation function has flat pieces. And the point is — this was the earlier remark — for convex functions you can always go from phi to phi star and back; therefore, if you know the large deviations, you can recover the free energy. This example is very well known; just a few names of people who have worked a lot on it. Lanford was certainly the one who made the deepest, or the earliest, connection between large deviations and statistical mechanics, and the founding fathers of rigorous statistical mechanics, such as Ruelle and Dobrushin, also worked on this type of connection. I would also like to mention Pfister, who has very nice lecture notes on all these aspects. OK, so that was just a motivation: why large deviations can be of importance in physics, and how they are related to well-known quantities in a setting where there is already a very nice theory. And if you didn't like this part on the Ising model, all the rest is independent of it; it was just a motivation. [Question] Can you say that I is related to the entropy? Yes, there is indeed a relation: there are variational formulas where this I is expressed by minimizing a combination of energy and entropy. Essentially, anything you find in the free energy you can find in I, and vice versa.
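And to see the Legendre relation at work numerically: reusing the brute-force free_energy sketch from before, one can caricature the rate function I_beta — only a caricature, since n = 3 is very far from the thermodynamic limit:

```python
import numpy as np

def rate_function(x, n=3, beta=0.4, hs=np.linspace(-3, 3, 121)):
    """I_beta(x) ~ sup_h { h x - [f(beta, h) - f(beta, 0)] }, numerically,
    using the brute-force free_energy defined in the earlier sketch."""
    f0 = free_energy(n, beta, 0.0)
    return max(h * x - (free_energy(n, beta, h) - f0) for h in hs)

print(rate_function(0.4))   # cost of holding magnetization 0.4 at this beta
```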
So now the question is: what about non-equilibrium, which was the main focus of the talk? That is the third part, and the longest. What do we mean by non-equilibrium? It's complicated to define — essentially anything which is not at equilibrium. But concretely, one example: imagine a source of heat at a certain temperature, and a kind of metal rod connecting it to another source of heat at a different temperature. In this rod you observe a flux of energy flowing, say, from the reservoir at the highest temperature to the one at the lowest. So you have a flux through your system — a heat flux, or, if the reservoirs are particle reservoirs at different densities rather than heat reservoirs, a particle flux. What we are interested in is describing the statistical properties of this rod, say in the stationary regime. Maybe at the beginning it's complicated; then you connect the two reservoirs by the rod for a very long time, let the system run, and try to say something about it. And it turns out that there is no nice Gibbs theory for that. You cannot write the measure in terms of a simple energy and say: I'm going to study this measure. The way the state is defined is much more indirect, through a dynamics: thinking in terms of particles — which is what we're going to focus on — you have many particles over there which interact and move along the rod and end up, maybe, in the other reservoir with the lower concentration. All of this is prescribed only through a dynamics, not at all in terms of a measure. And the question is: can we say something? If the system is reasonably well defined, it will reach a stationary state, or steady state; can one describe this steady state and say something about its statistical properties? Since there is no analogue of the Gibbs theory, and since there are new features like transport in these non-equilibrium systems, we don't have many tools, or even a theoretical framework, to describe them. One of the goals is to say: large deviations worked fine in equilibrium, and brought quite a lot of understanding of equilibrium systems — so let's try to look at the large deviations for these dynamics. That's the goal. Of course, non-equilibrium is everywhere: the source of heat can be the sun, we receive energy from it and perform some action — it's very complicated. Here we are only going to talk about simple models, which are just supposed to build some understanding of non-equilibrium systems. There is no general theory, nothing complete, which would allow us to describe the physics of non-equilibrium systems in general — unlike equilibrium, where the Gibbs theory at least tells you what to do, or gives you a starting point. It doesn't mean that once you have written the Gibbs measure everything is settled, but at least it's a starting point. So maybe we'll have a break, but before that, let me give you a few examples of the kind of models we're going to look at. So, 3.1: microscopic dynamics. The systems are now defined in terms of stochastic microscopic dynamics, and we're going to make them very simple. The metal rod is just a line with n sites, and at each site I may have a particle or no particle. These particles move randomly. There are many ways of prescribing the microscopic dynamics; I'm going to focus on a very simple type today, the kind of dynamics which will be presented later in the afternoon. So how do we do it? We have n sites. Choose one site at random — say this one. If there is no particle there, do nothing. If there is a particle, let it jump with probability p to the right and with probability q to the left. Then there is some interaction between the particles: if the chosen particle wants to jump to the right and there is already a particle there, an exclusion rule says you can't do that, and the jump is discarded. That's the dynamics. It's called the asymmetric simple exclusion process.
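In formulas — writing $\eta_i \in \{0,1\}$ for the occupation of site $i$, notation the speaker only introduces a little later — these are the standard ASEP bulk rates: a particle at site $i$ jumps

$$i \to i+1 \ \text{ at rate } \ p\,\eta_i\,(1-\eta_{i+1}), \qquad i \to i-1 \ \text{ at rate } \ q\,\eta_i\,(1-\eta_{i-1}),$$

the factors $(1-\eta_{i\pm 1})$ implementing the exclusion rule.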
So far I've told you how my particles move in the bulk; now what do we do at the ends? Let's start with something very simple, adding two boundary sites. As I said before, I choose a site at random and maybe perform a move. If I choose the leftmost site, I add a particle there if there is none; if there is one, I do nothing. And if I choose the rightmost site and there happens to be a particle there, I absorb it and remove it from the system. With these very simple rules, we've built a dynamics with two reservoirs: one with a high concentration of particles, which pushes particles into the system, and another which removes them. In probability, we don't much like discrete time — I said that at each unit of time you perform something, but for various reasons it's a little better to use exponential times. So actions are now performed at random times, chosen so that on average one site is updated per unit time; if the chosen site is empty, it still counts as updated, in the sense that it has been chosen. That's more or less the kind of rule we're going to choose. And let me, before the break, make one or two remarks. There is a variant of the model with p equal to 1 and q equal to 0 — we shall hear quite a lot about it this afternoon — called the totally asymmetric simple exclusion process. And one nice case is p equal to q equal to one half, in which the particles perform random walks; it's called the symmetric simple exclusion process. Now, in all these systems, suppose for a moment that I forget about the reservoirs entirely, and do something even simpler: make the system periodic, so that a particle at the last site can jump around to the first — no accumulation. In the periodic case, the stationary, or invariant, measure has a very simple structure: it is a product measure. By this I mean: at each site i from 1 to n, independently, you put a particle with probability rho. Here I should introduce one extra piece of notation which I'm going to use later: when there is a particle at site i, I write eta_i = 1, and eta_i = 0 if there is no particle. So the stationary state is described by the collection of variables eta_i, 1 ≤ i ≤ n, and the stationary measure sets eta_i = 1 with probability rho and eta_i = 0 with probability 1 minus rho — a Bernoulli measure, totally independent from one site to the other, with rho in [0, 1].
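As a concrete illustration, here is a minimal random-sequential simulation of the open system just described — a sketch: the update scheme and boundary conventions are one reasonable reading of the rules above, with the left reservoir at density 1, the right at density 0, and p + q = 1:

```python
import numpy as np

def simulate(n=50, p=0.5, steps=200_000, seed=1):
    """Open exclusion process: bulk jumps right w.p. p, left w.p. 1 - p;
    the left boundary injects particles, the right boundary absorbs them."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(n, dtype=int)          # eta[i] = 1 if site i is occupied
    mid, current = n // 2, 0              # net jumps across the middle bond
    for _ in range(steps):
        i = int(rng.integers(-1, n + 1))  # -1 and n play the role of reservoirs
        if i == -1:
            eta[0] = 1                    # left reservoir: fill the first site
        elif i == n:
            eta[-1] = 0                   # right reservoir: empty the last site
        elif eta[i]:
            if rng.random() < p:          # attempt a jump to the right
                if i + 1 < n and not eta[i + 1]:
                    eta[i], eta[i + 1] = 0, 1
                    current += (i == mid)
            else:                         # attempt a jump to the left
                if i - 1 >= 0 and not eta[i - 1]:
                    eta[i], eta[i - 1] = 0, 1
                    current -= (i - 1 == mid)
    return eta, current

eta, current = simulate()
print("occupation profile:", eta)
print("net current across the middle bond:", current)
```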
And in the periodic case with p = q = 1/2, the symmetric simple exclusion process, this is a very simple Markov chain, and the dynamics is reversible: there is no flux, no reservoirs, the system transports as easily in one direction as in the other, everything is equivalent, and the stationary measure is product — you can't find anything simpler. But when you just turn on the reservoirs a little, when you add a flux of particles to your system, all of that breaks down. I have simply no idea how to write the stationary measure; it's relatively complicated to describe. And again, I insist: there is no simple closed form like a Gibbs measure written here; the state is defined dynamically, there is no reversibility, and we would like to say something about these systems — about their stationary states, about their physical properties. So let's have, if you agree, a five-minute break, and then we turn to the next part of the talk, where I will try to give you more details about this type of model. Thank you. OK, so we'll start the second part of the talk. Now I want to describe the evolution of this system — I won't yet say much about the stationary state — I want to see how the thing evolves and what we can compute. Over there, there was a parameter rho: the stationary measure, in the periodic case, is product, and rho is just the density — with probability rho I have a particle, with probability 1 minus rho I have none. So in the stationary measure the system is evenly distributed, totally flat, at density rho. One may wonder what happens — still forgetting the reservoirs for a moment — if I start instead from a distribution with a profile: more particles here, fewer particles there. What does that mean? Macroscopically, I have a system of length 1, continuous, described by a density rho_0 of x. I divide it into tiny, tiny sites, so the mesh of my system is 1 over n, and to each macroscopic point x in [0, 1] I associate the discrete occupation variable at site x times n. At time 0, the expectation of this occupation is given by the initial density rho_0 of x — I think that's correct. In the following it will be a little easier to write expectations with brackets. Now, we have prescribed only an evolution rule, so I would like to know what happens later: I want the expectation, at time t, of having a particle at site i. If I take the time derivative, the rules I prescribed tell me that, with a certain intensity, this can increase because a particle at site i minus 1 tries to jump to site i with probability p: I get a contribution p times the probability that there is a particle at site i minus 1 and no particle at site i at time t, in which case the jump is allowed and the occupation increases. You can convince yourself that the rules I explained are exactly this relation — one can do it mathematically, but essentially you have exponential times at which sites are chosen at random, and then the jump is performed with probability p.
That is, the occupation at i increases if this site i minus 1 is full — eta equal to 1 — and this site i is empty — eta equal to 0. There is also a jump in the other direction, from site i plus 1, with probability q: a contribution q times eta of i plus 1 at time t, times 1 minus eta of i at time t. And vice versa: site i can empty if the particle at i jumps to one side or the other. [Question] Can two sites be updated at the same time? The updates happen at independent random exponential times, so no two jumps occur at the same instant — that was slightly hidden in the way I stated the rules — and we are computing the derivative over a very short time. So there are also loss terms: minus q times eta of i times 1 minus eta of i minus 1, the particle at i jumping left, and minus p times eta of i times 1 minus eta of i plus 1, the particle at i jumping right. In this way it is relatively easy to write the evolution of the expectation. And the question is: what do you do with that? Nothing, basically — because once you've written the evolution of the density, you need to know the correlation between these two neighboring sites, and if you want the evolution of the pair correlation, you need triples, and so on and so forth; it's a big mess. I won't dwell on it, because we shall hear much more this afternoon about what to do for asymmetric models, where it is much more complicated. But it turns out there is a small miracle when p equals q equals one half. The miracle is the following: everything comes with the same factor one half in front, and the correlation terms — minus eta_{i-1} eta_i from the gain term and plus eta_i eta_{i-1} from the loss term, and likewise at i plus 1 — are identical products entering with opposite signs, so the correlations cancel. In this very special case, the time derivative of the expectation of eta_i is just one half of: the expectation of eta_{i+1} at time t, plus the expectation of eta_{i-1} at time t, minus twice the expectation of eta_i. And that sounds very good, because I have closed equations: the expected occupation at a site is expressed in terms of expected occupations alone — at different sites, but only expectations. That's the beauty of the combinatorics of this system, and that's why we're speaking only about it today: one can treat nonlinearities, but it's much, much harder.
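Written out — cleaned-up notation for the board computation — the bookkeeping just described reads

$$\frac{d}{dt}\langle\eta_i(t)\rangle = p\,\big\langle\eta_{i-1}(1-\eta_i)\big\rangle + q\,\big\langle\eta_{i+1}(1-\eta_i)\big\rangle - p\,\big\langle\eta_i(1-\eta_{i+1})\big\rangle - q\,\big\langle\eta_i(1-\eta_{i-1})\big\rangle,$$

and for $p = q = \tfrac12$ the quadratic terms cancel pairwise, leaving the closed linear equation

$$\frac{d}{dt}\langle\eta_i(t)\rangle = \tfrac12\Big(\langle\eta_{i+1}(t)\rangle + \langle\eta_{i-1}(t)\rangle - 2\,\langle\eta_i(t)\rangle\Big),$$

a discrete Laplacian.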
Now, one thing: if you start from a density profile of this form, and you look at the occupation around a macroscopic point x — around site xn — let me rewrite this relation around the point xn: eta at xn plus 1, plus eta at xn minus 1, minus twice eta at xn. [Question] This is discrete in space? Yes, the space is discretized — and I just want to show that this combination is nothing but a Laplacian, a second derivative. For those who may never have done this before, I'll do it once, and I do it at time 0, because at time 0 we have something very neat: the expectation is given by the smooth function rho_0 written up there. So the combination is something like rho_0 of x plus 1 over n, plus rho_0 of x minus 1 over n, minus 2 rho_0 of x, and if you Taylor expand, what you see is the second derivative of rho_0 with respect to x. Of course, I've forgotten something: in my Taylor expansion there is a factor 1 over n squared, just because I'm looking at small shifts of order 1 over n. At time 0 you can do this honestly, and it turns out that the evolution follows exactly the same pattern: what we expect is that the time derivative looks like a second derivative of something in space, with a prefactor 1 over n squared in front — that's what we are looking for. And if you believe this equation — which can be solved exactly, so it's easier — what you can show is that the expectation of eta at site xn, at the suitably rescaled time, is very close to a density rho of x and t, with an error going to 0 as n goes to infinity, and this density obeys the heat equation — you just see the second derivative. There is one thing I have to tune, which is the time scale; the point is that if you choose the density which solves the heat equation for x in [0, 1] and positive times, equal at time 0 to the rho_0 we started from, the microscopic system remains very close to this density. So two things need adjusting, and one is the 1 over n squared in front of the second derivative: if I look at a microscopic system with tons of particles over a single time step, nothing happens — whether one particle moves or n particles, there is no macroscopic transport. I need to look at larger time scales: space has been rescaled by the factor n, and time must be rescaled by n squared, precisely to compensate that extra factor 1 over n squared. Now it's almost complete; I just have to add one more feature. I told you what happens in the bulk, but not at the reservoirs. The reservoirs will be prescribed as imposing a fixed density at the boundary: a density rho_a, fixed, not moving in time, at one end — a kind of Dirichlet boundary condition — and rho_b at the other end. In my first example this density was simply rho_a equal to 1: as soon as I can, I put a particle in on this side, and as soon as I can, I remove one on the other side — that was that type of boundary condition. But I can have reservoirs which are a little more flexible and impose any density I want. So now I have the heat equation, supplemented — I don't always write it — with boundary conditions: rho_a on one side, rho_b on the other; these are my two reservoirs. And what's going on is that these microscopic dynamics, after the appropriate rescaling, are described by the heat equation — in the symmetric case.
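A small numerical sketch of this statement — discretization choices are mine; the reservoirs are encoded as ghost sites held at fixed densities:

```python
import numpy as np

def evolve(rho, rho_a, rho_b, steps, dt=0.1):
    """Integrate d<eta_i>/dt = (1/2)(<eta_{i+1}> + <eta_{i-1}> - 2 <eta_i>)
    with ghost sites pinned at the reservoir densities rho_a and rho_b."""
    rho = rho.copy()
    for _ in range(steps):
        full = np.concatenate(([rho_a], rho, [rho_b]))
        rho += 0.5 * dt * (full[2:] + full[:-2] - 2.0 * full[1:-1])
    return rho

n = 100
rho = np.full(n, 0.5)                   # flat initial profile at density 1/2
# macroscopic time t corresponds to about t * n^2 microscopic time units
rho = evolve(rho, rho_a=1.0, rho_b=0.0, steps=20 * n * n)
print(rho[::10].round(2))               # relaxes to the linear profile
```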
[Question] The expectation value is taken with respect to which measure? OK, there I didn't say much: so far I have only prescribed the mean. You can think that initially — since the invariant measure was product — you take, instead of a fixed probability rho of being occupied, a product measure which varies in space: at site i you choose the density rho_0 of i over n. So locally it is product, with a slowly varying density. But it's a very good question, because what you're asking is exactly what we are after. At time 0 it is product — what is it at later times? I have no idea, and that's exactly what we're looking for. We can prescribe something initially; we have been able to describe globally what the expectation should be; can we say more about the correlations, can we try to get more statistical information on the measure? That is the whole aim of these kinds of problems in non-equilibrium statistical mechanics. OK, one small consequence of that, again only on average: if I have a reservoir at density rho_a here and a reservoir at density rho_b there, and I look at the stationary solution of the heat equation — what remains after a very, very long time — I expect the system to settle on the linear profile interpolating between rho_a and rho_b. I will call it rho bar of x. And, exactly as in the question, there are fluctuations around it, a statistics, which we'd like to say more about. [Question] This straight line means a constant flux? In this case, exactly: we have a constant flux — I'm going to define the flux properly later. It is a stationary state with a constant flow from one side to the other: even though we have a symmetric simple exclusion in the bulk, the boundaries are driving the system out of equilibrium. By the way, this kind of behavior is related in physics to Fick's law — you have a constant flux, proportional to the difference of the potential between the two reservoirs — or to Fourier's law when one is talking about heat conduction. For those who are interested in understanding this kind of limit better, you may want to read the book by Kipnis and Landim; it's a very good book on the hydrodynamic limit. And for those more interested in the physical aspects, there is also a very nice, somewhat older book by Herbert Spohn — I forget the title, but don't hesitate to ask me. Both of these books describe the important features of these systems. So now, the question: there is a flux in this system, particles constantly flowing from one reservoir to the other. This is 3.2, let's say: the current through the system. How do we define the current? The simplest way is to sit in the middle, on one edge, say between i and i plus 1, and define Q_t of the edge (i, i+1) as the number of jumps from i to i plus 1 during the time interval [0, t], minus the number of jumps from i plus 1 to i. Essentially, when a particle goes from here to there, that counts plus 1; it might move back and cross the other way, counting minus 1. Ultimately I am just recording the net number of particles transported — created on one side and destroyed on the other. The counterpart of the linear density profile is quantified as follows: you compute this net current and take the limit of Q_t over t as the time goes to infinity.
And what you can show, for the symmetric simple exclusion process, is that it is proportional to rho_a minus rho_b, divided by n. You may have some small corrections in n, but at leading order it is proportional to the density gradient — the density difference over the length of the bar. So again, that's Fick's law, or Fourier's law; that is usually what one refers to. Of course, if I look at the system at the macroscopic level, the current, or the flux, is definitely related to the density. Let me draw that here: macroscopically I have a certain density, and if I look between x and x plus dx, a certain number of particles flow into my interval and some flow out. The relation is this: before, the microscopic density eta_i was related to a macroscopic density rho; for the current there is the analogue as well — a macroscopic field q of x and t, the instantaneous local current, which is locally the number of particles flowing this way minus the number of particles flowing that way. And the conservation law says that the time derivative of the density is minus the space derivative of the current: that derivative is what tells me how the density moves around the point x. This is a general law; the particular microscopic system, symmetric simple exclusion, plays no role in it — it is always satisfied. However, if I supplement it with the computation above, which is Fick's law, it tells me that the current locally should be proportional to the gradient of the density, with a factor one half here: macroscopically, I expect q of x, t to be minus one half times the derivative in x of rho. Indeed, where the derivative is negative, you have a flux of particles going in this direction. What is not trivial at all, of course, is the one half: that comes from the microscopic system, and it might be complicated to obtain. And if you use both equations, you recover the heat equation we had before — so we are on safe ground; that is how these two descriptions are connected.
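Collecting the two macroscopic relations just discussed in one display (my notation):

$$\partial_t \rho(x,t) = -\,\partial_x q(x,t), \qquad q(x,t) = -\tfrac12\,\partial_x \rho(x,t) \quad\Longrightarrow\quad \partial_t\rho = \tfrac12\,\partial_x^2\rho,$$

and in the steady state $q$ is constant in $x$: for the linear profile $\bar\rho(x) = \rho_a + (\rho_b - \rho_a)\,x$ one gets $\bar q = \tfrac12(\rho_a - \rho_b)$, the macroscopic counterpart of the microscopic current proportional to $(\rho_a - \rho_b)/n$.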
Now, all this is macroscopic; but locally, the microscopic system produces tons of fluctuations around such a density profile, and these fluctuations are what we want to record — the expectation is nice, but it doesn't tell you much about them; we want to understand this little fluctuating curve on the board. What you do is some local averaging: around x I open a small window — I won't say exactly how it scales, but it is small with respect to n in macroscopic size, while still containing a huge number of sites, so that some averaging takes place. The macroscopic density rho is then approximated by what I'll call rho hat n of x and t: the average of the occupations over a box of size, say, n to the power delta, with delta less than 1, centered around xn, divided by the size of the box. This is now a random variable fluctuating with time, which records local features of our system. And since I can do this for the number of particles, I can also do a kind of local time averaging and record the fluctuations of the current. So I have two quantities now: the local fluctuating density, and the local fluctuating current, averaged over a very short amount of time. It is more delicate to prove than the previous theorem, but the analogue of that theorem and of Fick's law is this: when the size is large, the probability that rho hat n and q hat n are close — in a sense one should make precise, but close to the macroscopic q and rho defined above — over a macroscopic time interval [0, T], with time rescaled as n squared tau and space as i over n, goes to 1 as n goes to infinity. Being close means you can find a small corridor around the macroscopic profile in which the fluctuating quantities sit. Well, that's good: we are approaching some understanding of the statistics of the model. What is remarkable, and what we would like to say, is that these two macroscopic equations are in fact almost satisfied at finite n, with rho hat n close to rho and q hat n close to q. The first equation, the conservation law, holds essentially by construction: the quantities are conserved, what goes in goes out. The second is a little more delicate: it says that locally the flux is proportional to the gradient of the density — but with a fluctuation on top, because the system is microscopic. What you find is a white noise W at x, t, with a small variance parameter: let me call the variance function sigma, evaluated at rho hat n. In the case of the symmetric simple exclusion process one can compute it: sigma of rho is rho times 1 minus rho. And that's the beginning of nonlinearity — and the end, actually, because we have absolutely no idea how to make sense of this. Part of the problem is that the white noise is very irregular in time, so these objects become distributions, and putting a distribution inside the nonlinearity is not easy. This connects with what we might hear this afternoon from Milton Jara; and I just want to mention briefly that a Fields Medal was recently awarded to Martin Hairer for work on dealing with this kind of singular equation. If I eliminate q, I get an equation of the form: the time derivative of rho hat n equals one half the Laplacian of rho hat n, minus the space derivative of the square root of sigma of rho hat n over n, times a white noise W of x, t. This kind of equation — we don't really know how to make sense of it, but we'll see that it is a great source of inspiration, and that its interpretation can be used for large deviations. Let me add a few names: in probability, the understanding of these singular equations is far from settled; I want to mention the work of Hairer, of Gubinelli, and of other people on trying to make sense of them, and Milton Jara has been working not on this equation but on related equations — we may hear about it this afternoon, maybe not, we'll see. So that's more or less the kind of thing we would like to do: not only describe the hydrodynamic limit, but record the fluctuations, in order to say something about the statistics of the system. And it turns out that the large deviations are much easier to handle than trying to justify all that.
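Since the next part inverts exactly this equation, let me record the fluctuating-hydrodynamics picture in cleaned-up notation (a formal writing of what was just described):

$$\partial_t\hat\rho_n = -\,\partial_x \hat q_n, \qquad \hat q_n = -\tfrac12\,\partial_x\hat\rho_n + \sqrt{\frac{\sigma(\hat\rho_n)}{n}}\;W(x,t), \qquad \sigma(\rho) = \rho(1-\rho),$$

with $W$ a space-time white noise; eliminating $\hat q_n$ gives the formal SPDE $\partial_t\hat\rho_n = \tfrac12\,\partial_x^2\hat\rho_n - \partial_x\big(\sqrt{\sigma(\hat\rho_n)/n}\,W\big)$.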
And this is the last part now, 3.3: the large deviations. What we saw so far is that the probability of observing the typical path — a path following the heat equation with the appropriate boundary conditions — goes to 1. [Question] It's a law of large numbers? Yes, it's a law of large numbers, except that now it's for a strongly correlated Markov chain, with the appropriate time rescaling; but indeed, it's exactly that. Beyond the law of large numbers there are many questions related to the analogue of the central limit theorem; we are going to skip those and go straight to the large deviations, which turn out to be easier to handle, because you don't have to make sense of these singular objects. Now the question is: instead of observing the typical profile — maybe I should have said that rho bar is a kind of typical evolution — I want to observe a different rho, which has nothing to do with the typical evolution, and I want the probability that the system remains, in the imprecise sense above, in a small tube around this atypical profile. Of course, you can choose essentially any different evolution rho of x, t instead of rho bar — "any" might be a little excessive — and you must also choose the corresponding q of x, t. There you don't have much room, because the pair has to satisfy the conservation relation, so essentially once you have q, you have rho. Sorry — thank you. OK, so choose such a couple, and ask: what is the probability of observing a large deviation, namely that rho hat n and q hat n stay close, over the macroscopic time interval, to this atypical couple? How can I understand it? Imagine that I strongly believe in the fluctuating equation — never mind that I don't know how to define it. In that case, the only way to create an atypical trajectory is for the white noise W to produce it: I must find a very strange realization of the noise which drives the system along this equation. The probability of observing the deviation is then the cost of observing that realization of the noise, over the time interval and over space. And the W I am writing now is not any noise: it is a Gaussian object, and it is exactly the noise producing the deviation there — this doesn't quite make sense, but it doesn't matter, because I can invert the equation. The noise needed to create the deviation must satisfy, looking at the second equation:

$$W(x,t) \;=\; \sqrt{n}\;\frac{\hat q_n + \tfrac12\,\partial_x \hat\rho_n}{\sqrt{\sigma(\hat\rho_n)}}$$

— with a square root on the sigma, sorry: it's a variance, so a square root appears there. I substitute this back into the first equation, and what can I prove? That the probability of observing an atypical trajectory close to rho is given, at the exponential level, by the exponential of
If I replace the noise by this expression, what can I prove? I can prove that the probability of observing an atypical trajectory close to rho is, at the exponential level, exponential of minus n times the integral from 0 to T and from 0 to 1, dt dx, of the square of q(x, t) plus one half the space derivative of rho, divided by 2 sigma of rho; this quotient is just the Gaussian cost of the noise. So what I did is that I strongly believed in this equation. I said the noise should help me reach this new state, and this new state is related: there you have rho, there you have q, there you have the derivative of rho. And I plugged it in without any fear, hoping that it would give me the large deviations and the decay with respect to n. Now these are not independent variables; it is a strongly correlated Markov chain over a very long time. But what you get is a functional involving both the current and the density, which are linked. And the hope, of course, is that by investigating this kind of functional, we should be able to understand some analogue of the free energy, or some properties of this non-equilibrium system, where there is no such thing as a free energy a priori.

One can completely prove this statement without resorting to fluctuating hydrodynamics as I did; that was just convenient for intuition. There is rigorous mathematics behind it, and one can really make sense of this. Again, the intuition is that the mean current is locally close to something related to the gradient, you have locally Gaussian fluctuations, and the sigma of rho depends on the model: it is the variance of the current locally. You have no special hint about what sigma of rho is; it totally depends on the model, including the one half.

What is interesting? I have been focusing on a very special dynamics, but for general microscopic dynamics, possibly more complicated than the one I described, if they are of diffusive type like symmetric simple exclusion (when the jump rates p and q differ, you have a systematic flux, so that case is excluded), you need only to compute two quantities: D(rho), the diffusion coefficient, which in our case was one half, and sigma(rho), the variance, which in our case was rho(1 - rho). Otherwise, you replace one half by D(rho) and sigma by something more general, and you have access to a wider range of models. If the model you start from is nice enough, these functions are explicit, but in general it is actually rare to have explicit functions.

Now, I think I have only two minutes, so I just want to mention certain things which can be done. Once you have a large deviation functional, you can start asking questions more relevant for physics. What kind of questions may you want to ask? We are in non-equilibrium, and a very important characteristic of a non-equilibrium model is the current. Before, the free energy was encoding the fluctuations of the total magnetization; I guess that the current will now be the relevant parameter. So maybe I can look at the large deviations of the current: what is the probability of observing a certain current different from the mean current Q bar? But no, it's a constant, so let me call it alpha. I am sitting in the middle of my system, and I will look at the current over a very long time. What we saw before is that, with high probability, it should be close to rho_a minus rho_b when time goes to infinity; in particular, in our scaling, that probability goes to 1. But maybe now, if I want to incorporate the current as a statistical parameter in my theory of non-equilibrium, I can decide: what is the probability of observing a different current, with a value alpha?
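Before looking at that, let me record the functional we have obtained, written with the general diffusion coefficient D(rho) in place of the one half of symmetric simple exclusion. This is the formal statement, with the conservation law as the constraint coupling rho and q.

```latex
% Dynamical large deviation rate functional (formal):
\mathbb{P}\big((\hat\rho^{\,n},\hat q^{\,n})\approx(\rho,q)\big)
 \;\asymp\;
 \exp\!\Big(-\,n\int_0^T\!\!\int_0^1
   \frac{\big(q(x,t)+D(\rho)\,\partial_x\rho(x,t)\big)^2}
        {2\,\sigma(\rho(x,t))}\,dx\,dt\Big),
\qquad \partial_t\rho+\partial_x q=0,
% with D(\rho)=\tfrac12 and \sigma(\rho)=\rho(1-\rho) for
% symmetric simple exclusion.
```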
And through this large deviation functional, you can prove that this probability, at the exponential level, meaning in the limit when n goes to infinity of one over n times the log of the probability, is related to this quantity. And if you want, you can even rescale with respect to time, because now it depends on time. You get that, to realize this atypical current, you have to minimize over functions rho which now depend only on space; that can be proven. And you compute, just to give you a concrete example, the integral of alpha plus one half the derivative of rho, squared, divided by 2 sigma of rho of x. So if you want to observe a deviation of the current, you can reduce this pretty complicated variational principle, which depends on time and space, to a variational principle which depends only on the profile; this needs to be proven, but it can be done. And behind that, the alpha is there.

Now, the idea is the following, and I will just conclude with that. You have a system; let's say that the boundaries are such that the density is close to 1. Both boundary densities are equal now: rho_a equals rho_b, both close to 1. I want to force a huge current, close to alpha, through the system. And when I have a lot of particles, it's hard to propagate a current, because if I try to jump, I will find another particle in the way. What this variational principle tells you is that there is now a coupling between density and flux, and the coupling is such that the optimal way of producing a current alpha is to modify the density, to bend it toward a density close to one half, actually, one half being the maximum of rho(1 - rho); a small numerical sketch of this bending is given at the end of this part. Bending the density is like having a small pipe and trying to enlarge it: you modify the density to go to the region around one half, where it is one particle, no particle, one particle, no particle, and therefore there are a lot of opportunities to move.

So what is hidden behind these large deviation principles are potentially some relations of physical interest between the different parameters which pop out. We had no prescribed theory; we actually had no idea originally what to do, and the large deviation principle provides a guide and a framework to explore that. I won't go further; I just want to quote a few names around this large deviation principle. Originally, one can find these large deviations in the work by Kipnis, Olla, and Varadhan, and there are tons of works on large deviations. The idea of looking at the current and its large deviations has been exploited extensively by Bertini, De Sole, Gabrielli, Jona-Lasinio, and Landim. And we will hear a lot this afternoon from Kirone about large deviations for much more complicated systems, which are non-diffusive: now p is different from q, and there is a different scaling, different perspectives. I should also mention the name of Bernard Derrida, who has been working a lot on all these aspects; this is also related to a joint work I did with Bernard some time ago.

Now, just to conclude, I hope I convinced you that within large deviation theory there are some potential applications to physics. There are many open questions related to this type of model.
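As promised, here is a small numerical sketch of the reduced variational principle for the current, for symmetric simple exclusion. Everything in it, the grid size, the optimizer, and the parameter values, is our own illustration, not part of the talk: we discretize a density profile with equal boundary values close to 1 and minimize the action; for a large enough target current alpha, the optimizer bends the profile toward one half.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical numerical check of the reduced variational principle
#   Phi(alpha) = inf_rho int_0^1 (alpha + (1/2) rho'(x))^2 / (2 sigma(rho(x))) dx
# for symmetric simple exclusion, sigma(rho) = rho (1 - rho), with fixed
# boundary densities. All choices below are ours, for illustration only.

M = 60                        # number of interior grid points
dx = 1.0 / (M + 1)
rho_a = rho_b = 0.9           # equal boundary densities, close to 1
alpha = 2.0                   # large atypical current we want to produce

def action(rho_int):
    # Discretized action: rho' evaluated on bonds, sigma at bond midpoints.
    rho = np.concatenate(([rho_a], rho_int, [rho_b]))
    drho = np.diff(rho) / dx
    rho_mid = 0.5 * (rho[:-1] + rho[1:])
    sigma = rho_mid * (1.0 - rho_mid)
    return float(np.sum((alpha + 0.5 * drho) ** 2 / (2.0 * sigma)) * dx)

# Start from the flat (typical) profile and minimize the action.
res = minimize(action, np.full(M, rho_a), method="L-BFGS-B",
               bounds=[(1e-3, 1.0 - 1e-3)] * M)

print("Phi(alpha), approximately:", res.fun)
print("flat-profile action      :", action(np.full(M, rho_a)))
print("minimum of optimal profile:", res.x.min())  # dips toward 1/2
```

The printout should show the optimal action well below the flat-profile one, with the profile dipping toward one half in the bulk, which is the "enlarging the pipe" mechanism described above.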
There are also other systems, for example glassy systems, where one believes that the glass structure is something purely dynamical. So you want to explore the thermodynamic aspects of glassy systems by trying to understand, dynamically, the large deviations; this has been investigated. There have also been attempts to use this in other non-equilibrium contexts, like granular materials. And there is a whole bunch of different non-equilibrium systems which could be investigated through large deviations; it is sometimes not the solution to everything, but the hope is to discover some general physical features. For example, and that is really the conclusion, I promise: these large deviations of the current satisfy a very general relation, known as the Gallavotti-Cohen symmetry, which is totally independent of the microscopic system (it is written out at the end of this transcript). It says that this large deviation function of the current has very specific properties, and it has been applied in different contexts. I guess Kirone applied it also to molecular motors, so more biological issues; you may ask Kirone Mallick at the break. Thank you very much. Any questions?

Question from the audience: what would be the log-Laplace transform here, the convex dual of the function? Well, in the very last example, in this one, it is what you think: it is the expectation of the exponential of lambda times the current. You tilt the current, very much like what we did in the independent case; lambda is just a real parameter. And so it would be the conjugate parameter to alpha: yes, you have to tune the lambda to reach the alpha. But when it comes to computing it, that does not help. The point is that what you write there, you have to unfold. Somehow, looking only at the current is not the right idea, because what you see is that it is strongly coupled to the density, and both are playing together. At first sight you try to compute that, but then you soon realize that it won't be as easy as in an independent system; you have to bring the density into the game, and so on. That is why, in this respect, the large deviations are easier to handle, I think, than just looking blindly at this type of object. But the two are, of course, equivalent. Thank you.
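For reference, the symmetry mentioned in the conclusion takes the following standard form; the constant E, the thermodynamic force driving the current, is model-dependent, and the notation here is ours rather than the speaker's.

```latex
% Gallavotti-Cohen symmetry for the current large deviation
% function \Phi and its Legendre dual \phi (standard form;
% E is the model-dependent affinity driving the current):
\Phi(\alpha) - \Phi(-\alpha) = -\,E\,\alpha,
\qquad\Longleftrightarrow\qquad
\phi(\lambda) = \phi(-\lambda - E).
```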