Hello. Good afternoon. Thank you for staying. I would like to thank Nicola and Stefan for their invitation; it's a big honor to be here. French, English? Anybody who doesn't speak French? Okay, so English. Or Hindi maybe. So I'm going to talk about some of the concepts that we learned about this morning from Thierry, and how to use them in physics. I'm a physicist; that's why I'm using PDFs. And to make an asymptotic, smooth matching with what Thierry said this morning, I will scan very fast through the first slide, which in fact just says the same things again, in less detail than we heard this morning. We know that when we do equilibrium statistical mechanics with a reservoir at a given temperature T, or beta equal to 1 over T, we have a prescription to study such systems. The prescription is given by the canonical Gibbs-Boltzmann law, the Maxwell law, call it whatever you like, which tells you that underlying a system in statistical mechanics there is a measure, the Gibbs measure, which has the temperature as an important parameter, and this measure is normalized by the partition function. And the partition function is in fact nothing but an avatar of the thermodynamic free energy of the system, up to a log and a kT. Once we know the free energy, we can study the phase diagram and understand how things evolve when you change the temperature or some other parameter, such as the pressure or the particle density. But of course statistical mechanics is much stronger than thermodynamics because it predicts fluctuations. You have not only average values but variances, fluctuations, a real probability distribution. Indeed, Brownian motion is the paradigm of pure fluctuations, and Brownian motion goes beyond classical thermodynamics: you cannot predict it using the first and second principles, but of course you can understand it very well using equilibrium statistical mechanics.
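The canonical prescription recalled above can be sketched in a few lines of code. This is a toy illustration of my own (the three-level system and its energies are invented for the example, not something from the talk): the Gibbs weights, the partition function Z, and the free energy F = -T log Z.

```python
import math

# Toy three-level system (energies invented for the example), coupled to a
# reservoir at temperature T, with k_B = 1.
energies = [0.0, 1.0, 2.0]
T = 1.0
beta = 1.0 / T

# Canonical Gibbs weights, normalized by the partition function Z.
weights = [math.exp(-beta * E) for E in energies]
Z = sum(weights)
probs = [w / Z for w in weights]

# The partition function is an avatar of the free energy: F = -T log Z.
F = -T * math.log(Z)
print(probs, F)
```

Lower-energy states get larger Gibbs weight, and all the thermodynamics follows from F once Z is known.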
This is what Einstein did in 1905. Now, if we go out of equilibrium and consider the simplest picture, which Thierry already drew this morning — I think everybody in the community draws the same picture; the big difference is whether you put circles or squares — you take two reservoirs at different temperatures, or at different chemical or electrical potentials, and you put them in contact through a metal rod. If you wait long enough, there will be a persistent stationary current going from the high temperature to the low temperature, and just to describe this very simple everyday situation, no microscopic theory is yet available. In fact we don't really know what the relevant parameters are — the analogues of P, V, T in thermodynamics. Certainly we should include the length of the rod and the two boundary temperatures, but what else? Not only do we not know the parameters, we don't know which functions we have to study: the analogues of entropy, free energy, enthalpy and so on. So we know neither which functions to write down nor which parameters to put into them. Nothing is really known about universality, which is so important in equilibrium statistical mechanics; here we don't really know how to separate universality classes. And there is no Gibbs measure, no general form for a microscopic measure that would underlie even such a simple model. If the two temperatures are equal, we know everything: there is a Gibbs measure. If you take them different, we just don't know how to write anything. And we don't know the fluctuations. The important point is that when you wait long enough, there is a macroscopic current flowing from high temperature to low temperature, and the system is out of equilibrium because this flow of current breaks time-reversal invariance.
If you take a movie of this phenomenon and project it backwards, you will be able to tell that something is wrong in the movie, because heat would be flowing from low temperature to high temperature, and you know that is not possible in real life. So let's talk a bit about non-equilibrium fluctuations; again, we heard about this this morning. Here I put the low temperature on one side and the high one on the other. We know that the density or temperature profile in the steady state, given by Fick's or Fourier's law, will on average be a straight line. But there can be fluctuations around this profile, drawn in red, and we don't know how to compute the likelihood of a non-typical profile in this framework. At thermal equilibrium, of course, when both temperatures or potentials are equal — think of a gas in a closed room at a given temperature — the typical profile is flat and the fluctuations can be computed. In fact Thierry did it this morning: the probability of seeing a given stationary density profile rho(x) takes a large deviation form, exponential of minus beta times the volume (here, the length L of the system) times some functional of the density rho(x). At equilibrium this functional is, up to an integral, very closely related to the free energy of the system. So the free energy is in fact nothing but the object that quantifies large deviations at equilibrium: free energy can be viewed as a large deviation function. But out of equilibrium, we don't know what happens. What is the probability of observing the red profile in the steady state? What is the corresponding non-equilibrium free energy, the functional F[rho(x)]? We have no theory, no principle, to compute that. And there is a similar question for fluctuations of the current.
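The equilibrium statement can be written compactly. This is schematic, in my own notation — the precise conventions and constants are my assumption, not verbatim from the talk:

```latex
% At equilibrium, the probability of observing a stationary density
% profile \rho(x) on a system of length L takes the large deviation form
P\big[\rho(\cdot)\big] \;\asymp\; e^{-\beta L\, \mathcal{F}[\rho]},
\qquad
\mathcal{F}[\rho] \;=\; \int_0^1 \Big( f\big(\rho(x)\big) - f(\bar\rho)
  - f'(\bar\rho)\,\big(\rho(x)-\bar\rho\big) \Big)\, dx .
% Here f is the equilibrium free energy density and \bar\rho the mean
% density: the free energy is precisely the large deviation rate function.
```

Out of equilibrium, it is the analogue of this functional F that is unknown.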
If we count the total number of particles that have gone from left to right in a given time — call it y(t); Thierry called it q(t) this morning, if I remember well — then y(t)/t in the long-time limit gives us the typical current that flows through the system. This typical current is given by Ohm's law, if you want: U equals R times I. But we want to understand the fluctuations around this typical current. More precisely, what is the probability that the empirical current — I measure how much charge flowed from left to right and divide by the total time — equals some value j which is not the typical current? This again takes a large deviation form, exponential of minus t times phi(j), and we want to compute this large deviation function phi. The general question — and again we saw this at the end of Thierry's lecture this morning — is: what is the probability of seeing a local current j(x,t) and a density profile rho(x,t) during a range of time between 0 and capital T, with the correct diffusive scaling? This probability takes a large deviation form with a rate function, a large deviation functional, capital I of j and rho. What people are looking for is a kind of principle to compute this functional I. There is no general principle yet, but if we had one — if we could compute this I — then by contraction, by taking marginals, we could compute the two important physical quantities, the functional F and the function phi, that I just defined. One step toward the answer to this very general question is the macroscopic fluctuation theory, which works for driven diffusive systems but is not totally general. Let me just recall the variational principle we saw: this rate function can be written as the solution of a variational problem.
The associated Euler-Lagrange equations can be written down, and the only things you need to know from the microscopic dynamics — again, this is what we learned this morning — are the conductivity sigma and the diffusion constant D. This morning D was equal to one half, and sigma was rho times (1 minus rho). But for a general lattice gas they can have much more complicated expressions; in general we don't know them in closed form and we have to compute them starting from the microscopic dynamics. Maybe this is not the name used in mathematics, but I will use the acronym MFT, macroscopic fluctuation theory, whenever I allude to this framework during the talk. It is due to many people, among them the Rome group: Bertini, De Sole, Gabrielli, Jona-Lasinio and Landim. So that's the end of the introduction and of the matching with Thierry's lecture this morning. I'm not going to go in this direction now; I'm going to go in a different direction, because if you really want to solve this problem you have to solve some nonlinear PDEs, and I just don't know how to do that. What I will tell you about instead is how to get exact solutions for some very simple discrete models — the exclusion process, typically — using integrability, a concept which comes from quantum mechanics. I'm going to spell out examples and give you formulas that were obtained for specific models for these kinds of general questions. So let's start again with the general picture, which can be modeled by the asymmetric simple exclusion process: a lattice gas, a discrete-space, continuous-time Markov process where particles hop from a given site to a neighboring site, respecting the condition that there is at most one particle per site. This is the exclusion principle: a hop onto an occupied site is forbidden. That's a hard-core interaction.
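For reference, the variational formula alluded to here can be sketched as follows. This is the standard MFT form as I understand it, in my own notation:

```latex
% Cost of a joint history (\rho(x,t), j(x,t)), constrained by the
% conservation law \partial_t \rho + \partial_x j = 0:
I[\rho, j] \;=\; \int_0^T \! dt \int_0^1 \! dx \,
  \frac{\big(\, j + D(\rho)\,\partial_x \rho \,\big)^2}{2\,\sigma(\rho)} .
% For this morning's symmetric exclusion process:
%   D(\rho) = 1/2,  \sigma(\rho) = \rho (1 - \rho).
```

The rate function is the cost of the deviation of the current j from Fick's law, weighted by the conductivity sigma; only D and sigma are needed from the microscopic model.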
The reservoirs at both ends just inject or extract particles with different rates: alpha, beta, gamma and delta. You can adjust the values of these four rates to mimic any boundary density you wish. Okay, no questions up to here? So let's look at this model more precisely. This model is a kind of minimal model for non-equilibrium statistical mechanics; it plays a role analogous to the Ising model in equilibrium stat mech. Many people are studying it, and thousands of papers have been devoted to it in the last 20 years. Let me emphasize again: the exclusion brings in an interaction, so it's a non-trivial N-body problem, not a one-body problem. The asymmetry drives a current in the bulk; together with the reservoirs, these are the two features that keep the system out of equilibrium. And the fact that it is a Markov process — genuinely stochastic — prevents you from using any Hamiltonian or trying to write down any kind of Gibbs measure. There is no Hamiltonian, no energy; how could you even write exponential of minus H? There is no H. Everything is encoded in the Markov generator, in the evolution equation for the probability distribution: that is the dynamics of the system, and all the information is encoded in the generator. Let me also recall that this very nice mathematical model was not invented by mathematicians. In the mathematical literature it appeared in Spitzer's papers around 1970. But two years earlier, in 1968 — 48 years ago — people working in biophysics invented this model to understand how ribosomes read messenger RNA and, while reading it, build up proteins from the genetic code. That is really the origin of the model. I'm not a biologist, so I'm not going to talk about that at all; I would say wrong things.
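The dynamics just described is easy to simulate. Here is a minimal sketch of my own — a random-sequential update, which only approximates the exact continuous-time dynamics, with all parameter names and default values invented for the example (rates are used as probabilities, so they are assumed to be at most 1):

```python
import random

def simulate_open_asep(L=50, alpha=0.6, beta=0.4, gamma=0.05, delta=0.05,
                       p=1.0, q=0.0, steps=100_000, seed=0):
    """Open ASEP with a random-sequential update (an approximation of the
    exact continuous-time dynamics).  alpha/gamma: injection/extraction at
    the left reservoir; beta/delta: extraction/injection at the right one;
    p/q: forward/backward hop rates in the bulk, with exclusion."""
    rng = random.Random(seed)
    tau = [0] * L                      # occupations: at most one particle per site
    for _ in range(steps):
        i = rng.randrange(-1, L)       # pick a bulk bond or a boundary at random
        r = rng.random()
        if i == -1:                    # left boundary
            if tau[0] == 0 and r < alpha:
                tau[0] = 1
            elif tau[0] == 1 and r < gamma:
                tau[0] = 0
        elif i == L - 1:               # right boundary
            if tau[-1] == 1 and r < beta:
                tau[-1] = 0
            elif tau[-1] == 0 and r < delta:
                tau[-1] = 1
        else:                          # bulk bond (i, i+1), exclusion enforced
            if tau[i] == 1 and tau[i + 1] == 0 and r < p:
                tau[i], tau[i + 1] = 0, 1
            elif tau[i] == 0 and tau[i + 1] == 1 and r < q:
                tau[i], tau[i + 1] = 1, 0
    return tau

tau = simulate_open_asep()
print(sum(tau) / len(tau))             # empirical bulk density
```

Tuning the four boundary rates moves the system around its phase diagram, exactly as described in the talk.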
But this is a true photo of these ribosomes, proceeding along the RNA strand and building up proteins. So this has something to do with reality; it's not purely abstract. Anyway, it's a minimal model, so it has appeared in many different contexts: reptation of polymers, hopping conductivity, driven diffusive systems, the Kardar-Parisi-Zhang equation. And it is still very much used. For example — a nice example people like to quote, although I know nothing about it myself — the traffic in Duisburg is nowadays managed in real time using an avatar of the exclusion process. So it's useful. One important connection between the exclusion process and a very famous stochastic differential equation is that the exclusion process is a discrete version of the Kardar-Parisi-Zhang (KPZ) equation. The KPZ equation describes how a height h evolves in space and time through diffusion, aggregation or evaporation of matter, and random noise; it describes the evolution of a random interface, and I will give you some examples by the end of the talk. It is a continuous space-time stochastic partial differential equation, which was given meaning, for example, by Martin Hairer's work in the last few years. But if you discretize it — if you consider that your interface is just this black line with slopes plus or minus one, and you suppose that it evolves by adding or removing lozenges — then, without going into the details, you can map the evolution of this interface very precisely onto an exclusion process. The idea is that slope minus one corresponds to a particle and slope plus one to a hole, and when a particle hops, the interface evolves exactly by the addition of a small rhombus. It's perfectly equivalent.
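The particle-to-interface dictionary just described can be written down in a few lines; a minimal sketch (function name mine):

```python
def height_profile(config):
    """Map an exclusion-process configuration to a KPZ-like interface:
    a particle (1) is a slope -1 step, a hole (0) a slope +1 step, so a
    particle hopping right changes the interface by one small rhombus."""
    h = [0]
    for tau in config:
        h.append(h[-1] + (1 - 2 * tau))   # +1 step for a hole, -1 for a particle
    return h

# Alternating particles and holes give a flat (zigzag) interface:
print(height_profile([1, 0, 1, 0]))       # -> [0, -1, 0, -1, 0]
```

When the particle at the left of a hole hops right, the local pair (1, 0) becomes (0, 1) and the local minimum of h is raised by 2: that is exactly the deposition of a rhombus.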
So many of the results derived for the exclusion process can be pushed into the KPZ world and tell you things about KPZ. Now let's go deeper into the exclusion process. If we want to study it, we have the bulk dynamics, but also boundary conditions, and the mathematics and the physics are very different depending on the boundary conditions. The simplest case is the periodic one: Thierry told us this morning that there, at least, the stationary measure is trivial — factorized, flat. Maybe the most realistic case is the one with open boundaries: a finite lattice coupled to reservoirs. This is really the rod connecting a battery to the ground, for example. Another very nice, and mathematically maybe the most fascinating, case is the infinite line, where there are plenty of beautiful things; I hope to reach the third part of my talk and tell you a bit about it. In each sub-case the techniques and the results are somewhat different. So I'm going to start with the simplest one, the ring, then go to the open boundary case, and hopefully have time to tell you about the infinite line and its relation to random matrices. So let's start with the exclusion process on the ring, where I want to tell you a little — but, unfortunately, not much — about the Bethe ansatz. If we write the dynamics of the exclusion process on the ring and spell out the Markov generator precisely, we just say that the configurations are given by the positions of the particles on the ring, and the Markov generator is a kind of discrete Laplacian. There is an asymmetry, p and q; here I have rescaled things so that particles jump in the trigonometric (counterclockwise) direction with rate 1 and in the opposite direction with rate x, with x less than 1.
So I have a kind of discrete Laplacian operator which plays the role of the generator of the dynamics, but I have to be careful that some configurations are forbidden: I cannot put two particles on the same site. That gives one way of writing the evolution operator of the exclusion process on the ring. The first thing I want to know is the stationary solution of this equation, the stationary probability distribution, and this is a simple exercise. The number of configurations is given by the binomial coefficient — L sites choose N particles — and the stationary distribution simply gives every configuration the same probability: it is the flat measure, equal to the inverse of the binomial. This is just another way of saying that the measure factorizes in the grand canonical ensemble. But this is not enough. If we want to understand the dynamics of this system, we need to know the eigenstates of the operator. It's a linear equation, so the best we can hope for is to diagonalize the operator fully. At the level of the stationary measure everything is flat: the density profile is flat on average, with Gaussian fluctuations. However, the current is non-trivial, and I will tell you how to investigate its properties. So, as I said, we are interested in diagonalizing the generator of this Markov process, which I call M, and a first very beautiful observation — due, I think, to Deepak Dhar at the end of the 80s, and then spelled out by Gwa and Spohn — is that this generator M is something very familiar to physicists, and maybe much less so to mathematicians: this generator is nothing but a spin chain. You can really rewrite this problem as a quantum spin chain with Pauli matrices.
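The generator just described can be built explicitly for a small ring. A minimal sketch of my own (conventions and names are mine): configurations are N-subsets of sites, the matrix has gain and loss terms, and one can check directly that the flat measure is stationary.

```python
from itertools import combinations
import numpy as np

def ring_asep_generator(L, N, p=1.0, q=0.5):
    """Markov generator M of the ASEP with N particles on a ring of L sites
    (M[to, from] convention; each column sums to zero).  Configurations are
    the N-subsets of sites, so M has binomial(L, N) rows and columns."""
    configs = [frozenset(c) for c in combinations(range(L), N)]
    index = {c: k for k, c in enumerate(configs)}
    n = len(configs)
    M = np.zeros((n, n))
    for k, c in enumerate(configs):
        for i in c:
            for rate, j in ((p, (i + 1) % L), (q, (i - 1) % L)):
                if j not in c:                    # exclusion: target must be empty
                    c2 = frozenset(c - {i} | {j})
                    M[index[c2], k] += rate       # gain term for the new config
                    M[k, k] -= rate               # loss term for the old one
    return M, configs

M, configs = ring_asep_generator(L=6, N=3)
# The flat measure (all binomial(6, 3) = 20 configurations equally likely)
# is stationary: M applied to it vanishes.
flat = np.ones(len(configs)) / len(configs)
print(np.abs(M @ flat).max())
```

The full spectrum of this small matrix can then be obtained numerically, which is exactly what becomes impossible for large L and what the Bethe ansatz circumvents.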
I imagine that for classically trained mathematicians it may not be completely clear what that means, but for a solid-state physicist it's the Holy Grail. When a solid-state physicist sees a spin chain, he is happy: he has almost a hundred years of knowledge about it — eighty, say — and can use plenty of methods to investigate the problem. So the exclusion process is a quantum spin chain in disguise, a problem of one-dimensional quantum magnetism in disguise, and there are many techniques to attack it. These quantum spin chains were invented by Heisenberg in the late 1920s, and the first solution of such a quantum problem was found by Hans Bethe in 1931, who invented the so-called Bethe ansatz to solve this kind of quantum spin chain. So what is the Bethe ansatz about? The Bethe ansatz is a way of diagonalizing this matrix. If there were no interactions — if there were only one particle on the circle — you could use Fourier: plane waves would diagonalize your system. If the particles were independent, the eigenfunctions of your system would simply be products of plane waves: again, independence. What Bethe tells us is that for certain very special classes of systems — the systems one can integrate, can solve; in fact essentially all the systems we know how to compute with, the integrable systems — even though they interact non-trivially, their eigenfunctions can be written as linear combinations of products of plane waves. That is the heart of the Bethe ansatz. Of course most systems are not integrable and cannot be solved by this method, but all the systems we do know how to solve are, somehow, in disguise, solvable by the Bethe ansatz.
The Ising model, the six-vertex model, even classical models — they are, in fact, also amenable to the Bethe ansatz. So the idea is to look for plane-wave solutions for the eigenvectors of your evolution matrix, and this works here. Instead of writing exponential of i k x, I write the fugacity z for exponential of i k. You have to use linear combinations of plane waves with these wave vectors z, and, plugging this ansatz back into the eigenvalue equation, the z's have to solve a system of algebraic equations: the Bethe equations. This may look a bit quick, so let me insist on the point. The matrix you want to diagonalize is a huge operator: its size is the size of the configuration space, 2 to the power L — more precisely the binomial of L choose N. The Bethe ansatz tells you to look instead for L fugacities z_i. So you go from 2 to the power L down to L unknowns satisfying nonlinear algebraic equations: a huge reduction in complexity. Just to give an example: for a system of size 10, your Markov matrix has dimension 1024 — big, but you can still diagonalize it exactly. You may go up to size 20 at most, where the dimension is of order a million. But the Bethe equations involve L variables, where L is the size of the system, so you can perfectly well solve them for a system of size 150 — and 2 to the power 150 you will never be able to diagonalize. You go from exponential to linear or quadratic complexity, and that is the key to the solvability of the model. And the beautiful thing is that, in the case of the exclusion process, these equations, which do not look so nice, can even be solved explicitly in certain cases, because the roots lie on nice curves called Cassini ovals. So you can really extract some results purely analytically.
And you can compute the whole tower of eigenvalues of your Markov operator — at least the most interesting ones, those close to the stationary state, the ones that decay the slowest. You can classify all the excitations of the operator and make a complete spectral analysis of the Markov operator in this case. With the full spectrum, you can even take many initial conditions — not completely arbitrary, but many interesting initial data — decompose them into eigenvectors, and carry out the full time evolution thanks to this exact diagonalization. So in some sense you can fully solve the problem. In particular, this allows you to compute the relaxation time and to see how it scales with the size of the system. These exclusion processes are non-diffusive as soon as they are not symmetric: the relaxation time scales like the system size to the power 3/2, and not like L squared, which would be the diffusive behavior. You can even predict oscillations, waves — again, for the physicists, there are plenty of things you can compute and compare with numerical simulations, and a lot of phenomenology underlying this simple model that you can access analytically. Okay, now I want to tell you how to calculate the statistics of the current, and what the results are. Here we have no reservoirs, so we cannot count how many particles went from left to right through a boundary — but that's not a problem. We can just sit somewhere on the lattice and count how many particles jumped from site i to site i+1 during time t, minus the number of particles that jumped from i+1 to i. So there is a local current, which I call y(t). Sorry again: this was q(t) this morning; it is y(t) this afternoon; probably Milton will use yet another notation. And we want to know the statistics of this y(t).
Ideally we would compute the full distribution of y(t), but what we know how to compute is its Laplace transform, the characteristic function: the average of exponential of mu y(t) in the long-time limit. If we formally expand in powers of mu, we get all the moments of y. And what one can prove — rigorously, even for a physicist — is that in the long-time limit this exponential average behaves like exponential of E(mu) times t, so that E(mu) is 1 over t times the log of the average of the exponential: E(mu) is nothing but the cumulant generating function of the random variable y(t). This is what we want to compute. And how do we compute it? Notice that this is a purely probabilistic problem: E(mu) is the cumulant generating function of a random variable. The beautiful trick, which goes back to Donsker and Varadhan, is that you can trade this purely probabilistic question for an eigenvalue problem. The idea is the following: there exists a deformation of your generator, of your dynamics, which I will call M_mu — I will explain shortly what it is — such that the function E(mu) is the dominant eigenvalue of the operator M_mu. Somehow, the quantity you want to compute is nothing but an eigenvalue of an operator: you have traded a probabilistic problem for an eigenvalue problem, and there are plenty of tricks to compute eigenvalues. To be more precise, the way you deform your generator is this: you want to count the particles hopping between site i and i+1, so you put an enhancement factor, exponential of mu, on all the jumps from i to i+1.
So I call M_plus the part of the generator that makes a particle jump from i to i+1, and I put a factor exponential of minus mu on the jumps from i+1 to i. You deform your dynamics with these two fugacities, exponential of plus or minus mu, locally, at the bond where you measure the current, and you construct this new operator from the generator itself. This new operator is such that its dominant eigenvalue is precisely the cumulant generating function. So now you want to compute an eigenvalue, and the nice feature is that even after the deformation, the model you obtain is still integrable by Bethe ansatz. You remain in the class of solvable models, and you can still use this technique — invented for spin chains — to solve this operator, which now has less to do with spin chains. The trick still works and allows you to compute the full spectrum of the matrix M(mu); but you don't care about the full spectrum, you just want the dominant eigenvalue. This can be done, and let me tell you, in a very sketchy way, what the solution looks like. We want the function E(mu), and, as is usually the case in this kind of problem, we never get E(mu) directly: we get a parametric representation, E as a function of a parameter B and mu as a function of the same parameter B, and at least formally one has to eliminate B between the two equations. So we get mu as a series in B and E as a series in B; these series contain terms B to the k over k, with coefficients C_k and D_k. These two families of coefficients are combinatorial numbers — they have combinatorial interpretations in terms of trees and forests and such — but that's not so important here. The point is that we can compute them as residues.
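The deformation trick can be checked directly on a small ring. This is a sketch of my own (names and conventions invented for the example): multiply the off-diagonal jump terms across one marked bond by e^{+mu} or e^{-mu}, keep the diagonal untouched, and the largest eigenvalue of the resulting matrix is the cumulant generating function E(mu), whose derivative at 0 is the mean current.

```python
from itertools import combinations
import math
import numpy as np

def tilted_generator(L, N, p, q, mu, bond=0):
    """Deformed Markov matrix M(mu) for the ASEP on a ring: hops across one
    marked bond carry fugacities e^{+mu} (forward) / e^{-mu} (backward),
    while the diagonal keeps the bare escape rates.  The dominant eigenvalue
    of M(mu) is the cumulant generating function E(mu) of the current."""
    configs = [frozenset(c) for c in combinations(range(L), N)]
    index = {c: k for k, c in enumerate(configs)}
    n = len(configs)
    M = np.zeros((n, n))
    for k, c in enumerate(configs):
        for i in c:
            for rate, j, tilt in ((p, (i + 1) % L, +1), (q, (i - 1) % L, -1)):
                if j not in c:                     # exclusion
                    c2 = frozenset(c - {i} | {j})
                    w = rate
                    if (tilt == +1 and i == bond) or (tilt == -1 and j == bond):
                        w *= math.exp(tilt * mu)   # jump crosses the marked bond
                    M[index[c2], k] += w
                    M[k, k] -= rate                # diagonal stays undeformed
    return M

def scgf(L, N, p, q, mu):
    return max(np.linalg.eigvals(tilted_generator(L, N, p, q, mu)).real)

# Sanity checks on a small ring: E(0) = 0, and E'(0) is the mean current
# through the bond, (p - q) * P(particle, hole) under the flat measure.
L, N, p, q, eps = 6, 3, 1.0, 0.5, 1e-5
J_numeric = (scgf(L, N, p, q, eps) - scgf(L, N, p, q, -eps)) / (2 * eps)
J_exact = (p - q) * math.comb(L - 2, N - 1) / math.comb(L, N)
print(J_numeric, J_exact)
```

Higher derivatives of E(mu) at 0 give the variance and the higher cumulants of the integrated current, exactly as in the talk.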
There exists a function phi_k such that, taking a small contour that encircles 0, C_k is a residue of phi_k, and D_k is a residue of phi_k prime at minus 1. So if you know phi_k — yes, I learned complex analysis from you a long time ago; it's still useful for me, and I hope I didn't say anything wrong — you can compute C_k and D_k. So the information is in fact contained in phi_k. We can wrap all this together using a generating function: the full information about the phi_k is embodied in a single function W(B). So the object you need in order to compute the cumulant generating function is this function W(B); once you know it, you can unfold everything. Now, this function W(B) is the solution of a very nice equation. Again, just look at the structure, not the details: W(B) solves a self-consistent equation which contains a logarithm involving W itself, a linear operator with a kernel, and a prefactor which is a simple rational function. This is the general structure: W(B) is the solution of a self-consistent equation with a kernel. And this kernel is not arbitrary: it appears in a lot of combinatorial work, especially in the study of partitions by Andrews and Ramanujan. These are typical objects from combinatorics. I won't tell you how to solve this equation, but it can be done, and if you do it explicitly you can, in some simple cases — for example when the backward jump rate is zero — obtain explicit formulas. That's important: up to now you could have thought this was all hand-waving and abstraction, but in the end you get explicit results.
And you see that in this simple case the very complicated-looking coefficients C_k and D_k are nothing but binomial coefficients. So it's not such a big deal. Now, eliminating B, you can compute E as a function of mu and obtain the average current, its variance, and so on. You can even go all the way and reconstruct the large deviation function of the current. As Thierry drew this morning, the large deviation function of the current typically has a kind of well shape, but it has some nice physical features: it is asymmetric. The important features for physicists are, first, the way it vanishes around the typical value — quadratically or not; here it is quadratic, which gives you the fluctuations, the variance of the current. The other two important features we look for in physics are the tails, the left and right asymptotic behaviors of the large deviation function. And as you see, it is highly asymmetric. The exponents 5/2 and 3/2 are not so easy to predict, but one thing is clear: the function grows much faster to the right than to the left. This tells you that in these simple systems it is much easier to reduce the total current than to increase it. The reason is simple. If you want to reduce the current, that's trivial: you just need one lazy particle. If one particle suddenly decides not to jump anymore, it blocks everybody else, and the current drops: one guy can prevent everyone from progressing. But if you want to increase the current, all the particles have to be very active and jump very fast, and this is much less likely. This is, at least qualitatively, the reason for the strong asymmetry between the left and right tails of the distribution.
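The passage from the cumulant generating function E(mu) to the large deviation function phi(j) is a Legendre transform, which is easy to do numerically. A toy sketch of my own (the Gaussian E(mu) below is invented purely to test the transform, it is not the ASEP result):

```python
import numpy as np

def legendre_rate_function(e_of_mu, mus):
    """Numerical Legendre transform: phi(j) = max_mu [ mu*j - E(mu) ],
    recovering the current large deviation function from the cumulant
    generating function on a grid of mu values."""
    es = np.array([e_of_mu(m) for m in mus])
    def phi(j):
        return np.max(mus * j - es)
    return phi

# Toy check with a Gaussian CGF  E(mu) = J*mu + s*mu^2/2:
# the transform should give  phi(j) = (j - J)^2 / (2 s).
J, s = 0.15, 0.05
mus = np.linspace(-10, 10, 20001)
phi = legendre_rate_function(lambda m: J * m + 0.5 * s * m * m, mus)
print(phi(J), phi(J + 0.1))   # ~0 and ~0.1
```

For the real ASEP, E(mu) is not quadratic, and it is precisely this non-Gaussianity that produces the asymmetric 5/2 and 3/2 tails.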
Another calculation which can be done explicitly is the weakly asymmetric limit. Weakly asymmetric means that particles jump with rates almost one and one: the difference between the left and right rates is of order 1 over L, so the rates are 1 and 1 minus nu over L — almost symmetric. In the weakly asymmetric case it is possible to solve this equation, resum the series, and draw pictures. So here are pictures of the large deviation function for different values of the asymmetry. What you see is that if the asymmetry is not too big, the curve is smooth; but when the asymmetry goes beyond a critical value — 8 pi here — a kink appears in the large deviation function. A kink is reminiscent of a phase transition. Remember this morning, in the Ising case, what Thierry drew: the free energy F as a function of the field H. At low temperature there was a kink; at high temperature it was smooth. This is the perfect analogue here, except the kink appears not at low temperature but at large enough asymmetry: the system undergoes a phase transition. So this is one more element — not of proof, rather of belief, of faith — that large deviation functions play a role analogous to thermodynamic potentials out of equilibrium. At equilibrium, thermodynamic potentials lose analyticity at phase transitions; here, large deviation functions develop a non-analyticity, a kink, at phase transitions. The phase transition that occurs here is again very simple to explain in physical terms: for small asymmetry, close to the symmetric case, the optimal profile is flat; but when the asymmetry becomes strong enough and you condition on a large enough atypical current, the density profile has to become non-flat.
So you get a travelling wave, a kind of soliton, which develops in your model and turns around your system. Again, it's easy to understand qualitatively; doing the precise calculations is less easy. And by the way, for this calculation, which was done by Bethe ansatz, there was a prediction using macroscopic fluctuation theory by Thierry Bodineau and Bernard Derrida, which predicted that the kink of the large deviation function should exist, at least from the macroscopic point of view. And here the microscopic calculation perfectly matches this prediction. Okay, so let's go now to the second part. How much time do we still have? 20 minutes? Okay. So the second part is the open exclusion process, a system which is closer to reality, quote-unquote. Here you see the problem is that even the stationary measure is not trivial. The system has 2 to the power L configurations, and we have no Gibbs measure underlying it. And if we just take this very general system, we don't know, in the stationary state, the likelihood of seeing, for example, this precise configuration, which is 0, 1, 0, 1, 1, and so on. So already the invariant measure is a difficult problem, which was solved more than 20 years ago by Derrida, Evans, Hakim and Pasquier. And they had this very nice idea of using again a quantum mechanical point of view, although the system is purely classical. The idea is the following — I just come back to the picture. A configuration here is a string of 0s and 1s, okay, so you can represent it as a binary number. And the idea is that there exist two operators — let's call them D and E — and instead of writing a binary number, I will write this configuration as a word in these two letters, D and E: E for empty, D for occupied. For example, this configuration will be just E D E D squared: E, D, E, D, D. Okay.
So this is a configuration written in this two-letter alphabet. And their idea was that if D and E are well-chosen operators satisfying a well-chosen algebra, and if we take a trace over this algebra, then the stationary weights will be given by this trace. Okay, it looks strange, but it works. And don't forget one thing: this is a finite-size Markov process, and Perron-Frobenius tells us that there is a unique stationary state. So any trick to compute it is a good trick — don't ask why it works; it works. So the idea is to choose these two operators D and E satisfying a simple quadratic algebra. And the trace is, in fact, a matrix element: you have a bra ⟨W| on the left and a ket |V⟩ on the right, as Dirac would say. And the bra and the ket are eigenvectors of linear combinations of these two operators D and E. So, again, you just choose this algebra, compute the matrix element of any word E D E D squared, blah, blah, blah, using this algebra, and this spits out the steady state of your Markov process. And it works. This can be made more explicit, and you can use it to compute the phase diagram of your model, the density profiles, the correlations, anything in the stationary state. So, in one word, this algebra plays the role of the Gibbs measure. This is what replaces the Gibbs measure for this kind of model and many others. It has been used in many variants. It's not totally universal, but there seem to be many one-dimensional non-equilibrium models where the algebra trick works well. For the connoisseurs, this algebra is related to the algebraic Bethe ansatz, so it has to do with integrability. Originally it was guessed out of the blue, but after 20 years people understand better how to construct it using integrability. But this would be worth a full seminar of its own.
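As a minimal numerical sketch of the matrix-product recipe — for the special case of entry and exit rates alpha = beta = 1, using one standard representation of the quadratic algebra DE = D + E with ⟨W|E = ⟨W| and D|V⟩ = |V⟩ (the specific matrices and the Catalan-number check are assumptions of this sketch, not the talk's own computation):

```python
import numpy as np
from itertools import product

def dehp_matrices(L):
    # Standard representation of DE = D + E with <W|E = <W| and D|V> = |V>
    # (entry/exit rates alpha = beta = 1).  Truncating to (L+1) dimensions
    # is exact for words of length L, since each matrix shifts the index
    # by at most one place.
    n = L + 1
    D = np.eye(n) + np.eye(n, k=1)   # D = identity + upper shift
    E = D.T                          # E = D transposed
    W = np.zeros(n); W[0] = 1.0      # bra <W|
    V = np.zeros(n); V[0] = 1.0      # ket |V>
    return D, E, W, V

def stationary_weight(config):
    """Unnormalised stationary weight of a configuration (tuple of 0s and 1s):
    the matrix element <W| X_1 ... X_L |V>, with X_i = D if site i is
    occupied and X_i = E if it is empty."""
    D, E, W, V = dehp_matrices(len(config))
    vec = V
    for tau in reversed(config):
        vec = (D if tau == 1 else E) @ vec
    return float(W @ vec)

def partition_function(L):
    # Normalisation Z_L = <W|(D + E)^L|V>; for alpha = beta = 1 these
    # come out as Catalan numbers: 2, 5, 14, 42, ...
    return sum(stationary_weight(c) for c in product((0, 1), repeat=L))
```

For instance, the weight of the configuration (1, 0) is ⟨W|DE|V⟩ = ⟨W|(D + E)|V⟩ = 2, and Z_2 = 5, so the stationary probability of (1, 0) is 2/5.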
Okay, you can find representations of the algebra; if you want to know more, infinitely more, I recommend the review by Blythe and Evans on the matrix-product ansatz. So let me recall the calculation of this morning in the equilibrium case: the thermodynamic free energy tells you about fluctuations of the density profile. Well, that was a calculation based on partition functions and on the Gibbs measure; one can redo this feat using the algebra, because now we have an analogue of the Gibbs measure for this exclusion process. It's a very hard calculation, but at least for this system out of equilibrium, using the matrices, one can compute the probability of seeing any density profile between two reservoirs at densities rho_a and rho_b. This was done by Derrida, Lebowitz and Speer 12 years ago. Just to show you a bit what it looks like: if we were at equilibrium, we would just have (1 − x) log(1 − x) + x log x, which is nothing but Stirling's formula, basic equilibrium statistical mechanics. Well, this very simple formula is replaced by something much more complicated, which is non-local and involves the solution of a non-linear ODE with boundary conditions. And indeed, the basic function log (1 − x)/(1 − y) appears there, but in a very, very indirect way. So the solution for the non-equilibrium case is non-local and much, much more complicated than the equilibrium one. And one important feature: you could think that, okay, I just take the equilibrium result and replace the density by the local density. This is wrong, just completely wrong. There is no way of extrapolating the equilibrium formula into the non-equilibrium one. So this was the first question, the density profile. The next is about the current, and I'm going to rush through it; it has a structure very similar to the previous one.
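For reference, the equilibrium formula quoted here is the local functional that Stirling's formula gives for the probability of a density profile; writing rho-bar for the common reservoir density (notation assumed here), it reads

```latex
\mathcal{F}_{\mathrm{eq}}[\rho] \;=\; \int_0^1 \left[\,\rho(x)\ln\frac{\rho(x)}{\bar\rho} \;+\; \bigl(1-\rho(x)\bigr)\ln\frac{1-\rho(x)}{1-\bar\rho}\,\right] dx .
```

The non-equilibrium functional of Derrida, Lebowitz and Speer is emphatically not this expression with rho-bar replaced by the local stationary profile: it is non-local, coupling the fluctuation at one point to the profile everywhere else.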
We want to compute how many particles went from the left reservoir to the right reservoir during time t. It was called Y_t before; now I call it N of t, and it was called Q_t this morning: the total number of particles that went from left to right during time t. So again, what we can compute is the exponential generating function, and this exponential generating function makes the cumulant generating function appear. And this cumulant generating function is the dominant eigenvalue of a deformed operator. Each time a particle enters the system, you put a factor exponential mu; each time a particle leaves the system, you put an exponential minus mu. So again, you have traded your statistical problem for an eigenvalue problem. And this deformed matrix can be diagonalized using a generalized matrix product ansatz. The structure of the solution is very similar: again, parametric functions with combinatorial coefficients, which now depend on the boundary rates and on the asymmetry parameter q. Again, these coefficients are residues; the contour is much more complicated, but they are again residues. So again, everything is encoded in this function phi_k, which you can put together into this W_B. And as before, W_B is the solution of a self-consistent equation which looks exactly the same as the one before, with the same kernel. But remember the simple rational function, which was (1 + z) to the power N divided by z to the power L, or the opposite? Well, now it's replaced by a much, much more complicated object. But if there are some connoisseurs in the room: this complicated object is in fact, again, something which appears in the calculation of partition functions — it's called the Askey-Wilson generating function for these partition problems. So it's not an unknown object; it's a kind of natural object that appears in this game.
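As a toy illustration of this "statistical problem traded for an eigenvalue problem" — a minimal sketch, not the talk's actual computation — one can build the Markov generator of the open TASEP on a few sites, weight every particle-exit event at the right boundary by e^mu, and take the dominant eigenvalue as E(mu). The rates alpha = beta = p = 1 and the check value 2/5 (the mean current for L = 2 with these rates) are assumptions of this sketch:

```python
import numpy as np
from itertools import product

def deformed_generator(L, mu, alpha=1.0, beta=1.0, p=1.0):
    """Markov generator of the open TASEP on L sites, with every
    particle-exit event at the right boundary weighted by exp(mu)."""
    states = list(product((0, 1), repeat=L))
    index = {s: i for i, s in enumerate(states)}
    M = np.zeros((2**L, 2**L))
    for s in states:
        i = index[s]
        # entry at site 1
        if s[0] == 0:
            M[index[(1,) + s[1:]], i] += alpha
            M[i, i] -= alpha
        # bulk hops to the right
        for k in range(L - 1):
            if s[k] == 1 and s[k + 1] == 0:
                M[index[s[:k] + (0, 1) + s[k + 2:]], i] += p
                M[i, i] -= p
        # exit at site L, counted with fugacity exp(mu)
        if s[-1] == 1:
            M[index[s[:-1] + (0,)], i] += beta * np.exp(mu)
            M[i, i] -= beta
    return M

def cgf(L, mu):
    """Cumulant generating function E(mu): dominant eigenvalue of the
    deformed generator (real, by Perron-Frobenius)."""
    return max(np.linalg.eigvals(deformed_generator(L, mu)).real)
```

At mu = 0 the matrix is a bona fide Markov generator, so E(0) = 0, and a numerical derivative of E at 0 reproduces the stationary current (2/5 for L = 2).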
And this allows you to compute, for finite-size systems, the generating function of the cumulants, and then to go to the large deviation function by Legendre transform in the large-size limit. So you see, all this horrible calculation yields a rather simple formula in the end — but you have all the finite-size corrections, and there's a phase diagram; you can study much more than just the infinite-size limit. That infinite-size limit was again obtained by Bodineau and Derrida using macroscopic fluctuation theory. So on one hand you have the exact solution using combinatorics and integrability; and on the other you can take the infinite-size limit, match it to the macroscopic fluctuation theory of Jona-Lasinio et al., and solve the Euler-Lagrange-type equations to get the same formula. So things match well. And these kinds of calculations were important when people were not completely sure about the relevance and the correctness of these variational answers. In some special cases — just to flash them rapidly — everything can be made fully explicit. So it's not purely abstract; you can have numbers. In particular, you can compute the skewness, the third cumulant of the current. This is an exact combinatorial formula valid for any system size, and if you go to the infinite-size limit, you see that the skewness tends to a finite number, which means that even in the infinite-size limit the current has non-Gaussian fluctuations: the third cumulant is non-zero. Okay, it's small — less than a percent — but it's non-zero, so it's really non-Gaussian. Now I want to tell you, during the last 7 or 8 minutes, about the infinite-line case. The infinite line involves a totally different type of mathematics, but it's again based on integrability, which is the leitmotiv of this talk, and on the Bethe ansatz. So now we consider, let's say, a finite number of particles on the infinite line, hopping with rates p and q, with exclusion.
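The Legendre-transform step invoked above is, in generic notation: if E(mu) is the cumulant generating function of the current, the large deviation function Phi(j) of Q_t / t is its Legendre transform,

```latex
\Phi(j) \;=\; \sup_{\mu}\,\bigl[\,\mu j \;-\; E(\mu)\,\bigr],
\qquad
\operatorname{Prob}\!\left(\frac{Q_t}{t}\approx j\right) \underset{t\to\infty}{\sim} e^{-t\,\Phi(j)} .
```

A kink in E(mu) Legendre-transforms into a flat segment of Phi(j), and vice versa — which is why the kinks discussed earlier signal phase transitions.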
Well, the basic quantity to compute is the probability of finding the particles at y_1, y_2, ..., y_n at time t, knowing that they were at x_1, x_2, ..., x_n at time 0. That's the Green function of the problem, the propagator. And thanks to the Bethe ansatz — by using linear combinations of plane waves — there exists an exact formula for this propagator. Of course I went very fast, but here you recognize the fugacities and a sum over permutations, which is typical of the Bethe ansatz; and if you have memorized the Bethe equations that I wrote 20 transparencies ago, or maybe 50, they were of this form. So this is in fact very closely related to the Bethe ansatz on the periodic ring, only here the system is open and infinite. This formula was initiated by Gunter Schütz and then really developed by Tracy and Widom in a series of papers over the last four or five years. But although this is an exact formula, the problem is to be able to do something with it: it looks horrible. It's a sum over all n factorial permutations, where n is the number of particles — a big formula. But there is some combinatorics hidden in it, and you can reduce it, at least in some cases, to a much nicer-looking formula. So here comes the importance of the initial condition, because the initial condition enters your Green function as an ingredient. If you start with the simple initial condition that all particles are lined up on the negative side at time 0 — at sites 0, −1, −2, like that — and you take the special case where particles can only jump to the right with rate 1, no backward jumps, then you can take the previous formula and massage it into a much more compact result.
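Schematically — the precise amplitudes and integration contours are not fixed by the talk, so this is only the generic shape — the Bethe-ansatz propagator for N particles has the plane-wave form

```latex
P_t(y_1,\dots,y_N \mid x_1,\dots,x_N) \;=\; \sum_{\sigma\in S_N}\; \oint\!\cdots\!\oint\; \prod_{j=1}^{N}\frac{dz_j}{2\pi i\,z_j}\; A_\sigma(z_1,\dots,z_N)\; \prod_{j=1}^{N} z_{\sigma(j)}^{\,y_j-x_{\sigma(j)}}\; \prod_{j=1}^{N} e^{\,\varepsilon(z_j)\,t},
```

where epsilon(z) = p/z + qz − 1 is the single-particle relaxation rate for hop rates p to the right and q to the left, and the amplitudes A_sigma are fixed by the two-particle exclusion condition — this is the structure of the Schütz and Tracy-Widom formulas mentioned here.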
So we want to compute the total current, the total number of particles that flowed through the bond (0, 1); let's call it Q of t — at last, the same Q of t as this morning. And we want to compute the probability that Q of t is larger than m, an integer: the probability of having more than m particles on the right side after time t. Well, this is in fact the probability that the m-th particle has jumped through the bond (0, 1) — the same thing, because of exclusion. Starting from this formula and using some quite elementary and, in that case, not too difficult manipulations — a two-page calculation with determinants — you can show that this probability is given by this integral. The integral involves the square of a Vandermonde determinant and some exponentials, and runs over the cube [0, t] to the power m. So it's very simple and compact: there was a sum of factorially many terms before, and now it's just a single, very compact integral. If you have seen some talks on random matrix theory, this should strike you, because these kinds of integrals — Selberg integrals and related ones — appear all the time in random matrix theory. And indeed this integral has a very precise interpretation in a certain ensemble: it is the distribution of the largest eigenvalue in the Laguerre ensemble. I'm not going into the details, but this is an ensemble like the GUE and so on. And then it's possible to use all the knowledge developed over the last 20 years on the distribution of largest eigenvalues. In particular, Johansson solved this TASEP case in 2000: he was able to show that Q of t behaves like t/4 plus t to the power 1/3 times a random variable, and this random variable is precisely distributed like the largest eigenvalue of a random matrix ensemble — it follows the so-called Tracy-Widom distribution. So that's the connection between the exclusion process and random matrices, through this Bethe-ansatz formula that you can transform into a random matrix integral. It's one way of seeing this connection, and this is also how these Tracy-Widom distributions of dominant eigenvalues of random matrices come into the game. So that's one interesting feature.

There is another nice relation, between TASEP and corner growth. As I told you, the exclusion process is related to the Kardar-Parisi-Zhang equation, and a configuration of particles can be drawn as a partition, or as a one-dimensional interface: each particle corresponds to a segment of slope −1, each hole to a segment of slope +1, so this configuration of particles is nothing but this interface. Okay. If you look more precisely at what's happening, and draw all the squares, you will see that the position of the rightmost particle corresponds to the length of the first row of this Young diagram, because you can interpret this staircase as a Young diagram. So a configuration of particles in the exclusion process is an interface; if you fill in the missing squares you get a Young diagram; and you can interpret the position of the first particle as the length of the first row of the Young diagram, the position of the second particle as the length of the second row, and so on. So there is a perfect mapping between the two. But the statistics of Young tableaux is an old subject which has been studied a lot, and it is also known that the first row of a Young tableau is related to the length of the longest increasing subsequence in a randomly chosen permutation. What do I mean by that? Take the numbers from 1 to 7 and take a random permutation of them — this one. Suppose I want to extract an increasing subsequence: for example, 1, 3, 4, 6 is an increasing subsequence; of course 1, 6 or 1, 7 is also an increasing subsequence. And I'm interested in the longest one I can extract from a randomly chosen permutation. There can be a few of maximal length, but let's call L(sigma) the length of the longest increasing subsequence in a given permutation. Well, this length is nothing but the length of the first row of a randomly chosen Young tableau; these two things are the same. And Ulam posed the problem of the statistics of this L(sigma), the length of the longest increasing subsequence.

So by now we are convinced that everything in the world is embodied in TASEP, in the exclusion process, and indeed, by undoing these mappings, you can relate this to the position of the dominant particle, or to the current, in the exclusion process. And indeed, the length of the longest increasing subsequence in a random permutation grows like 2 times the square root of n — the analogue of the t/4 before — plus n to the power 1/6 — which is an avatar of the t to the 1/3 before, just by rescaling — times the Tracy-Widom variable. So again: a random permutation is, by RSK, by the Robinson-Schensted correspondence, nothing but a pair of Young tableaux; a Young tableau is a corner growth; and the corner growth is the exclusion process; and you know plenty of things about the exclusion process because it's solvable by Bethe ansatz. So that's how integrability can be used in a very indirect way. One-but-last transparency, just to remind you that this is supposed to be a physics talk: there are experimental results. The exclusion process is a discretization of the Kardar-Parisi-Zhang equation, and the Kardar-Parisi-Zhang equation describes the growth dynamics of interfaces. Using similar technologies, much more elaborate, a few groups five or six years ago — Sasamoto and Spohn; Amir, Corwin and Quastel; and in Paris Dotsenko, Le Doussal and Calabrese — were able to solve the one-dimensional Kardar-Parisi-Zhang equation. By solving, I mean — there are plenty of solutions, plenty of questions — that the statistics of the height above a point of this random interface was fully understood and investigated: not only the average or the variance, but the distribution, the full distribution. And of course this is related to the Tracy-Widom law, which appears in all these games. Okay — so the Tracy-Widom law I
didn't go into in detail is something quite abstract: it involves Painlevé non-linear equations and so on, and it's in fact quite complicated and not so easy to draw, even numerically. Well, now it has been implemented in Mathematica — great for Tracy and Widom, they are a function in Mathematica; just type it and draw it. But the beautiful thing is that a group in Japan, the group of Takeuchi and Sano, conducted experiments, real experiments, in liquid crystals. There are many phases in liquid crystals, and in one of these phases you have two types of arrangements, and one grows into the other. It's different, but just think of a piece of ice growing in water: it's not the same thing, but one phase of the liquid crystal, which appears darker on the camera, grows into another one. And they were able to monitor in real time the growth of this special phase of liquid crystal and to investigate very precisely the statistical properties of the interface, after subtracting the average. They obtained histograms for the distribution of this interface and showed that it indeed coincides with the Tracy-Widom distribution. And there is even something much more elaborate. There are different Tracy-Widom distributions, corresponding to the different ensembles GUE and GOE, and these different ensembles correspond to different initial conditions. I told you only about the initial condition where everybody is on the left and nobody on the right; if you translate it into the language of liquid crystals, or interfaces, it corresponds to growing from a circular interface or growing from a flat interface. This gives you two types of Tracy-Widom laws. They were able to conduct experiments in both cases and to show that the histograms correspond respectively to the GUE and GOE Tracy-Widom distributions in each of the cases. So these are very, very precise benchmark experiments. Okay, so I hope I have convinced you of what Thierry already showed this morning: that the exclusion process is the alpha and the omega of human knowledge — and so are large deviation functions; at least it seems that they are important for non-equilibrium statistical mechanics. I didn't tell you about the Gallavotti-Cohen symmetry that Thierry alluded to this morning, but this is another important feature that you can check in these models. A nice feature of the exclusion process, at least in the mathematical world, is that it is related to growth models — and who says growth models says also Young tableaux, corner growth, and ultimately random matrix theory. So it's a kind of central point where plenty of different theories converge. But this is not at all the end of the story; it's in fact only the tip of the iceberg. There is a whole field which has been developed recently, the field of integrable probability, in particular by Borodin, Gorin, Corwin, Sasamoto and many other people, and the exclusion process is one special case of a whole class of integrable stochastic models known as Macdonald processes, or even of some more general vertex models. And in all these models, which are strongly non-Gaussian, it is possible to obtain explicit formulas for the probabilities of some observables, to analyze them using asymptotics, and to derive new universal laws, such as the Tracy-Widom distribution, which are believed to play a role akin to that of the Gaussian law at equilibrium. Thank you. Any questions? You said at one point that a given configuration of empty and occupied sites corresponds to a given string of two operators, and then you choose an algebra to solve it. How do you choose the algebra? Are there some constraints?
That's part of the secret. So, as I told you, the real thing is that at the beginning they guessed it — they really just guessed it. Then they also started guessing for other models, and the body of knowledge on these models grew: there were more and more algebras floating around, and people kept trying, and so on. More recently, there is a series of works where you can try to construct the algebra using a variant, a chapter of the Bethe ansatz called the algebraic Bethe ansatz, in which the integrability technique naturally makes some operators appear which satisfy algebraic relations. This is a fairly standard set of ideas: the algebraic Bethe ansatz was developed by the Russian school, Faddeev and others, in the 70s and 80s; there are well-known books, so this is a well-established body of knowledge. And there is a kind of indirect — I would say not yet fully understood, but with a lot of recent progress — way of extracting these quadratic algebras starting from the constructive method of the algebraic Bethe ansatz. So you have to learn the algebraic Bethe ansatz first and then try to use it to extract these algebras — or you can try to guess them.