I wrote this down again here. So what you're given is: you have the space R^m, and you have some subspaces H_1, ..., H_k of it. Call their dimensions d_i, just for simplicity. And now you're given linear maps B_i from R^m into H_i. And you assume that these maps are essentially isometries, in other words, that B_i B_i^T is actually the identity on the Hilbert space H_i. Now, furthermore, assume that you have some constants c_i, non-negative constants (positive, in fact, otherwise you wouldn't count them), such that the sum of the c_i B_i^T B_i equals the identity on the whole space. Notice: here it's B_i B_i^T, there it's B_i^T B_i. Now, if you take any functions f_i from your Hilbert spaces H_i into the non-negative reals (you need them non-negative because you raise them to the power c_i), then you compute this product, the integral over R^m of the product of the f_i(B_i v)^{c_i}. This is always less than or equal to the right-hand side, the product of the integrals of the f_i over the H_i, raised to the powers c_i.

So what this says, in some sense, is that it's a kind of correlation inequality. Because, you see, these subspaces are all pretty crooked. They don't have to be aligned with anything in R^m; they can just hang in there somehow. And so this is, in some sense, a very highly correlated integral. Nevertheless, you can show that it is less than this number here, which just involves the integrals of the f_j's, and the constant is sharp. It is, in fact, saturated by Gaussians.

So let me make an example where, I think, most of you have used this inequality implicitly at some point: in the proof of the Sobolev inequality. So let's make an elementary example. The Sobolev inequality says that the L^1 norm of the gradient is greater than or equal to some constant times the L^{n/(n-1)} norm, where u is a function from R^n into the reals. And I choose u to be, say, C^1 with compact support, so that all the computations which I do now are fine, right? So how do we do that? I mean, you remember this from the textbooks, like Craig Evans's book on PDEs; I think it's totally standard. But what is it? You start out by writing u(x), by the fundamental theorem of calculus, as the integral from minus infinity to x_i of the partial derivative ∂u/∂x_i, evaluated at (x_1, ..., t, ..., x_n), with the t in the i-th place, right? I mean, that's obvious. Now you notice immediately that |u(x)| is bounded; let me just do it here, it's so elementary, to save time. It is less than or equal to what you get when you replace the x_i by infinity and put absolute values inside. Let me call this function g_i(x_1, ..., x_n), the integral of |∂u/∂x_i| over the whole i-th variable. And you notice that this function is constant in the x_i variable, so it has no decay in x_i, right? No decay whatsoever. And then what you do is you compute the integral of |u(x)| to the power n/(n-1), and you estimate it by the integral of the product over i from 1 to n of the g_i(x_1, ..., x_n), each raised to the power 1/(n-1) (I guess I forgot the 1/(n-1), OK?), integrated dx_1 ... dx_n.

Now I claim that I can write this right-hand side in the following way. I choose the matrix B_i to be the one which, applied to x, gives you (x_1, ..., x_{i-1}, x_{i+1}, ..., x_n): you keep everything up to x_{i-1}, you drop the i-th component, and then you go on. That's a linear map, right? And you can easily check what this linear map does. So what is your Hilbert space? Your Hilbert space H_i is simply the subspace where you just drop one coordinate; it's just one of these coordinate hyperplanes. So those are the subspaces you have here.
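[Editor's note: to collect the statement and the example data in one place; this is a reconstruction of the board, not verbatim. The theorem asserts that if B_i B_i^T = Id on H_i and Σ_i c_i B_i^T B_i = Id on R^m, then

\[
\int_{\mathbb{R}^m} \prod_{i=1}^{k} f_i(B_i v)^{c_i}\, dv \;\le\; \prod_{i=1}^{k} \left( \int_{H_i} f_i \right)^{c_i} .
\]

In the Sobolev example: m = n, H_i = {x in R^n : x_i = 0} (so d_i = n-1), B_i x = (x_1, ..., x_{i-1}, x_{i+1}, ..., x_n), c_i = 1/(n-1), and

\[
|u(x)| \le g_i(x) := \int_{\mathbb{R}} \left| \frac{\partial u}{\partial x_i}(x_1, \dots, t, \dots, x_n) \right| dt,
\qquad
\int_{\mathbb{R}^n} |u|^{\frac{n}{n-1}}\, dx \le \int_{\mathbb{R}^n} \prod_{i=1}^{n} g_i(x)^{\frac{1}{n-1}}\, dx,
\]

where g_i, being constant in x_i, is a function of B_i x.]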
What are the B_i's that define it for you? The B_i B_i^T is obviously the projection; I mean, it's the identity on this space H_i. And now what about the other relation, OK? Well, if you sum up the B_i^T B_i, that's a kind of triviality, right? What is it? Each B_i^T B_i is a diagonal matrix of 1's where at one point, in the i-th slot, you miss a 1 and have a 0 instead, OK? So what is the sum? When you sum this over i from 1 to n, that's just (n-1) times the identity. So therefore you go here, put in c_i = 1/(n-1), and you get the identity on your whole space, right? So you have verified the hypotheses.

So what does the theorem say? It says that when you integrate the product of these functions g_i to the power 1/(n-1), this is less than or equal to the product over j from 1 to n of the integrals of the g_j's, raised to the power 1/(n-1). And I shouldn't write dx_1 ... dx_n there; I should write dx_j-hat, because I'm not integrating over that variable, right? I'm integrating over the Hilbert space H_j, OK? So in some sense, when you look at the textbooks in PDE, the way you prove this step is a repeated application of Hölder's inequality together with an induction, right? But I think it's instructive. It's a totally elementary example, I know. But it's instructive because, in some sense, what you're using here is an old inequality which goes back to the late '40s, to Loomis and Whitney. They invented this whole subject. Now, of course, we know what this is, right? Since g_j is constant in x_j, its integral over H_j is just the integral of |∂u/∂x_j| dx over all of R^n. So the right-hand side is the product over j of the integrals of |∂u/∂x_j| over all of R^n, each raised to the power 1/(n-1). OK, and that implies, of course, the Sobolev inequality, right? It's obvious, OK? So this is how this stuff works. Now, I forgot the product here; when you bound each |∂u/∂x_j| by |∇u|, you get the L^1 norm of the gradient to the power n/(n-1), and that works out perfectly, right? So it's a completely elementary example, but it shows sort of an application.
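[Editor's note: the last step spelled out, since it goes by quickly. By the Loomis–Whitney bound above and |∂u/∂x_j| ≤ |∇u| for every j,

\[
\int_{\mathbb{R}^n} |u|^{\frac{n}{n-1}}\, dx \;\le\; \prod_{j=1}^{n} \left( \int_{\mathbb{R}^n} \left|\frac{\partial u}{\partial x_j}\right| dx \right)^{\frac{1}{n-1}} \;\le\; \left( \int_{\mathbb{R}^n} |\nabla u|\, dx \right)^{\frac{n}{n-1}},
\]

and taking the power (n-1)/n gives \|u\|_{L^{n/(n-1)}} \le \|\nabla u\|_{L^1}.]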
[Audience question:] Is this the statement for a convex set? So that's a different theorem. You see, Brascamp and Lieb had a whole bunch of papers in the '70s, right? And there is this famous paper where, when you take a log-concave function and you take a marginal, you get again a log-concave function. That result is very specific to log-concave functions. And then what they do is, as always, you look at the Dirichlet eigenfunction. How do you think of the Dirichlet eigenfunction? You think of it as the limit of the heat kernel, right? So what is the heat kernel? By the Trotter formula, it's a huge integral. But since the potential is convex and the domain is convex, you can see that the integrand is a log-concave function. And now you take marginals: it's still a log-concave function. So the ground state must be log-concave, right? But this here is different; this is very different. No, no, you see, this has nothing to do with log-concavity; the f's are just arbitrary functions, OK? That's different. I mean, it's interesting.

OK, well, let me tell you: log-concavity plays, of course, a role in many things, right? And so there's a little story. Namely, remember that this has something to do with Jose's talk about rearrangements. When you look at the general rearrangement inequality, you take a whole product of functions, and you rearrange them, and things somehow go up. The way Brascamp and Lieb prove it is that they reduce it to characteristic functions, and then you have these characteristic functions, right? And you start moving them together. They have a flow, a discrete flow. And whenever two of them join, you define that as a new characteristic function, right? And it turns out that this flow always runs over log-concave functions. And then you can use again some kind of Brunn–Minkowski-type inequality, and that gives you the rearrangement inequality, right? It's extremely clever. It turns out, however, that it was already discovered, with precisely the same proof, around 1950 by Rogers, in number theory. So this is how these things go; it's very hard in this business, right? I mean, there has been a huge amount of work on this kind of thing.

All right, so now let's go on. I would like to give you a proof of Brascamp–Lieb, which is actually not very difficult. And it's a proof using, as you might figure out by now, heat kernels. This was an idea which we developed jointly with Eric Carlen and Elliott Lieb for the special case where these matrices are rank one, just projections onto single vectors. And that was then picked up by Bennett, Carbery, Christ, and Tao, who did the general case. And what I would like to show you is a very simple proof of that inequality using these ideas, all right?

So let's do that. How does it go? The idea is the following. You start out with your functions f_i, which, we have to imagine, live on these Hilbert spaces H_i. And now what we do is we take f_i(B_i v). If you allow me, let me drop the index i for the moment: I'm just taking a generic matrix B from these guys here, and a generic f, because I don't want to constantly write the indices. And now (let me get the variables straight, otherwise I get confused) what you do is you convolve this with the heat kernel: here I put the w, and here I put e^{-|v-w|^2/4t}; you integrate over all of R^m; and then you put your favorite power in front, which is 1/(4πt)^{m/2}, OK?

All right, so now what I'm claiming is that the result is a function which looks like this: it is a function of Bv and t, call it F(Bv, t). (Let me get some more chalk.) You see, at the face of it, it's not totally obvious that this is the case, right? Because, I mean, what does this heat kernel do? It just plays around with this function. It's not clear at all that this is true in general; it has to do with these assumptions here which I made, OK? All right, so let me convince you that this integral really is of that form, OK? How do we do that? Well, since you know that B B^T is the identity on your little Hilbert space, you can say that P := B^T B is an orthogonal projection. I mean, obviously: when you square it, you get B^T B B^T B; the B B^T in the middle gives you the identity on your Hilbert space, so the square is just B^T B again. And it's obviously an orthogonal projection because it's self-adjoint, OK? Good. The next observation (let me erase this now) is that when you multiply P by B, again on account of this relation, you just get B back: BP = B B^T B = B. So what you figure out is that B is really a map from the range of P onto H_i, and it's an isometry, OK? That's what you need, all right? Good.
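[Editor's note: the linear algebra in one display, reconstructing the board. With P := B^T B,

\[
P^2 = B^T (B B^T) B = B^T B = P, \qquad P^T = P, \qquad BP = (B B^T) B = B,
\]

and for u in the range of P,

\[
|Bu|^2 = \langle u, B^T B\, u \rangle = \langle u, P u \rangle = |u|^2,
\]

so B maps the range of P isometrically onto H (it is onto because B B^T = Id makes B surjective).]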
So this is elementary, right? And this here, B B^T = identity, is the crucial assumption. Let me erase this now, and just remember what the P is, OK? All right, so then what do you do? You start splitting variables, right? So you take this whole integral, with the 1/(4πt)^{m/2} in front, and you write the Gaussian as e^{-|Pv - u|^2/4t} times e^{-|P^⊥v - u'|^2/4t}, and then the f(Bw); I will explain right away what I'm doing. So where do these integrals run? The u runs over the range of P (I hope I'm doing this right, OK?), and the u' runs over the range of P^⊥. So these are two orthogonal pieces: I have split the whole space R^m into the range of P and its orthogonal complement, the range of P^⊥, writing w = u + u', OK? Good. And notice that f(Bw) only sees u, because BP = B means B(I - P) = 0, so Bw = Bu.

And now you see what happens. When you look at the integral over u', this just integrates out the heat kernel. You don't see it, you agree? It just integrates out; you don't see it, but you gain a power on this 4πt. And what is the power? Well, at the end of the day, the remaining integral runs over a d-dimensional space. So I can just erase: the m/2 here becomes d/2, and this integral du' is also gone. Agreed? And I can just forget about this piece, OK?

Now, what is this? The u is already in the range of P, so I just have to observe that the |Pv - u|, when I square it, I can write this way: it is nothing but |Bv - Bu|^2, because u is already in the range of P, and B is an isometry there. Remember what the P is: the P is B^T B, right? So that's what it is. OK, so let me do this again; wipe some more. What you're left with is just |Bv - Bu|^2. And at the end of the day, B is an isometry, so when I change variables I do not pick up a Jacobian: I can simply call Bu my new variable w. And this time I don't integrate over the range of P, but just over H. And you see, that's precisely what I've been claiming, OK?

Moreover, what this formula also shows is that when I do this, the right-hand side of the inequality doesn't change. You see, that's the beauty. Why? Because what you have done is you have applied the heat kernel on this Hilbert space H to the function f, right? And when you take the L^1 norm over this Hilbert space, the heat kernel just disappears: it preserves the integral, OK? Is this clear? I hope this is not too difficult, right? It's a simple linear algebra thing; it's no big deal. OK, so this is justified, right? And absolutely crucial for this justification was this assumption here. Good, so now let's go on.
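[Editor's note: the identity just proved, in symbols; my reconstruction, with Δ the Laplacian on R^m, Δ_H the Laplacian on H, and d = dim H:

\[
\big(e^{t\Delta}(f \circ B)\big)(v) = \frac{1}{(4\pi t)^{d/2}} \int_{H} e^{-|Bv - w|^2/4t} f(w)\, dw = \big(e^{t\Delta_H} f\big)(Bv) =: F(Bv, t).
\]

And since the heat semigroup preserves integrals, \int_H e^{t\Delta_H} f = \int_H f, the right-hand side of the inequality is constant along the flow.]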
So now we are in the same situation as we were in the last hour. What do we do? The right-hand side is fixed under this heat flow. The left-hand side, we have to show, is increasing. So let's try to do that; let's see what the formulas tell us. As always, I write F_i as e^{φ_i}. So now I need the indices again, right? And remember, these are now functions which depend on t. And what you learn is that the partial derivative ∂φ_i/∂t is, of course, the Laplacian of φ_i (and I have to be really careful: this is the Laplacian with respect to the variable v, right? Because that's the way I move my functions; this was my definition, remember?) plus the square of the gradient of φ_i with respect to v.

So once you accept this, then, of course, you can differentiate in t the left-hand side, which is this gadget here, which is nothing but the integral over R^m of e^{Σ c_i φ_i(B_i v, t)} dv. That's what you do. So when you do that differentiation, what do you get? It's completely elementary: the integral over R^m of the sum of the c_i times (the Laplacian in v of φ_i plus the square of the gradient in v of φ_i), times the exponential. OK? No miracles of any kind there. So now, of course, you see, at this moment you don't really know what to do with the Laplacian term, right? So it's always a good idea to integrate by parts, right? When you integrate this by parts, we get that this derivative of the integral of e^{Σ c_i φ_i} is nothing but the integral of Σ c_i |grad_v φ_i|^2 (that I keep), and then I get minus |Σ c_i grad_v φ_i|^2, times e^{Σ c_i φ_i}. Why is that? Because, you see, when you integrate by parts in v, you hit this guy, and what you pull down from the exponential is the sum of the c_i grad_v φ_i. But from here you also get the sum of c_i grad_v φ_i. So you just get this sum, squared. Is this right? [Audience: the sum is missing.] Yeah, I forgot the sum, thank you. That's absolutely crucial, of course, right? Thank you. OK? So far so good, right? This is no big deal.

OK. So now, what would we like to see? We would like to see that this gadget, this whole integral, is non-negative. And at this moment, you agree with me, there's certainly no chance that this can come from the measure; it has to come from this term in the parentheses. [Audience remark.] Yeah, yeah, absolutely, thank you; I mean, without you I would be completely lost in this business here. You see, that's the important point: here you have the c_i, and here, in some sense, the c_i squared; I mean, it's c_i c_j, right, when you multiply it out. OK? So far, I think, so good.

Good. So now, as a next step, let's concentrate on these parentheses. So what is it? When you think about what this gradient is, remember what you do: you take this function φ_i(B_i v, t) and you take the gradient in v. When you work it out, what you get is B_i^T applied to the gradient of φ_i in its own variable, evaluated at (B_i v, t). So what is this gradient? It is a vector in your space H_i; that's what it is. Think of H_i as a linear subspace: the B_i pushes you into this linear subspace, and when you work out the derivative, you get precisely the gradient within the subspace, which is a vector in H_i, and you apply B_i^T to it, OK? So now, what do we have to show? (I think I don't need this computation anymore here.) Let me call this vector v_i, because there's nothing I can do about it; it's just a general vector in H_i. And the inequality which we have to show is that the sum of the c_i |v_i|^2 is greater than or equal to (let me see that I didn't screw this up here; this is always my problem) a quantity which I call |y|^2. And what is y? y is the sum over i of c_i B_i^T v_i. That follows simply from this differentiation here, using this formula for the gradient.
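[Editor's note: the computation up to this point, in two displays; my reconstruction. With F_i = e^{φ_i} and ∂φ_i/∂t = Δ_v φ_i + |∇_v φ_i|^2,

\[
\frac{d}{dt} \int_{\mathbb{R}^m} e^{\sum_i c_i \varphi_i}\, dv
= \int_{\mathbb{R}^m} \Big( \sum_i c_i |\nabla_v \varphi_i|^2 - \Big| \sum_i c_i \nabla_v \varphi_i \Big|^2 \Big)\, e^{\sum_i c_i \varphi_i}\, dv,
\]

and since ∇_v φ_i(B_i v, t) = B_i^T v_i with v_i := (∇φ_i)(B_i v, t) a vector in H_i, the integrand is pointwise non-negative once one shows, for arbitrary vectors v_i in H_i,

\[
\sum_i c_i |v_i|^2 \;\ge\; \Big| \sum_i c_i B_i^T v_i \Big|^2 .
\]]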
OK, good. So you see, so far everything was smooth sailing, except that we have never really used this condition here, that Σ c_i B_i^T B_i is the identity. And that's what we have to use now. So let's compute |y|^2. The way I'm going to write it is: it's the sum over i of c_i B_i^T v_i, and you take the inner product with y. And now what you do is you move the B_i^T over: that's the same as the sum of c_i times v_i dotted into B_i y. Now remember, these constants c_i are positive. So what we're going to do is just a simple Schwarz inequality: this is less than or equal to the sum of the c_i |v_i|^2, to the power one half, times the sum of the c_i |B_i y|^2, to the power one half. Now, two things. What can you say about the B_i^T v_i? Well, remember, B_i B_i^T is the identity. So I can forget about this B_i^T: after all, |B_i^T v_i|^2 is the inner product of v_i with B_i B_i^T v_i, and B_i B_i^T is the identity, so it's just |v_i|^2. So therefore, let me write again the inequality which I really would like to prove; we're almost there. Look here: we have that |y|^2 is less than or equal to (Σ c_i |v_i|^2)^{1/2} times the second factor. That's good. And what is this second expression? It is nothing but the inner product of y with Σ c_i B_i^T B_i applied to y. But this sum is the identity. So this is just |y|^2. So therefore we have the estimate |y|^2 ≤ (Σ c_i |v_i|^2)^{1/2} |y|. You cancel one |y|, you square it, and there you are. Completely elementary. Once you have this idea of heat flow, and you understand what these things do for you, it's completely smooth sailing. So this is the proof of the Brascamp–Lieb inequality.

Now, notice, by the way: when do you have cases of equality? Well, no, I'm not done yet. Wait, wait, wait. I have only shown that the derivative of this gadget is non-negative, that the left-hand side is increasing under the flow. That's what I've shown. So what we still have to do is figure out the limit. We have to compute that.
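[Editor's note: the chain in one display; my reconstruction:

\[
|y|^2 = \sum_i c_i \langle B_i^T v_i, y \rangle = \sum_i c_i \langle v_i, B_i y \rangle
\le \Big( \sum_i c_i |v_i|^2 \Big)^{1/2} \Big( \sum_i c_i |B_i y|^2 \Big)^{1/2}
= \Big( \sum_i c_i |v_i|^2 \Big)^{1/2} |y|,
\]

using in the last step

\[
\sum_i c_i |B_i y|^2 = \Big\langle y, \sum_i c_i B_i^T B_i\, y \Big\rangle = |y|^2,
\]

which is the only place where the hypothesis Σ_i c_i B_i^T B_i = Id enters.]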
So we go back again; let's plug things in. So remember this formula: this is really our F(Bv, t), right? So let's go and compute: what is the product over j of F_j(B_j v, t)^{c_j}? Well, that's equal to 1 over (4πt) to the power... sorry, I shouldn't say that; let me say this better: to the power Σ c_j d_j over 2. (Can you still read this up there?) So where does this come from? You see, each factor has a 1/(4πt)^{d_j/2}, and then I raise it to the power c_j, and then I take the product, which means I take the sum in the exponent. That's number one. And then I have a product of such integrals, which I'm going to write slightly differently: I pull out e to the minus the sum over j of c_j |B_j v|^2 over 4t. In other words, I have squared this out, and the terms I'm pulling out are precisely these guys, OK? And then I am left with a product of integrals. And how do these integrals look? It's e to the minus |w_j|^2 over 4t, plus 2 B_j v · w_j over 4t (that's all up here in the exponent), integrated over the H_j's, against the f_j(w_j); the index of the f and of the w agree, that's my convention. And I forgot something else: I have to raise these to the powers c_j as well. By the way, I have to apologize: the letters n, u, w, and v all look the same in my handwriting. OK? Good?

So now you let t go to infinity. What happens? This last factor, what does it do? The exponential tends to 1, so each of these integrals converges just to the integral of f_j. So what do you get as t goes off to infinity? Let's look at this prefactor. What is the sum of the c_j d_j? Go to this relation, Σ c_j B_j^T B_j = identity, and take the trace. If you take the trace, what do you get? You get the c_j times the trace of B_j^T B_j. And what is that trace? It equals the trace of B_j B_j^T, which is the dimension of the Hilbert space H_j, because B_j B_j^T is the identity there. So the c_j d_j sum up to m: we have the relation Σ c_j d_j = m. So therefore the asymptotics look like this: you get a 1/(4πt)^{m/2}. And now what is the middle factor? Well, this, of course, using this same relation, is precisely e^{-|v|^2/4t}: the sum of the c_j |B_j v|^2 is the length of v, squared. That's what it is. And finally, these integrals behave asymptotically as the product over the integrals of the f_j over your Hilbert spaces H_j, to the powers c_j. And finally, what you have to do in this inequality: you have to integrate over the v's. When you integrate (4πt)^{-m/2} e^{-|v|^2/4t} over the v's, you just get 1. It's the heat kernel, right? A normalized Gaussian. That's it. So the left-hand side increases monotonically towards this right-hand side, because, you see, this is precisely the right-hand side once the Gaussian integrates to 1. You're done.

And in fact, you can also figure out what the optimizers are: the optimizers, in general, have to be Gaussians. Although one has to be a little bit careful: sometimes it can be something else. But Gaussians will definitely do. Good? So I think this is not a big deal, but it's a pretty good result, right? As I say, it's a correlation inequality: you have a very highly correlated thing here, and you can still estimate it in a reasonably sharp way. Good. So far, so good. This is what I wanted to tell you about the Brascamp–Lieb inequality. There are some other things, which I will tell you about later on.

I would like now to get back to the Kac model. And so, what's the news so far? Well, it hasn't been too good, right? As I explained to you: if you want to study the approach to equilibrium and you do it with the gap, the gap is not a really good notion, because the L^2 norms of probability distributions are in general too large. Then you can do it with the entropy. And with the entropy, so far, we only got a rate which actually vanishes as n goes to infinity. So, in other words, you need astronomical times to see something like equilibration.

Now, you see, let me just explain a simple physical idea. If you take a bottle of hot water and you dump it into the Adriatic Sea, what will happen to the bottle? The bottle will start cooling down, and the Adriatic Sea won't see anything of that, do you agree? It's so huge; it doesn't really warm up because of a bottle of hot water. So, in other words, in this system the Adriatic Sea acts like a thermostat. And in fact, for the weather patterns, that's an important point, right? It really acts like a thermostat, in the sense that it regularizes the temperature in the environment around the sea. It's a very important point. OK, so therefore, we would like to understand, in the context of the Kac model, the interaction of a system with a thermostat. So how do we think about this?
Well, the way you can think about it is that you have this Kac system of M particles, right? It's a finite system. And what this system does is it interacts with the reservoir. And how does it interact? It scatters with particles in the reservoir. But remember, the reservoir doesn't change; it just stays what it is. So what you do is: you pick out a particle, with a certain probability distribution which has to do with the thermal state; you scatter it with one of the particles in your system; and then you throw away this particle which you pulled out of the Adriatic Sea, so to speak, right? So what you do is you write down a master equation which looks like this: df/dt equals M(Q_M minus identity)f. Well, that's the old Kac story, right, which we already know. But now we add on another term: plus μ (this is just a coupling constant) times the sum over j from 1 to M of (P_j minus identity)f. And I call this whole thing L_∞ f. And what is P_j? P_j looks, at the face of it, quite complicated, but it really isn't once you think about it. Let me write down what it is: you average with ρ(θ) dθ, and you integrate with the Gaussian weight (β/2π)^{1/2} e^{-βw^2/2} dw, the function f evaluated at v_j^*. OK, let me explain what this is; it looks a little bit complicated. So let me write down what the star is: v_j^* is equal to (v_1, ..., and here, at the j-th place, you put v_j cos θ + w sin θ, and then you go all the way to v_M). That's the v_j^*. And the w^* is nothing but minus v_j sin θ plus w cos θ.

So let's think for a moment about what this means. You see what this is telling you: this factor here is the Gaussian, right? And what is β? β is 1/(kT); k is the Boltzmann constant, and T is the temperature. So this is how the temperature actually appears in statistical mechanics; this describes a system in equilibrium. And what you do is: you take a particle out of your thermostat, and this particle is distributed according to this Gaussian; that's this probability. And then what you do is, you take this particle and you make a collision of this particle with the j-th particle in your system, all right? Then you take the average over the scattering angle, ρ(θ) dθ. And then you integrate dw, because these velocities of the thermostat particles are distributed according to the Gaussian. One can make an absolutely clean explanation of this in terms of probability, but I'm not going to do it, because I have a little bit limited time. So that's the system, OK.

And now you ask yourself: well, this should have good properties, right? Because when you have the small system interacting with the thermostat, you should really see how it goes towards equilibrium. And it should do so fast, OK? So here are the facts. First of all, a little observation: the energy is not conserved anymore, of course. In fact, when you calculate the energy (call it K, the kinetic energy: one half times the integral of the sum of the v_j^2 against f, the solution of this equation), you can actually figure out that dK/dt equals minus μ/2 times (K minus M/(2β)). And this is what people know as Newton's law of cooling. And you can also easily see that γ_M, which is (β/2π)^{M/2} e^{-(β/2) Σ v_j^2}, is the unique equilibrium state, OK? So your system starts out in some state, right? It interacts with the thermostat and really drops down towards this equilibrium state.
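[Editor's note: the thermostated equation, as I read the board; the prefactor of the Kac term is my best reading of the audio:

\[
\frac{\partial f}{\partial t} = M (Q_M - I) f + \mu \sum_{j=1}^{M} (P_j - I) f =: L_\infty f,
\]

\[
(P_j f)(v) = \int_{-\pi}^{\pi} \rho(\theta)\, d\theta \int_{\mathbb{R}} \sqrt{\tfrac{\beta}{2\pi}}\, e^{-\beta w^2/2}\, f(v_1, \dots, v_j \cos\theta + w \sin\theta, \dots, v_M)\, dw;
\]

particle j collides with a fresh Gaussian particle w drawn from the thermostat, and the outgoing reservoir velocity w^* = -v_j sin θ + w cos θ is discarded.]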
You can easily check that when you stick γ_M into the equation. Here, in the Kac term, you get 0. Why do you get 0? Because this function is rotation invariant, so the Kac operator doesn't see it; and you subtract the identity, don't forget, so you get 0. And what does the other term do? Well, when you stick a Gaussian in here, you actually reproduce this very same Gaussian. So this is also 0. But it's more, right? It's really an equilibrium state.

OK, I'll just tell you the results. Now you can go and write down the entropy with respect to this equilibrium state. So what is it? S(f) is the integral of f log(f/γ_M), OK? Reasonable, huh? It's the relative entropy, the entropy with respect to the equilibrium state. You would expect that these guys behave nicely.

Theorem (Bonetto, Vaidyanathan, and myself). By the way, Federico Bonetto is really one of the experts in non-equilibrium statistical mechanics; he's a student of Giovanni Gallavotti, so he has the right pedigree. When you look at S(f(t)), in other words, you solve this equation and you plug the solution in, right? This is bounded above by e^{-μ_ρ t} S(f_0), where f_0 is the initial condition. And what is μ_ρ? It is μ times the integral from -π to π of ρ(θ) sin^2 θ dθ. And Ranjini Vaidyanathan, who was a student of Bonetto's, actually showed that this rate is sharp: you can write down a state which behaves exactly that way; it's an explicit function. For ρ you can take 1/(2π); then you get just one half, because you average, right? But you can also do it in general. We didn't publish the general case, but it goes through exactly the same way.
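[Editor's note: the statement in symbols, as I reconstruct it:

\[
S(f) := \int f \log \frac{f}{\gamma_M}, \qquad
S(f(t)) \le e^{-\mu_\rho t}\, S(f_0), \qquad
\mu_\rho := \mu \int_{-\pi}^{\pi} \rho(\theta) \sin^2\theta\, d\theta,
\]

so for the uniform average ρ = 1/(2π) the rate is μ_ρ = μ/2. Note that this rate does not degenerate as M grows, in contrast to the entropy production rate for the isolated Kac model mentioned above.]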
Good. So this is good news, you agree? You have this thermostat; you have the system which interacts with the thermostat; and it does precisely what you expect: small systems coupled to a large system really go towards equilibrium extremely fast, nicely, with an exponential rate. And in some sense, that's what you would like to have in statistical mechanics, right? You're not interested in packing all these molecules into a corner and waiting until the whole thing equilibrates. What you do is you make some local disturbance, and you ask yourself: how does this disturbance go back to equilibrium, right? That's what you really would like to understand.

OK. So then let's see what we can do. You see, what I've assumed tacitly... of course, not tacitly, it's obvious, right? I've assumed that the thermostat is infinite. So you might ask the question: what happens if I assume that the thermostat is finite but large? So, in other words, it's not a thermostat, it's a reservoir: you just take a large but finite system and you couple it to the small system. What can you say, OK? I mean, what you would think is that this new interaction between the finite system and this reservoir, when the reservoir is large, should very much resemble this system here, right? So we have to write down some interesting interaction. So let's do that. We have a system of M particles and a reservoir of N particles, and of course we're going to assume that N is certainly bigger than M, right? And now let me write down some notation: the velocities of the particles in the system are called v, and the velocities of the particles in the reservoir are called w. And I count them this way (it has some notational advantage): the reservoir velocities go from w_{M+1} all the way to w_{M+N}. OK.

And now what you do is you write down a master equation: df/dt = Lf. Notice it's L and not L_∞; the infinity points towards the infinite size of the thermostat, right? And what is L? The L has a few terms. The first term is λ_S/(M-1) times the sum over 1 ≤ i < j ≤ M of (R_ij minus identity). What does this describe? This is precisely your old Kac model for the system, right? The particles in the system are allowed to collide; that's what they do with this term. And then you have, of course, the reservoir. The reservoir also has particles, and they're also allowed to collide with each other. And the way I'm going to write this is λ_R/(N-1) times the sum over M+1 ≤ i < j ≤ M+N of (R_ij minus identity). And notice, by the way, this is precisely the reason why I chose this notation: I don't have to distinguish the R_ij's, right? And remember, what is R_ij? It's just the old story: you take the (i,j) plane, you take your function, and you average over rotations in that plane. That's all you do. And now you need an interaction between the reservoir and the particles of the system. And the way you write this down is: μ divided by N (and I will have to explain this), times a double sum, where i runs from 1 to M and j runs from M+1 to M+N, of (R_ij minus identity). That's what you have. Good so far.

Now, OK, so let me start over there; I keep this. So now what do you do? You see, of course, you're not going to choose an arbitrary initial condition; that would be silly. [From the floor: five minutes.] Five minutes, right? Yeah, I'll take this into account, right? OK, perfect, no problem. So, what was I... I forgot what I wanted to say. Exactly: initial conditions, right? You see, you're not going to choose arbitrary initial conditions. What you would like to say is that initially the reservoir is in equilibrium, right? So you choose initial conditions which look like this: f_0(v, w) equals some function f(v) times e^{-π Σ_j w_j^2}. And by the way, allow me: this is a little bit silly, but I always use β = 2π. The reason is this: you can choose the temperature to be anything you like, it's just a constant, you measure it in some units, and I measure it so that β = 2π. I hope you don't mind. Why? Because then this Gaussian is already normalized, and I don't have to put in these prefactors, which annoy me because I will always screw them up. So we keep it that way. Good.

So now, you see, when you take this initial condition and you evolve it under this time evolution, things get kind of interesting. Why? Because when you wait, the reservoir will not stay in equilibrium. Why not? Because the reservoir collides with the particles in the system, and I didn't assume anything about the system part of the initial condition except that it's a density. So the system is not in equilibrium, and the reservoir doesn't stay in equilibrium either. But still, somehow, you would expect that the evolution of that guy is close to the evolution, somehow, of that one, the thermostated one. So let me explain why you can expect that. Look at the collision rates. Why did we choose μ divided by N? That means that when you take a particle out of your system, a particle out of 1 to M, the rate of collision is μ, because it can collide with the N particles of the reservoir. It's independent of N. When I now take, however, a particle out of the reservoir, what's its rate of collision? Well, a particle of the reservoir can collide with the M particles in the system, and that gives you μ times M divided by N.
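[Editor's note: the finite-reservoir generator in one display; my reconstruction of the board:

\[
L = \frac{\lambda_S}{M-1} \sum_{1 \le i < j \le M} (R_{ij} - I)
+ \frac{\lambda_R}{N-1} \sum_{M+1 \le i < j \le M+N} (R_{ij} - I)
+ \frac{\mu}{N} \sum_{i=1}^{M} \sum_{j=M+1}^{M+N} (R_{ij} - I),
\]

where R_{ij} averages the function over rotations in the (i, j) plane. The rate bookkeeping: a system particle has N reservoir partners at rate μ/N each, so it collides with the reservoir at total rate μ, independent of N; a reservoir particle has M system partners, so its total rate is μM/N.]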
So the rate is μM/N; let me write this down here: M divided by N. Now you see what this is telling you: when N is very, very large, the rate of collision which a particle in the reservoir feels is very, very small; it hardly collides. So therefore you would expect that this should work out somehow, that these two evolutions are very, very close. And in one minute, let me just write down a theorem. Namely, there's a certain distance d, and the theorem compares (e^{t L_∞} f) ⊗ γ_N with e^{t L} f_0. Here γ_N, remember, is the Gaussian: I take the time evolution of the thermostated system, with the initial condition f for the system, tensored with the reservoir in equilibrium; you see, this is how I can compare this function with the function over there. And then I take the solution e^{t L} f_0, with that f_0 from before. And the distance between the two is less than or equal to C(f), a constant of order 1 depending on the initial condition, times M divided by N. And I should say: uniformly in time. And what is d? d(f, g) is the sup over all k not equal to 0 of |f̂(k) - ĝ(k)| divided by |k|^2, where the hat denotes the Fourier transform. This is called the Gabetta–Toscani–Wennberg metric, and it's very popular in this field, in the Boltzmann equation. I mean, it's a little bit fast, what I'm telling you here; this metric has wonderful properties. And what we can show is that this time evolution with the infinite reservoir, the thermostat, compared with the finite reservoir, is controlled by a term which depends, of course, on the initial condition, uniformly in time, times M divided by N. That's a hard one; that took us a long time. So that proves that this intuition which I told you about is really correct.

Now, just one remark. If you believe that, wouldn't you think that the entropy of such a system should also decay nicely? And that caused some problems for us. Next time, I will tell you a little bit about that; it's actually not so bad. All right? Good. Thank you. And sorry, Michael, for having to push.