So let me recall a little bit of what we did last time; here is a short summary. We are talking about this Markov evolution equation: f_0 is a probability density on the sphere S^{n-1}(√n), the sphere of radius √n, and you evolve f_0 with the operator whose collision part Q is given by an average over scatterings. The scatterings are done with the operators R_{i,j,θ}, where you replace the i-th and j-th variables by the post-collisional velocities, and you average with respect to a probability measure ρ(θ) dθ over the angles. So it's an extremely simple model.

Last time we talked about the entropy. We showed that there is an entropy structure for this problem, in the sense Laurent Desvillettes explained. We introduced the dissipation

D(f) = n ∫ [(I − Q) f] log f,

with the n in front. And unless I say otherwise, an integral is always over S^{n-1}(√n) with respect to dσ^n, the uniform normalized measure. Where does this dissipation come from? You take the entropy, S(f) = ∫ f log f, and differentiate it along the evolution at t = 0; that gives you precisely −D(f_0) — I had forgotten the minus sign. That's what we showed last time.

Then we introduced what is called the entropy production,

Γ_n = inf { D(f) / S(f) : f a density, ∫ f dσ^n = 1 },

the infimum taken over all densities. And then we talked about Villani's theorem. What you would like, in some sense, is to compute this number, and then you hope for the best: you hope this number is not terrible, in the sense that it goes to 0 as n goes to infinity. And the story is really bleak. First we have Villani's theorem, which says that Γ_n ≥ 2/(n−1) when ρ = 1/(2π). And we also have Einav's theorem: for any ε > 0 there exists C_ε so that Γ_n ≤ C_ε / n^{1−ε}. So Villani conjectured that this is more or less the right behavior in n, and Einav's result more or less proves it.

Anyway, what I wanted to show you is how to prove Villani's theorem, because it raises a bunch of interesting mathematics. Last time we started out with an induction argument on this problem in terms of n. The induction worked like this: we were able to estimate the dissipation by Γ_{n−1} — and I should really put the n on the Γ; this is better, I think, because it really depends on n, that is of course the point — namely

D(f) ≥ Γ_{n−1} [ n/(n−1) · S(f) − 1/(n−1) · Σ_k ∫ P_k f log P_k f ].

What is P_k f? There are two ways of thinking about this. On the one hand, you can just integrate f over the normalized measure on the sub-sphere S^{n−2}(√(n − v_k²)). In other words, you fix the variable v_k — that gives you a sub-sphere when you cut your sphere at the level v_k — and you integrate f over the normalized surface measure of that sub-sphere. That's a projection.
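To make the recap concrete, here is a minimal numerical sketch of one step of the underlying Kac walk, the jump process whose master equation this is. The uniform choice ρ = 1/(2π) is the one from the lecture; the rotation sign convention and the function names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def kac_step(v, rng):
    """One collision of the Kac walk on S^{n-1}(sqrt(n)).

    Pick a random pair (i, j) and an angle theta ~ rho = 1/(2*pi)
    (uniform), then rotate in the (v_i, v_j) plane.  The total energy
    sum(v**2) = n is exactly preserved by the rotation."""
    n = len(v)
    i, j = rng.choice(n, size=2, replace=False)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    vi, vj = v[i], v[j]
    v[i] = vi * np.cos(theta) + vj * np.sin(theta)
    v[j] = -vi * np.sin(theta) + vj * np.cos(theta)
    return v

n = 10
v = rng.normal(size=n)
v *= np.sqrt(n) / np.linalg.norm(v)     # put the state on S^{n-1}(sqrt(n))
for _ in range(1000):
    v = kac_step(v, rng)
print(np.sum(v**2))                     # stays equal to n = 10 up to rounding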
Another way of thinking about P_k f: I take my function and average it over all rotations which fix the v_k axis. That's the same thing. And a third way of saying what P_k f is: in a certain sense it is a marginal. You take your density and take the marginal with respect to all variables except v_k. But it's a little bit complicated, because these variables are dependent.

OK, so now the theorem. What is our goal? Our goal is to find an estimate on the ratio D(f)/S(f), so we have to estimate the right-hand side of the induction inequality in terms of the entropy. Here is the theorem — a result proved by Carlen.

Theorem 1. For f ∈ L¹(S^{n−1}(√n)) with ∫ f dσ^n = 1,

Σ_{k=1}^n ∫ P_k f log P_k f ≤ 2 ∫ f log f.

Let me make a few comments about this result. It is very easy to see that the constant 2 is actually sharp. The way to think about it is by pictures — and of course it's very hard to draw pictures in n dimensions when n is 10^26, but morally it works out. Here is the north pole, if you like; that's the v_1 axis. You take a very high characteristic function over a tiny little set there, normalized to integrate to 1. This function has a certain height h over a patch of area a ≈ ε^{n−1}, where ε is roughly the linear size of the patch — remember, this is S^{n−1}(√n), and since ε is really, really tiny, the sphere is basically flat there. The requirement ∫ f = 1 then forces h ≈ ε^{−(n−1)}.

And you notice this function is rotationally symmetric about the v_1 axis: when I apply P_1 to it, it just gives me back the same function. When I rotate about the one-axis it stays invariant, so in average it stays invariant. Now what happens when you take P_k f with k ≠ 1? You have an axis which now goes out a different way, and when you average the cap over rotations about it, the result always stays a good portion away from the north pole and a good portion away from the south pole of that axis — and this has nothing to do with the dimension; that's just the way it is. So P_k f is, essentially, when ε is very very small, again a characteristic function, but now of a band on S^{n−1}. A band. So P_k f is again a function of some height h over some area a — and what is a this time? It is of order ε, because this is a band. And since the whole integral must still be 1, the height must be of order ε^{−1}. So that's what this P_k f is.
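Before the entropy bookkeeping, here is a numerical version of the rotation-averaging description of P_k f — a Monte Carlo sketch with hypothetical helper names: fixing v_k and resampling the remaining coordinates uniformly on the sub-sphere of radius √(n − v_k²) is the same as averaging over rotations fixing the v_k axis.

```python
import numpy as np

rng = np.random.default_rng(1)

def P_k(f, v, k, samples=20000, rng=rng):
    """Monte Carlo version of (P_k f)(v): average f over all rotations
    fixing the v_k axis, i.e. resample the other coordinates uniformly
    on the sub-sphere of radius sqrt(n - v_k^2)."""
    n = len(v)
    r = np.sqrt(n - v[k] ** 2)
    u = rng.normal(size=(samples, n - 1))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions
    w = np.insert(r * u, k, v[k], axis=1)           # put v_k back in slot k
    return np.mean([f(x) for x in w])

n = 5
v = rng.normal(size=n)
v *= np.sqrt(n) / np.linalg.norm(v)                 # a point on S^{n-1}(sqrt(n))
f = lambda x: x[0] ** 2
print(P_k(f, v, k=0), v[0] ** 2)                    # equal: f is already invariant about axis 0
print(P_k(f, v, k=1), (n - v[1] ** 2) / (n - 1))    # averaging spreads the energy evenly
```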
So now let's go on and check out these terms. The integral ∫ P_1 f log P_1 f is of course just ∫ f log f — and you see, that already eats up half of this 2. Now you have to ask what happens with the other P_k f's; what is their entropy? And by the way, what is the entropy of f itself? We should actually compute it. Well, that's easy. Why? Because f log f is 0 where f = 0, and on the cap it is log h times ∫ f, which is 1. So

∫ f log f = log ε^{−(n−1)} = −(n−1) log ε.

I think you would agree with that. And what is ∫ P_k f log P_k f for k ≠ 1? Same reasoning — it is more or less a characteristic function, with tiny little errors, maybe. The height is ε^{−1}, so you get −log ε times ∫ P_k f, which is 1. How many such terms do you have? n − 1. So when you add this up — and remember log ε gets very, very negative, because ε is small — the sum behaves roughly like

−(n−1) log ε + (n−1)(−log ε) = −2(n−1) log ε,

which is twice ∫ f log f. You see, the simple reason you get a 2 here is that the P_1 term already is the full ∫ f log f; it eats up half of the constant by itself.

This is very different from what happens on R^n, and let me make that point here; it is a very elementary point. Suppose you have a density f on R^n, and define the marginal M_i f — what I'm going to explain now is totally trivial — by

(M_i f)(v_i) = ∫ f(v_1, …, v_n) dv_1 ⋯ \hat{dv_i} ⋯ dv_n.

What does the hat mean? It means that you do not integrate over the i-th variable.

[Question from the audience.] Yeah — this would have to do with the proof of that inequality, but physically, what does it mean? We have to ask ourselves: what are the bad configurations for the Kac model? And the bad configurations are precisely these, where you have a substantial part of the mass concentrated at the poles — and then relaxation takes a very long time. Imagine what this means: suppose you stick all the mass at v_1. This means that only one particle has all the energy and all the others have none. Yes, exactly — that's precisely the intuition: you concentrate too much energy into a very small subsystem, the rest of the system has no energy, and then it takes forever for the collisions to redistribute it. And the question of what kind of reasonable states avoid this problem is exactly what I am going to talk about. Very good — thank you for making this remark.

OK, so let me now finish this elementary example. I claim — and this is totally elementary to show — that

Σ_{i=1}^n ∫ M_i f log M_i f ≤ ∫ f log f.
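Before the proof, here is a quick sanity check of this claim on Gaussians, where differential entropies are explicit: for f = N(0, C) the inequality reduces exactly to Hadamard's inequality det C ≤ Π_i C_ii. A sketch under my own choices (random covariance, n = 6):

```python
import numpy as np

rng = np.random.default_rng(2)

# For f = N(0, C) on R^n:  integral f log f = -(1/2) log((2*pi*e)^n det C),
# and the i-th marginal M_i f is N(0, C_ii).  The claimed inequality
#   sum_i integral M_i f log M_i f <= integral f log f
# is then equivalent to Hadamard's inequality  det C <= prod_i C_ii.
n = 6
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)          # a random positive definite covariance

lhs = sum(-0.5 * np.log(2 * np.pi * np.e * C[i, i]) for i in range(n))
rhs = -0.5 * (n * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(C)))
print(lhs <= rhs, lhs, rhs)           # True: the marginal entropies dominate
```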
And the proof is extremely elementary. Each term satisfies ∫ M_i f log M_i f = ∫ f log M_i f, so you write

∫ f log f − Σ_i ∫ M_i f log M_i f = ∫ (f / Π_i M_i f) log (f / Π_i M_i f) · Π_i M_i f dv_1 ⋯ dv_n.

Notice what I've done: in some sense I've written down the relative entropy of f with respect to this product, by dividing here and multiplying there. Why do I multiply? Because this product Π_i M_i f is a probability measure — when you integrate, these factors are independent, each M_i f integrates to 1, and so does the product. That is of course extremely elementary. So, because x log x is a convex function, you can use Jensen's inequality: this quantity is greater or equal to (∫ f, which is 1) times the logarithm of the same integral, which is log 1 = 0. Extremely elementary. And unfortunately, it would be lovely to apply this kind of argument to the sphere inequality, but we are not quite there — the P_k f are not independent, so this example explains why the statement is plausible, but not how the 2 comes in. All right?

Great. Let me also mention that Cédric has a particular conjecture about this. Suppose the initial condition has fourth moments of order n — let me write this here in the corner:

∫ f_0 Σ_i v_i⁴ dσ^n ≤ C n.

What does this mean? If you concentrate at a pole, v_i² is of order n — you are on the sphere of radius √n, so v_i at the pole is √n — and when you raise it to the fourth power you get n². So this condition, in a certain sense, suppresses f_0 near the poles. And his conjecture is: if you assume that, you should get entropy production at an exponential rate — exponential decay of the entropy. And by the way, the argument I gave so far is not going to work for this, because you might think you could improve the inequality of Theorem 1 under this assumption. You cannot. That's unfortunate. Because I can put just a tiny fraction, say 1/n, of the mass on the cap; the entropy would still be huge, still of order |log ε| — the powers of ε just go along for the ride. So you cannot beat that inequality this way.

All right, good. So let me finish the proof of Villani's theorem. We now have the estimate Σ_k ∫ P_k f log P_k f ≤ 2 S(f). Plug this into the induction inequality: here you have an n, there you have a 2, over the same denominator, so

D(f) ≥ Γ_{n−1} · (n−2)/(n−1) · S(f), that is, Γ_n ≥ Γ_{n−1} (n−2)/(n−1).

And now you just iterate. I computed last time that Γ_2 = 2, and you learn that Γ_n ≥ 2/(n−1). That's easy.
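The iteration at the end is just a telescoping product; here is a three-line check in exact rational arithmetic that Γ_2 · Π_{m=3}^n (m−2)/(m−1) = 2/(n−1):

```python
from fractions import Fraction

g = Fraction(2)                        # Gamma_2 = 2, computed last time
for n in range(3, 12):
    g *= Fraction(n - 2, n - 1)        # Gamma_n >= Gamma_{n-1} * (n-2)/(n-1)
    print(n, g, Fraction(2, n - 1))    # the product telescopes to 2/(n-1)
```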
OK, good. So now let's go back to this kind of — I would like to call them entropy inequalities. These inequalities show up very, very often, so I would like to expand a little bit on them; that's quite useful. Let me first start with a very general remark — again completely elementary, but quite useful. In the little example above I used Jensen's inequality, and Jensen's inequality can actually be used to prove the following lemma.

Lemma. Let (X, μ) be a measure space and f a density, ∫_X f dμ = 1. Then

∫_X f log f dμ = sup_φ { ∫_X f φ dμ − log ∫_X e^φ dμ }.

This is what you call a Legendre transform: in some sense φ ↦ log ∫ e^φ dμ is the function dual to the entropy. Of course I have to assume that these things are integrable — otherwise it doesn't make any sense anyway.

The proof is quite elementary. You pick any φ and form

ψ = e^φ / ∫ e^φ dμ.

Notice again: this is a probability density, because that's the way we cooked it up. Then you do precisely what I did before: you compute ∫ (f/ψ) log(f/ψ) ψ dμ, which is non-negative. Can you still see? Now you just work out what this is. When you pull things apart, one term is ∫ f log f — the ψ in front cancels; from the 1/ψ inside the logarithm you get a −∫ f φ dμ; and you should not forget the normalization, which comes up with a plus sign, +log ∫ e^φ dμ. So

∫ f log f − ∫ f φ dμ + log ∫ e^φ dμ ≥ 0,

which shows the left side is always greater or equal to each quantity in the supremum. Now we have to check that the supremum is actually attained — maybe not important, but useful to do. Well, you choose φ cleverly: when you choose φ = log f, you get equality. So this is a useful device, because in some sense it reduces the proof of any entropy inequality to the computation of integrals ∫ e^φ dμ — and there is a vast technique behind the computation of such integrals.

So let me erase this, because for the next — shall I say — half hour I'm not going to talk about Kac. I'm going to talk more generally about inequalities of a certain type, known as Brascamp-Lieb inequalities. Let me write down an inequality which is fundamental and which will actually give us Theorem 1 when we apply the lemma. It is again due to Carlen, and it is disgustingly easy to write down; the proof is a little bit more tricky. Imagine you have n functions f_j, each a function of a single variable, and you evaluate the j-th one at the coordinate v_j. So you have this sphere S^{n−1}, you are given n functions of one variable, you take the product, and you integrate over the sphere. It doesn't really matter what the radius of the sphere is; the point is that the measure should always be normalized. Now you would like to estimate this in terms of some L^p norms, and what you can show is:

Theorem 2. ∫ Π_{j=1}^n f_j(v_j) dσ^n ≤ Π_{j=1}^n ‖f_j‖_{L²(S^{n−1})}.

Each norm is on the sphere: you can look at it this way because v_j is a coordinate on the sphere, so each f_j(v_j) is a function on the sphere — you square it and integrate. The constant 1 is sharp; you cannot do better. And the 2 is also sharp, in the sense that you can replace the 2 by something bigger, but you can never replace it by something smaller. And why? Because if you could replace it by a smaller exponent, the inequality couldn't be true.
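Theorem 2 is easy to test numerically. A Monte Carlo sketch on S³(√4) with some arbitrary positive test functions — my choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Check  integral prod_j f_j(v_j) dsigma^n  <=  prod_j ||f_j||_2
# on S^{n-1}(sqrt(n)) with the normalized uniform measure.
n, N = 4, 200000
v = rng.normal(size=(N, n))
v *= np.sqrt(n) / np.linalg.norm(v, axis=1, keepdims=True)  # uniform sphere samples

fs = [lambda x: 1 + 0.5 * np.sin(x),
      lambda x: np.exp(-x**2),
      lambda x: 1 + 0.3 * x,          # positive, since |x| <= sqrt(n) = 2
      lambda x: np.cosh(x)]

lhs = np.mean(np.prod([f(v[:, j]) for j, f in enumerate(fs)], axis=0))
rhs = np.prod([np.sqrt(np.mean(f(v[:, j]) ** 2)) for j, f in enumerate(fs)])
print(lhs, rhs, lhs <= rhs)           # strict inequality: these are not constants
```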
Anyway. So this is what is called a version of the Brascamp-Lieb inequality — we'll explain later what that is. Let's try to understand how we get the entropy inequality from it, because this is precisely the point. So, proof that Theorem 2 implies Theorem 1. What are we going to do? We choose f_j = √(P_j f); that's our choice. And now, how are you going to apply the lemma — what should the φ be? Let me think; I want to check that I didn't screw up the computation. Look: you read Theorem 2 with square roots. Why square roots? Because then you get that the integral of the product of the square roots of the P_j f is at most the product of the L² norms of the square roots, which is nothing but Π_j (∫ P_j f dσ^n)^{1/2}. But these integrals are all 1 anyway — you agree? So the right side equals 1, and what you know automatically is that

∫ Π_j √(P_j f(v_j)) dσ^n ≤ 1.

Good. So we choose φ with e^φ = Π_j √(P_j f), and we go to the lemma. What do we know? Well, e^φ dμ is maybe not a probability measure, but we just showed that its integral is at most 1 — that's what the inequality tells me. Therefore the logarithm is negative, so −log ∫ e^φ dσ^n is positive, and when I drop it I still get a lower bound. Agreed? So forget about that term; all I have to do is compute ∫ f φ. Well, φ is the sum of the logarithms with a half in front, φ = ½ Σ_k log P_k f. So by the lemma — what was it called? just "the lemma" —

∫ f log f ≥ ∫ f φ dσ^n = ½ Σ_{k=1}^n ∫ f log P_k f dσ^n.

And now you realize: because log P_k f depends only on the variable v_k, you can freely average f over all rotations which keep the v_k axis fixed — so you are free to replace f by P_k f in each term:

∫ f log f ≥ ½ Σ_{k=1}^n ∫ P_k f log P_k f.

Push the ½ over to the other side, and that proves Theorem 1. This is how these things work — in some sense it's the lemma which helps in this way. You can also do it more simply by some elementary Jensen argument, but I wanted to really push this lemma a little bit: this is an idea which was used heavily by Carlen and Cordero-Erausquin in proving entropy inequalities.
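For reference, here is the whole chain in one display — my compression of the argument just given:

```latex
% Theorem 2 with f_j = (P_j f)^{1/2}, then the lemma with
% \varphi = \tfrac12 \sum_j \log P_j f, for which \int e^{\varphi}\,d\sigma^n \le 1:
\begin{aligned}
\int \prod_{j=1}^n \sqrt{P_j f(v_j)}\,d\sigma^n
  &\le \prod_{j=1}^n \Big(\int P_j f\,d\sigma^n\Big)^{1/2} = 1,\\
\int f\log f\,d\sigma^n
  &\ge \int f\varphi\,d\sigma^n - \log\int e^{\varphi}\,d\sigma^n
   \;\ge\; \int f\varphi\,d\sigma^n
   = \tfrac12\sum_{k=1}^n \int f\,\log P_k f\,d\sigma^n
   = \tfrac12\sum_{k=1}^n \int P_k f\,\log P_k f\,d\sigma^n .
\end{aligned}
```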
OK, good. So we have Theorem 1; that's great. The question is how to prove something like Theorem 2, and here things get a little bit hairy. Let me tell you the idea. Very often we use entropy inequalities and all these kinds of things to show that certain flows converge to equilibrium — a Lyapunov function tells you, in general, something about the flow near the equilibrium. But you can also do the opposite: you take an inequality which you try to prove, and you try to invent a flow for which this inequality is a Lyapunov function. That's quite a fruitful idea; it has been done in many, many circumstances, and you can also do it in this connection. Of course, you can always say: my goodness, how did you come up with these ideas? And the answer is simply experience — you try whatever you can, and sometimes it works, and sometimes it doesn't.

So, how to prove Theorem 2. First of all, let me rewrite the inequality with f_j replaced by √f_j — I hope you don't mind — so it reads

∫ Π_j √(f_j(v_j)) dσ^n ≤ Π_j ( ∫ f_j dσ^n )^{1/2}.

Here's the idea: we take f_j, a function of v_j, and replace it by (e^{tΔ} f_j)(v_j), where Δ is the Laplace-Beltrami operator on S^{n−1}. How do I like to think about it? You sum over all α and β:

Δ = ½ Σ_{α,β} L_{αβ}², where L_{αβ} = v_α ∂_β − v_β ∂_α.

These vector fields, so to speak, are tangent to the sphere — that's the point: when you take a radial function and differentiate, you pull down a v_β here and a v_α there, and these cancel out. Or put differently, these are just angular derivatives in the (α, β) plane. By the way, who has seen this kind of notation? Nobody? This is what physicists do, and it's a really useful way of thinking about it for computations.

OK, so the Laplacian is this gadget, and now you have a semigroup e^{tΔ} — very nice, there's no problem in defining it. You apply e^{tΔ} to f_j, and you notice two things. Number one: e^{tΔ} f_j is still a function of v_j only. Why? Because the Laplacian commutes with rotations, so any rotation which fixes the v_j axis can be commuted through e^{tΔ}, where it does nothing to f_j; therefore the result can only be a function of v_j. Good. Number two — why do I choose this flow? It turns out that the integral over the sphere is preserved: ∫ e^{tΔ} f_j dσ^n = ∫ f_j dσ^n, because e^{tΔ} is a self-adjoint operator, and when you push it onto the constant function, it preserves constant functions. So you have already gained something: you have a flow which preserves the right-hand side. The right-hand side doesn't go anywhere; it's fixed.

And what about the left-hand side? Let's analyze t → ∞: what does e^{tΔ} f_j do as t goes to infinity? All the higher modes get damped out. The eigenvalues of the Laplacian are all non-positive — the first one is 0, and all the others are strictly negative — so they die out exponentially fast, and what you are left with is just the constant ∫ f_j dσ^n.
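The two properties just used — the mean is preserved, everything else decays — are easy to watch numerically. A sketch on S¹ instead of S^{n−1} (my simplification; the Fourier modes play the role of the spherical harmonics, with mode k decaying like e^{−k²t}):

```python
import numpy as np

# Heat semigroup e^{t*Laplacian} on the circle via Fourier modes.
N = 256
theta = 2 * np.pi * np.arange(N) / N
f0 = 1.0 + 0.8 * np.cos(theta) + 0.5 * np.sin(3 * theta)   # a density-like function

def heat(f, t):
    fhat = np.fft.fft(f)
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies 0, 1, ..., -1
    return np.real(np.fft.ifft(fhat * np.exp(-k**2 * t)))

for t in [0.0, 0.1, 1.0, 5.0]:
    ft = heat(f0, t)
    print(f"t={t:4.1f}  mean={ft.mean():.6f}  "
          f"max deviation from mean={np.abs(ft - ft.mean()).max():.2e}")
# the mean (k = 0 mode) never moves; all other modes die exponentially fast
```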
So e^{tΔ} f_j converges to the constant ∫ f_j dσ^n. Now you see: if you replace the f_j's by these time-dependent functions, what happens to the left side in the limit t → ∞? The f_j inside gets replaced by its integral, so the product of the square roots is just a constant, and since the measure is normalized, you get precisely the right-hand side. OK? All right. So how can I prove the inequality? All I have to show is that the left side is an increasing — or, let me put it this way, non-decreasing — function of time. When I've proved this, I'm done.

So how do you check this? You just take the derivative and check that it's non-negative, right? That's a computation; I don't need the lemma anymore. We have to compute

d/dt ∫ Π_j √(f_j(v_j, t)) dσ^n.

There are many ways to compute it, but the most convenient one: notice that e^{tΔ} f_j is a non-negative function — in fact, this semigroup is positivity-improving: the function is not going to be 0 anywhere once you start. This is essentially a Harnack-type statement; it's fairly — well, let's put it this way: not obvious, but standard. So we may write f_j = e^{φ_j} and compute. From ∂f_j/∂t = Δ f_j you get

∂φ_j/∂t = Δ φ_j + |∇φ_j|²,

with the gradient-squared correction term. And you have to remember what these things are:

Δ = ½ Σ_{α,β} L_{αβ}² (α and β are just any indices), and |∇φ_j|² = ½ Σ_{α,β} (L_{αβ} φ_j)².

So now you compute — I shouldn't do this on the board, because I would surely make mistakes. When you differentiate, remember the half in the exponent when you plug in √f_j = e^{φ_j/2}, and then you sum over j. What you get is

d/dt ∫ Π_j e^{φ_j/2} dσ^n = ½ Σ_j ∫ |∇φ_j|² dμ − ¼ Σ_{j,m} ∫ ∇φ_j · ∇φ_m dμ,

where dμ = e^{½ Σ_j φ_j} dσ^n: I just absorb the exponential into the measure, because at this moment I don't do any differentiation anymore. What I have done, however, is one integration by parts on the Laplacian, which gives me the second term — all you do is get everything down to first-order terms. Good. And now notice something: this function φ_j depends only on the variable v_j, so L_{αβ} φ_j is nonzero only when α = j or β = j; these are the only contributions you get. Now you use this very carefully, and what you notice — it's not a miracle, it's just a computation — is that this gadget is a full square.
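Here is my reconstruction of that computation, under the convention that the sums run over ordered pairs (α, β) — which is where the 1/8 comes from. Since L_{αβ}φ_j = 0 unless j ∈ {α, β}, only two terms survive for each pair:

```latex
\begin{aligned}
\tfrac12\sum_j \int |\nabla\varphi_j|^2\,d\mu
 - \tfrac14\sum_{j,m}\int \nabla\varphi_j\!\cdot\!\nabla\varphi_m\,d\mu
 &= \tfrac14\sum_{\alpha,\beta}\int\Big[\sum_j (L_{\alpha\beta}\varphi_j)^2
    - \tfrac12\Big(\sum_j L_{\alpha\beta}\varphi_j\Big)^2\Big]\,d\mu\\
 &= \tfrac14\sum_{\alpha,\beta}\int\Big[(L_{\alpha\beta}\varphi_\alpha)^2
    + (L_{\alpha\beta}\varphi_\beta)^2
    - \tfrac12\big(L_{\alpha\beta}\varphi_\alpha + L_{\alpha\beta}\varphi_\beta\big)^2\Big]\,d\mu\\
 &= \tfrac18\sum_{\alpha,\beta}\int
    \big(L_{\alpha\beta}(\varphi_\alpha-\varphi_\beta)\big)^2\,d\mu \;\ge\; 0.
\end{aligned}
```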
It's a full square. Oh, you cannot see it? Let me write it up here:

d/dt ∫ Π_j e^{φ_j/2} dσ^n = ⅛ Σ_{α,β} ∫ ( L_{αβ}(φ_α − φ_β) )² dμ,

and that's clearly non-negative. So this function is non-decreasing in time, and in the limit it converges precisely to the right-hand side — which proves Theorem 2. And you already understand what the optimizers are: the constant functions. So this way you can prove such inequalities relatively easily. And you think this is a trick? Well, not really. What you always have to imagine in this business is: what could the optimizers be? The constant functions are a reasonable guess for the optimizers, and then you say: aha, so what kind of flow converges to a constant function? The easiest one to take is, of course, the heat equation.

So next time I'm going to expand on this. There is a whole class of inequalities which we are actually going to use; they are called the Brascamp-Lieb inequalities, and we will only need a special case of them. Let me give you a sort of general view of this kind of inequality. We are going to be just on R^m. Suppose you have a bunch of functions f_j, and a bunch of linear maps B_j from R^m to subspaces H_j of R^m — I don't want to specify that they are R^k or whatever; they can be crooked, they can just hang in there; they are subspaces of R^m. All right, so now you can ask yourself: if I take the product of these functions evaluated at B_j x, for x ∈ R^m, can I bound

∫_{R^m} Π_j f_j(B_j x) dx ≤ C Π_j ‖f_j‖_{L^{p_j}(H_j)},

a product of certain L^p norms of the functions in their own variables? Can you prove something like this? Of course there is a constant C, which will depend on the B's and everything, and what you would really like to do is optimize: find the supremum of ∫ Π_j f_j(B_j x) dx / Π_j ‖f_j‖_{p_j} over all such functions. That's the problem — and you agree, it looks unsolvable.

What Brascamp and Lieb proved — actually it was later Lieb, mostly, who realized this — is the theorem: instead of optimizing over all functions, it suffices to optimize only over Gaussians. In other words, all you have to do is take the f_j to be Gaussian functions. What is a Gaussian function? A quadratic form in the exponent — no shifts, nothing. You just plug these Gaussians in; we know how to compute integrals of Gaussians; and then you optimize. Of course, when you compute the integral with these Gaussians, you get exceedingly complicated expressions in terms of the B_i and the covariances of the Gaussians — usually unsolvable problems. But it is a fantastic theorem, because we know how to compute with Gaussians.

Now let me give you a rigorous version of that theorem where you actually can compute the constant. It comes from convex analysis, believe it or not — all this stuff has beautiful applications to volumes of convex sets.
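As a concrete preview of the geometric version stated next: the Loomis-Whitney configuration in R³ (coordinate-plane projections B_i, all c_i = ½) satisfies the structural condition, and Gaussian trial functions give equality — consistent with Lieb's theorem that Gaussians are the optimizers. A numerical sketch; the example choice is mine.

```python
import numpy as np

# Loomis-Whitney in R^3 as a geometric Brascamp-Lieb instance:
# B_i : R^3 -> R^2 deletes the i-th coordinate, and with c_i = 1/2 the
# condition sum_i c_i B_i^T B_i = Id holds, since each coordinate
# survives in exactly two of the three projections.
B = [np.delete(np.eye(3), i, axis=0) for i in range(3)]
print(sum(0.5 * b.T @ b for b in B))                 # the 3x3 identity

# Gaussian trial functions f_i(y) = exp(-|y|^2) give equality:
#   LHS = int_{R^3} prod_i exp(-|B_i x|^2)^{1/2} dx = int exp(-|x|^2) dx = pi^{3/2}
#   RHS = prod_i (int_{R^2} exp(-|y|^2) dy)^{1/2}  = pi^{3/2}
s = np.linspace(-8.0, 8.0, 100001)
g1 = np.sum(np.exp(-s**2)) * (s[1] - s[0])           # 1d Gaussian integral ~ sqrt(pi)
lhs = g1**3                                          # the R^3 integral factorizes
rhs = (g1**2) ** 1.5                                 # pi^{3/2}, via pi = g1^2
print(lhs, rhs, np.pi**1.5)                          # all three agree
```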
So let me write down this theorem. This is Brascamp-Lieb; I should also mention Keith Ball and Franck Barthe. By the way, at the end of the course I will give you a list of references about all this stuff — I think that's presumably the most useful thing about the course.

So now, what is the theorem? Let me make sure that I don't make too many mistakes. For i = 1, …, k, let H_i ⊂ R^m be subspaces (of dimensions d_i, say), and let B_i : R^m → H_i be linear maps with the property

B_i B_i^T = Id_{H_i},

where Id_{H_i} denotes the identity map on H_i — not "the identity matrix", because the identity matrix would always refer to the standard basis of R^m; I just call it the identity on the Hilbert space. (This, by the way, says that B_i^T is an isometry.) Assume further that there exist non-negative numbers c_i, i = 1, …, k, such that the important relation

Σ_{i=1}^k c_i B_i^T B_i = Id_{R^m}

holds. Notice: here it is B_i^T B_i — that's reversed — and the sum is now really the identity matrix on R^m. Then for any collection of non-negative functions f_i : H_i → R, i = 1, …, k,

∫_{R^m} Π_{i=1}^k f_i(B_i x)^{c_i} dx ≤ Π_{i=1}^k ( ∫_{H_i} f_i )^{c_i}.

And you notice: here I really have a constant, the constant is 1, and that's sharp — best possible. What are the optimizers? Well, I don't have time anymore, but next time I will show you that the optimizers are Gaussians. That's very easy to see; it follows immediately from this relation. In fact, I will also try to convince you that this is an inequality which you surely have seen in certain versions: it goes back to an inequality by Loomis and Whitney from 1949, which you, by the way, use implicitly in the proof of the Sobolev inequality — the L¹ norm of the gradient controls the L^{n/(n−1)} norm of f. I will show this because it's a very simple example. Then I will give you a proof, and then I would like to apply all of this to a new topic in the Kac model, namely thermostats — it raises some interesting mathematical issues. Good. So then I'll see you tomorrow. Thank you.