Okay, so we can start this morning session with Professor Loss from Georgia Tech; his second lecture is on entropy and the Kac master equation. Thank you very much. Are the speakers okay? Good. So last time—let me start by writing down again the Kac model, the master equation really, but I like to call it the Kac model. So what was it? We start with an initial condition F_0 in L^1 of S^{n-1}(√n). Again, let me remind you: this is the sphere in R^n, so it's an (n-1)-dimensional sphere of radius √n. The way I talked about it last time, n is the energy of the system, and we assume that the energy per particle is 1; that's a reasonable number. Okay? The evolution under the master equation is the following equation: ∂_t F = n(Q_n − I)F, with the initial condition F(v, 0) = F_0(v). And let me remind you, v stands for (v_1, ..., v_n); these are the velocities. What is Q_n? Q_n is this operator here—by the way, can you read my handwriting? Okay, good. It's a sum of collision operators, if you like, over all pairs. So there are n choose 2 terms, and I normalize by n choose 2. And the R_{ij} are given in the following way: you apply it to some function F, you put v_i*(θ) and v_j*(θ) in the i-th and j-th slots, and you integrate ρ(θ) dθ from −π to π. These are just the post-collisional velocities; let me write them down: v_i*(θ) = cos θ v_i − sin θ v_j and v_j*(θ) = sin θ v_i + cos θ v_j. Okay? Now, ρ(θ) is a probability distribution. It's not going to matter very much; let's just assume that ρ is uniform. That makes life easier, and it also makes it easier for me to write things down. Okay. So this is the model.
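For reference, the blackboard formulas just described can be collected in one display (a transcription of the setup above, with the lecture's notation):

```latex
% The Kac master equation, uniform collision kernel rho(theta) = 1/(2 pi)
\begin{aligned}
\partial_t F &= n\,(Q_n - I)\,F, \qquad F(\vec v, 0) = F_0(\vec v),
  \qquad \vec v = (v_1,\dots,v_n) \in S^{n-1}(\sqrt{n}),\\
Q_n &= \binom{n}{2}^{-1} \sum_{i<j} R_{ij}, \qquad
(R_{ij}F)(\vec v) = \int_{-\pi}^{\pi}
  F\big(\dots, v_i^*(\theta), \dots, v_j^*(\theta), \dots\big)\,
  \rho(\theta)\, d\theta,\\
v_i^*(\theta) &= v_i\cos\theta - v_j\sin\theta, \qquad
v_j^*(\theta) = v_i\sin\theta + v_j\cos\theta .
\end{aligned}
```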
And last time, I explained to you a little bit that if you assume that this F_0 is a chaotic sequence—remember, it depends on n, and you let n go to infinity—meaning that the marginals of F_0 become roughly independent as n gets larger and larger, then this property of chaoticity is actually preserved under the time evolution. And we also know that the single-particle marginal satisfies the nonlinear Boltzmann equation. So in other words, this ties this Kac model, which is a model about collisions, together with the Boltzmann equation, which is also a model about collisions. The Kac model is in very high dimension; the Boltzmann equation is in very low dimension, but nonlinear, okay? So that's the framework. Good. Now, one motivation of Kac to investigate this model was that you can study approach to equilibrium. So let's talk about that. You notice that this Q_n is really an averaging operation—it averages things. For example, take F to be the constant function. By the way, I should say the measure here, σ_n, is normalized: whenever I integrate over the sphere, I divide by the surface area of the sphere; that's my convention. And this is convenient, because you see right away that the function 1 is a probability distribution on this space. You also notice that when you stick 1 into Q_n and average, you get 1 back. So this operation preserves the function 1. In other words, when you take the initial condition to be the function 1, the right-hand side is zero: the function 1 is an equilibrium state. All right. So now you ask yourself: what are all the possible equilibrium states? And there is a little lemma—just a simple computation—about the quadratic form of F with (I − Q_n)F.
I have to explain what this means. This equals—averaged over all pairs i < j; sorry, I forgot at first to sum on i and j—the integral from −π to π, ρ(θ) dθ, of [F(v_1, ..., v_i*(θ), ..., v_j*(θ), ..., v_n) − F(v_1, ..., v_n)] squared, integrated over the sphere. Okay? So what is this? You notice, by the way—I haven't told you yet—this distribution ρ has a special property. First of all, it's constant, but it's also symmetric: ρ(−θ) = ρ(θ). And what this means is that when you look at this Q_n as an operator on L² of the sphere of radius √n, again with the same measure, then this operator is self-adjoint, okay? We call this microscopic reversibility, and it's an important consideration if you want to use any kind of spectral estimates or anything of that sort. You see, as soon as you violate this condition, the operator is not self-adjoint, and things get a little bit tricky and hairy. Okay, good. So now, what is this? This bracket here is just the inner product on my Hilbert space, this L² space. And what this is saying—it's a simple, extremely elementary computation. Namely, when you square out the difference, the terms of F with itself can be handled because this is a rotation, and you can undo the rotation—the L² norm is invariant under the rotation. So you get twice the integral of F² minus the cross term, and the cross term gives you precisely Q_n, okay? So now what you see here—let me stick in the ρ(θ) for good measure—is this: if F is an equilibrium state, in other words if the left side vanishes, then the right side has to vanish too. Which means that if this ρ(θ) is halfway reasonable—what do I mean by halfway reasonable? For example, a distribution which is spread out around θ = 0, so that you have lots of rotations.
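The lemma just described can be written out in display form (my reconstruction from the verbal description; the prefactor 1/2 appears when you square out the difference):

```latex
% Dissipation identity: expanding the square and using that each rotation
% preserves the uniform measure, the cross term reproduces Q_n.
\big\langle F,\ (I - Q_n)\,F \big\rangle
 = \frac{1}{2}\binom{n}{2}^{-1} \sum_{i<j}
   \int_{S^{n-1}(\sqrt{n})} \int_{-\pi}^{\pi}
   \rho(\theta)\,
   \big[ F(v_1,\dots,v_i^*(\theta),\dots,v_j^*(\theta),\dots,v_n)
        - F(v_1,\dots,v_n) \big]^2 \, d\theta \, d\sigma_n .
```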
You see, if you put in a delta function at the origin, then of course the right side is identically 0 and nothing happens. So take a genuinely spread-out distribution. Now, what does it mean that this quantity vanishes? It means that there are a ton of rotations in each plane for which this function is rotation invariant, which means, of course, that the function is constant, okay? That's the consequence: Q_n F = F implies F = 1, okay? All right. So now, what does this mean? Well, this means that there is approach to equilibrium: e^{n(Q_n − I)t} F converges to 1. So maybe, let me leave this here and wipe this out; I'll do this over here. Well, how do we see that? You can go into the spectral representation of Q_n. After all, we know it's a self-adjoint operator, right? And last time, I also explained to you that this operator has discrete spectrum. This was an argument set up in words using spherical harmonics; let me maybe repeat it. Since this operator is made up of rotations, and these rotations commute with the Laplacian on the sphere—the Laplace–Beltrami operator—Q_n also commutes with the Laplace–Beltrami operator. Therefore, the subspaces consisting of spherical harmonics of fixed degree are invariant subspaces for Q_n, okay? And when you restrict Q_n to each such subspace, you have a finite-dimensional problem. Therefore, you have this discrete spectrum, okay? That's easy. But you see, as I said last time, these subspaces can get large. Huge. So what do you know? Inside each subspace, the Laplacian is just a multiple of the identity; Q_n can be anything, okay? You don't know much. So now, let me just look at the spectrum. The first eigenvalue is zero, and we know zero is a simple eigenvalue—I just proved this, right? Because of the function 1. And now let me order the eigenvalues in this way.
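The decay estimate that the next computation establishes can be written compactly as follows (with 0 = λ_0 < λ_1 ≤ λ_2 ≤ … the eigenvalues of n(I − Q_n) and u_j the corresponding eigenfunctions):

```latex
\big\| e^{\,n(Q_n - I)\,t} F_0 - 1 \big\|_{L^2}^2
 \;=\; \sum_{j \ge 1} e^{-2\lambda_j t}\,\big|\langle u_j, F_0\rangle\big|^2
 \;\le\; e^{-2\lambda_1 t}\, \big\| F_0 - 1 \big\|_{L^2}^2 .
```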
And now what you can do is write down the following quantity in L²: the norm of the evolved function minus the function 1—squared, right? So you evolve the function, you subtract the 1, and you ask yourself: does this go to zero? And the answer is yes. What you do is you write it out. Let me call the eigenfunctions u_j, j = 0 to infinity. In fact, we know what these eigenfunctions are—they are spherical harmonics—but of course that's not really saying much. Okay, so these are the eigenfunctions, and you can expand this in the spectral representation. Now, you notice that the eigenfunction u_0 is the constant 1, and it doesn't show up because it's subtracted off. So you just have a sum over j from 1 to infinity of e^{−2λ_j t} times the square of the inner product of F with u_j—and I almost forgot the 2 in the exponent, which is there because I have the square of the norm. And now you notice that this is certainly less than or equal to what you get when you pull out the smallest nonzero eigenvalue, e^{−2λ_1 t}. What's left over is just the sum, and that's of course nothing but the Fourier expansion of F − 1, because I start at j = 1. So that's what this is. So you see—nice: this goes to zero exponentially fast if you know what this λ_1 is. And λ_1 is positive; we know that, right? Okay. Good. So how big is it? Let me just check that I didn't cheat on anything so far. No. All right. So here comes the first conjecture of Kac—I think I can erase this lemma now. So let me now call Δ_n the spectral gap of n(I − Q_n), okay? In other words, this Δ_n is my λ_1, but I call it Δ_n because I want to emphasize that this thing can depend on n, okay? So let me write down the definition: Δ_n is the inner product ⟨F, n(I − Q_n)F⟩.
You take the infimum of this quantity over all normalized states in L², and you want to make sure that F is not the function 1, so you assume that F is perpendicular to the function 1, okay? That's what it is. That's the gap. (Yeah—there's an n here. No, that's too many n's. Thank you. And the norm is 1, yes. Thank you.) Okay? So that's the gap. And Kac conjectured—he did this in 1956—the following about Δ_n. Let me draw a little picture here. So the story is this: here's zero. By the way, I forgot to say: this operator n(I − Q_n) is always greater than or equal to zero. It's a positive operator; that's easy to see. It can never be negative, all right? So now you have here this Δ_n, and then you have the other eigenvalues above it, okay? And Δ_n is of course precisely the gap—meaning there is no other eigenvalue in between. Okay? His conjecture was that Δ_n ≥ c for some fixed positive constant c independent of n. That's his conjecture. And what is interesting, when you look into his paper, is that he really spends a lot of time and energy on trying to prove this, but he never succeeds. And then—well, the conjecture was proved in 1999. So you can actually have two takes on this: either the conjecture was very difficult, or the problem was uninteresting, right? So it was proved in 1999 by Élise Janvresse. And the proof is essentially a version of the martingale method, which was invented by Horng-Tzer Yau, okay? It's a complicated thing. Now, when you think a little bit about it—around 2000, the gap was then computed explicitly: Δ_n = (1/2)(n + 2)/(n − 1). So in other words, you can actually compute this explicitly. Now you see, I'm not going to prove it—it's actually not very hard—but I will make some remarks.
You see, it's fairly easy, actually, to figure out what the corresponding eigenfunction is. And the corresponding eigenfunction looks like this: it's a sum of v_j to the fourth, minus a constant which depends on n. And this constant is chosen in such a way that this function is orthogonal to the constants. I forgot the exact number; it doesn't really matter, okay? And this is a function which you can write down because you know more or less what it is supposed to be on the level of the Boltzmann equation, right? So you write this down, you check that it is indeed an eigenfunction, and then you compute the eigenvalue of this operator. You can do this explicitly. It's not a big deal—a little bit complicated, lots of indices. And what you get is this number. Of course, that doesn't tell you whether this is really the gap or not, and then you have to work, okay? That's sort of the main part of the proof. It's usually quite easy to guess what these things are supposed to be; it's a little bit harder to really show that this is the gap—in other words, that there is no eigenvalue between zero and this number, right? But you see, it's pretty convincing: when n goes to infinity, this converges to one half. By the way, one half is precisely the eigenvalue which corresponds to the gap of the linearized Boltzmann equation, okay? And the eigenfunction here, when you take the first marginal, corresponds precisely to the eigenfunction of the linearized Boltzmann equation, right? You have this complete correspondence there. So I would say this is now settled in a very satisfactory way. I should also mention David Maslen: he actually computed many, many more eigenvalues of this Kac operator using representation theory. Okay, so this is the story. And by the way, I should also mention that this is of course always assuming that ρ is 1/(2π).
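As a quick numerical sanity check of the explicit gap formula and its limit (a minimal sketch; the function name is mine):

```python
def kac_gap(n):
    """Spectral gap of n(I - Q_n) for uniform rho, per the explicit
    formula quoted in the lecture: Delta_n = (1/2) * (n + 2) / (n - 1)."""
    return 0.5 * (n + 2) / (n - 1)

# The gap is bounded below uniformly in n (Kac's conjecture) and
# converges to 1/2, the gap of the linearized Boltzmann equation.
for n in [3, 10, 100, 10**6]:
    print(n, kac_gap(n))
```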
Because when ρ is not 1/(2π), you know, things get much more complicated. All right. So now, I'm not going to prove this. However, I would like to criticize the gap a little bit. You see, suppose you have a function F in L¹ on the sphere—let me just write L¹ to make it clear; after all, it's a course. And let's assume that this function is chaotic; in other words, it's more or less a product of functions f(v_j), approximately. And you know, because F is a probability distribution, that its integral is 1—and remember, F should always be symmetric; I forgot to say that. It's reasonable, because this operator here respects symmetry and we do not distinguish the particles, right? Maybe I'll write this down here: F symmetric. Okay. So now, when you write this out, what do you get? You get roughly the integral of f to the power n, and therefore the integral of f should be 1. Now, let's look at the L² norm: the integral of F². Again, this is a little bit hand-waving, but I think it gets the idea across. The integral of F²—what is that? Well, it's more or less, morally, (∫ f²) to the power n. And now, you see, whenever you have a density whose integral is 1, you know that the integral of its square is strictly bigger than 1, unless f is constant. Okay? So, in other words, what this is, it's a constant bigger than 1 raised to the power n. And imagine—n, you have to think, is a number of the order of Avogadro's number, something like 10²³. So these numbers are enormous, right? So, what I'm saying is: when you look at the Kac conjecture, the L² norm—the Hilbert space L²—is not really a good way of looking at this problem. Why? Because the L² norm initially is c^n for some constant c bigger than 1, and then you multiply by e^{−Δ_n t}, and Δ_n is roughly of the order one half.
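The point of this criticism can be made quantitative with a back-of-the-envelope sketch (the numbers here are illustrative assumptions of mine, not from the lecture):

```python
import math

def time_to_order_one(n, c, gap=0.5):
    """If the initial L2 distance to equilibrium is ~ c**n with c > 1
    (chaotic data), the spectral-gap bound e**(-gap*t) * c**n only
    becomes O(1) at t ~ n * ln(c) / gap: a time growing linearly in n."""
    return n * math.log(c) / gap

# With n comparable to Avogadro's number, the bound is useless on
# physical timescales.
print(time_to_order_one(6e23, 1.1))
```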
You have to wait astronomical times until this thing comes down to a reasonable quantity, right? Okay? This is not the way to look at this problem. So, therefore, we need a replacement for this L². So let me get rid of the spectral gap; it's not the appropriate tool. It's kind of embarrassing, right? You work hard to compute this gap because everybody tells you to do so, and then finally you realize it's actually not so useful, right? Okay. But anyway, don't get discouraged by these things. At least you learn something, right? I did, at least. So, let's see. What we have to look at is entropy. So now, the relative entropy—I call it S(F); really, I should write it in a way that indicates it's the entropy with respect to the equilibrium state, which is 1—is by definition the integral of F log F over S^{n−1}(√n). Notice it's actually the negative of the physical entropy. I'm being a little bit pedantic—I hope you don't mind—but I want to make sure it is understood that this is a relative entropy on S^{n−1}(√n). Okay? That's what that is. Now, what's the advantage of the relative entropy over the gap? Because if you assume that your function F is roughly a product function, okay, then you see that while the L² norm gave a product, here the product inside the logarithm turns into a sum. So you see right away that the entropy is of the order of n: for approximate product functions, S(F) is of the order of n times a constant. In other words, it's extensive, right? The entropy per particle is of order 1. And now you can ask yourself: can you have approach to equilibrium in entropy? So now the point, of course, is again: can we come up with a reasonable exponential decay of the entropy, with a rate which doesn't really depend very much on the number of particles? This is known as Cercignani's conjecture. Let me make sure that I have that right—now you have to tell me whether I've misspelled Cercignani. Is this right?
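The extensivity argument just sketched, in display form (heuristic, as in the lecture; ≈ means "to leading order for an approximate product state"):

```latex
S(F) = \int_{S^{n-1}(\sqrt{n})} F \log F \; d\sigma_n, \qquad
F(\vec v) \approx \prod_{j=1}^{n} f(v_j)
\;\Longrightarrow\;
S(F) \approx \int \prod_{j} f(v_j) \sum_{k} \log f(v_k)
 \approx n \int_{\mathbb{R}} f \log f \, dv ,
```

so S(F) is of order n, and the entropy per particle is of order 1.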
Okay. This was a conjecture which was originally formulated by Cercignani for the Boltzmann equation—I mean, the full three-dimensional Boltzmann equation—but of course it carries over to this model, too. And the conjecture is the following: S(F(t))—where F(t) for me is just the evolution of F_0 under this Kac evolution; I denote it by F(t)—is less than or equal to e^{−ct} S(F_0). You notice that when F is equal to the constant function, you get 0, right? So this is a reasonable conjecture. And c should be essentially independent of n. So that's the conjecture, okay? So now here we come into exactly the same situation as we learned from Laurent Desvillettes, right? We have to sort of discover what is called an entropy structure for our problem. So let's see to that. What do we need? Well, the first thing we need is a unique equilibrium state, right? And we do have one—I just proved this before. We have this unique equilibrium state, okay? The next thing you need is the dissipation of this entropy. So, in other words, what you compute is the derivative dS/dt, which is d/dt of this integral here. And allow me to forget about the reference state 1; this all depends on t. And when you do that, you get two terms, right? The one term is the integral of n(Q_n − I)F times log F—that comes from differentiating the first factor. And then you get the derivative from the other term: that's F(t) times 1/F(t)—F(t) is always positive, so that's fine—times n(Q_n − I)F. Of course, the Q always depends on n, right? And you cancel the F against the 1/F. And you see that this second term is zero, because that's mass preservation; it's just the integral of n(Q_n − I)F.
And since Q_n is self-adjoint, you can push it over onto the function 1, and on the function 1 you get zero, right? So this is just mass preservation; this term equals zero. And let me put the minus sign out front. All right. So—let me make it really clear—this is what we call −D(F), to keep the notation of Laurent Desvillettes. Okay? So now, what is the next question? The next question is that you need a relation between the entropy production and the entropy. Okay? And let me write down a definition. I call it Γ_n. What do I do? I take the ratio of the dissipation to the entropy: D(F) divided by S(F). And then I take the infimum over all F's which, of course, are normalized—whose integral is 1. That's Γ_n. And this Γ_n is what you call the entropy production rate: how much entropy gets produced relative to the entropy you have. And this is a notion which was popularized by McKean in 1965. I should also mention Carlen and Carvalho, who, I think, in the 1990s had a whole series of papers making a huge amount of use of this entropy-production idea. So, what would one hope for? A lower bound on Γ_n which, hopefully, is independent of n. Okay, so let me start with the first theorem—maybe I'll call it Theorem 1—and that's due to Cédric Villani, I think 2003. He showed that Γ_n ≥ 2/(n − 1). And of course, I always do this for ρ = 1/(2π). All right. So now, you can say: who cares, huh? The problem is that this bound is actually more or less correct—well, I wouldn't say sharp, but let me tell you what is known. Amit, when did you do it?
I think it was 2014 or something like that—I forgot, and it was my student Amit Einav; I forgot the year, it's terrible. He showed: for any η > 0 there exists a C_η, independent of n, so that Γ_n ≤ C_η / n^{1−η}. So, in other words, what this result says is that Villani's estimate is essentially sharp—and I could even imagine that the constant is sharp. Okay, so nevertheless, you could say, well, this is a bad result, or it's not useful. But you see what it does: it points you to something. Namely, when you think about how Amit showed this—it's a complicated computation, very complicated—what you have to do is find a reasonable trial state. And the way he does it is you take 50% of the particles with very low energy and 50% of the particles carrying almost all the energy, and then you start letting them collide. And what these particles with high energy have to do is hit the particles with low energy and start pumping them up, so that they go towards equilibrium, right? Okay? So now imagine, when I talk about, say, this room: you take all these gas particles and pack them in one corner, and have a few left over which bombard those particles with a lot of energy, and hope that this really equilibrates within a reasonable amount of time—time of order one, right? That's what you would like to have. I think this is very unlikely. That's my personal feeling. It will take a huge amount of time. And I think this points to the general problem in this area. Namely, you see, these are phenomenological equations. You can write down lots of functions, but most of the functions which you write down don't make any physical sense. So what is it—which functions do make physical sense?
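The two bounds just discussed can be compared numerically (a hedged sketch: C_η = 10 and η = 0.1 are made-up illustrative values, since only the shapes matter):

```python
def villani_lower(n):
    # Villani's theorem: Gamma_n >= 2 / (n - 1)
    return 2.0 / (n - 1)

def einav_upper(n, eta=0.1, c_eta=10.0):
    # Einav's bound: Gamma_n <= C_eta / n**(1 - eta); constants illustrative
    return c_eta / n ** (1.0 - eta)

# Both bounds vanish as n grows: the entropy production per unit entropy
# cannot stay bounded below uniformly in the particle number.
for n in [10, 100, 10000]:
    print(n, villani_lower(n), einav_upper(n))
```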
So, nevertheless, what I would like to do is give you a proof of Villani's theorem, because it involves some interesting technology and it raises a conjecture which Cédric actually made himself. [Question:] Just to understand—you mentioned this idea of half the particles having a lot of energy, but they're not localized. So this is homogeneous in space, right? It's spatially uniform. [Answer:] Exactly. But still, you see what this means: on this sphere, you have some very strange states. It has to be spatially homogeneous, but with half of the particles carrying a lot of energy, the mass is very, very unequally distributed on this sphere, right? And what Amit showed is that with such states you can actually push Γ_n very low, very small, okay? Makes sense. By the way, let me also mention something else. This is another interesting point, which is related to the talk of Anton Arnold. You see, what is this telling you? When I draw a picture—so here's time, and here is the entropy S(t)—it tells you that the entropy, for very large n, is very, very flat at the beginning. Extremely flat. But you see, this doesn't tell you that after a certain amount of time, say of order one, this thing can't suddenly drop like that. It doesn't tell you that. You would have to have other types of estimates, and I would love to see an example where you can actually show that the entropy decay is slow all the way to the end. I have never found one. The entropy production at the beginning—that you can make small. But it doesn't tell you what happens globally, right? So this is another problem which I have no clue how to deal with. Okay, good. So now let's start thinking about proving Villani's theorem. And remember, I will always assume now that ρ(θ) is uniform, equal to 1/(2π).
It just makes life easier. Now let's take n = 2. So what is the approach? I'll give you a proof which I worked out with Eric Carlen and Maria Carvalho, which is based on induction. It's very elementary in a certain sense, and it brings up some interesting mathematical issues afterwards. So let's start with n = 2. And then you know that df/dt for two particles is 2(Q_2 − I)f. But what is Q_2 f? f is a function which lives on the circle, right, because there are two particles. And what does this Q_2 do? It just averages over the circle. Now, when you average f over the circle with the uniform weight, that's just the integral of f. But that's 1. So the equation is df/dt = 2 − 2f. And that's a simple differential equation which you can solve. What is the solution? The solution is f(t) = 1 − e^{−2t} + e^{−2t} f_0. It's completely elementary—you can just check. Okay. Now you see, when you then look at the entropy at time t, you notice these two weights add up to 1. And we know the entropy function—I don't have it on the board anymore; let me write it down again. The function x log x is a convex function. It looks like this: it tends to 0 at x = 0, and it vanishes at x = 1. It's a convex function. So therefore you can say: since f(t) is a convex combination of the function 1 and f_0, you can use Jensen's inequality, and you see that S(f(t)) is bounded above by (1 − e^{−2t}) S(1) + e^{−2t} S(f_0). And S(1), of course, equals 0. So you learn something about the dissipation—wait, what am I doing? This is wrong; it's the other way around. The inequality can only go one way: this here is less than or equal. So, subtracting S(f_0) and letting t go to 0, what you get on the right-hand side is (e^{−2t} − 1) S(f_0).
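The explicit n = 2 solution can be checked directly (a minimal numerical sketch; the function name is mine):

```python
import math

def f2(t, f0):
    """Solution of the two-particle Kac equation df/dt = 2(1 - f)
    with uniform rho: f(t) = 1 - e^(-2t) + e^(-2t) * f0."""
    w = math.exp(-2.0 * t)
    return 1.0 - w + w * f0

# Verify the ODE with a centered finite difference at an arbitrary point;
# also note f relaxes to the equilibrium value 1.
f0, t, h = 3.0, 0.7, 1e-6
lhs = (f2(t + h, f0) - f2(t - h, f0)) / (2.0 * h)
rhs = 2.0 * (1.0 - f2(t, f0))
print(abs(lhs - rhs))
```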
Taking the derivative at t = 0, that gives you −D(f_0) ≤ −2 S(f_0). Or, put differently, Γ_2 ≥ 2: you change the sign and divide by S(f_0). That's completely elementary. All right. So now, to prove Villani's theorem—forgive my Swiss German pronunciation of Villani, I hope you don't mind—you start doing an induction. Okay? Let me try to set this up; this board I don't need either. So the first observation you have to make in this business—and that's also what we did when computing the gap—is that you have to use the symmetry. When you look at Q_n, you can write it as (1/n) times the sum over k from 1 to n of Q_{n−1}^{(k)}. And I have to explain what this is. You see—let me remind you, or I have it here—Q_n is this normalized sum over pairs, right? Now, what is Q_{n−1}^{(k)}? I remove one particle: I remove the k-th particle. I just remove it. Okay? So, in other words, in Q_{n−1}^{(k)} you have a sum over pairs i < j between 1 and n with i, j ≠ k—without the k-th particle—and you divide by (n−1 choose 2). Now you can easily check that each term here certainly shows up there, and shows up with the same weight. And you see right away that this is an average and that is also an average, so this is a true equation, right? It's a little bit of combinatorics; it's elementary. Okay. So now, with that, this helps you reduce Γ_n—that's the goal—to Γ_{n−1}. But how do we do that? Let me write down D(f) at the n-particle level: that's n times the integral of (I − Q_n) f log f. You see, I have to keep track of the n now; that's important, right? And that I can write as (1/(n−1)) times the sum over k from 1 to n of the integral of (n−1)(I − Q_{n−1}^{(k)}) applied to f, times log f. By the way, it's pretty clear that this is true. You see, this n−1 that I stick in here—why?
Because I want to have here the generator of the time evolution for one particle less. That's what this is; therefore I put an n−1 here, right? Remember, the generator is n(Q_n − I)—with the sign changed here. So I put an n−1 in, and of course I have to divide by n−1 again; the n out front disappears against the sum, okay? Good. So now what you would like to do is use the induction assumption about the entropy production for this operator. But you see, there's a little bit of a problem: the particle v_k is missing, okay? So now you say: ah, no, that's very easy. What I do is split the integral into an integral dμ(v_k) times the integral over S^{n−2}(√(n − v_k²)) with respect to dσ_{n−2}, the uniform probability measure on that sphere, okay? This is of course simply what's left over so that integrating both factors gives you the integral over the total sphere—namely, the sphere S^{n−1}(√n), right? Okay? Good. And then what do you have inside? (n−1)(I − Q_{n−1}^{(k)}) f log f, right? That's what you have. That's the way I think about it. And what's the problem? The problem is that this function f, when you restrict it to that smaller sphere, is not a probability distribution: its integral over that sphere is not 1, okay? You need that, because otherwise the entropy estimates don't make any sense, right? So therefore, you start normalizing it, okay? And how do I normalize it on S^{n−2}(√(n − v_k²))? Well, what you have to do is simply integrate f over this sphere and divide, right? But there's a better way of thinking about it. You see, what is this integral over the sphere? You can think of it in the following way: I take the axis v_k—just think of it as the north-pole axis, okay?—and then I take my function and average it over all possible rotations which fix that axis. That gives you exactly the normalization, right?
Because what it is, is that you integrate over all variables except v_k—you keep that fixed, okay? And this operation, of averaging your function over all rotations which keep the v_k axis fixed, produces a projection, and I call this projection P_k. So let's write this down: P_k f equals—I'm not going to write down the explicit formula, I could, but it's not the way to think about it—the average of f over all rotations of the sphere that keep the v_k axis fixed, okay? That's the normalization; that's the way I think about normalizing f on this sphere. Put differently: g_k = f / (P_k f) is normalized. And it's easy to see that P_k is a self-adjoint projection. Why is it a projection? Because the resulting function depends only on the variable v_k, right? And when you apply the average once more, nothing happens, because the function is already invariant under all those rotations, okay? So this is a good way of thinking about the normalization. Good. So now let's go on and set up the induction. Let me just copy this down: I write D(f) again, (1/(n−1)) times the sum over k from 1 to n of the integral of (n−1)(I − Q_{n−1}^{(k)}) applied to f, times log f. Next, I can write f here as g_k times P_k f, right? Because that's the definition of g_k. Now you see, P_k f is a function which depends only on v_k—and what does this operator do? It acts only in the variables other than v_k, so it doesn't see functions which are constant in those variables. And on this sphere, where v_k² is fixed, P_k f is such a constant. So I can take this factor and pull it out in front of the operator. Can you follow me? I think it's clear, right? So here's your g_k. Now, inside the logarithm I don't have the g_k, so I put it in and correct for it: I divide by P_k f inside the log, but then I have to add the correction back on. And what do I get? Let me write it out.
It's kind of stupid, but I do it anyway: P_k f log P_k f, right? Now look, this operator acts on this function here only — I really have to think of it this way. So what you're doing, you're taking (n−1)(I − Q_{n−1}^{(k)}) g_k and you integrate it against P_k f log P_k f, right? That's the way I should think about it. Now look: on the sphere S^{n−2}(√(n − v_k²)), this operator is self-adjoint. So therefore I can push it back onto the other factor. So let me write this, if you like, as the inner product, with dμ(v_k) outside: here I have P_k f log P_k f paired with (n−1)(I − Q_{n−1}^{(k)}) g_k, and this is the inner product on the sphere S^{n−2}(√(n − v_k²)), right? That's the way I can look at this. Now you see, P_k f log P_k f is a function which on this sphere is just a constant. So therefore when you push this self-adjoint operator over onto it, you get zero — (I − Q_{n−1}^{(k)}) applied to a constant is zero. So this whole correction term is just not there, all right? So let me clean up the board. So this is therefore zero. That's important, okay? So we get a fairly clean formula. Let me erase this now. We get a fairly clean formula, namely that this D(f) can be written in this fashion. Now we can apply induction. So this we don't need anymore. Induction hypothesis: D(f) equals 1/(n−1) times the sum, k equals 1 up to n, of the integral of P_k f against this gadget, and that is greater or equal to what? Well, I think of it again as dμ(v_k) — I split the integral up — P_k f, and here I have the integral over S^{n−2}(√(n − v_k²)). And what do we have in here? By the induction hypothesis, this is greater or equal to γ_{n−1} times the entropy of g_k. That's the induction hypothesis. So this is γ_{n−1} ∫ g_k log g_k. And now I can put things back together again. What do I get? I get the γ_{n−1}, a 1/(n−1), the sum k equals 1 to n, and what is here? Well, here I have the integral again over my whole sphere — I put the two integrations back together. And what is P_k f times g_k? That's my f.
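An attempt, under the same notational assumptions as above, to record the "fairly clean formula" and the induction step in one display; the constant-term cancellation is exactly the self-adjointness argument from the lecture.

```latex
% Write f = g_k \, P_k f, so \log f = \log g_k + \log P_k f. The \log P_k f
% piece dies: push the self-adjoint operator onto P_k f \log P_k f, which is
% constant on S^{n-2}(\sqrt{n - v_k^2}), and (I - Q_{n-1}^{(k)})\,\mathrm{const} = 0.
% What survives is
\[
  D(f)
  \;=\;
  \frac{1}{n-1}\sum_{k=1}^{n}
  \int d\mu(v_k)\,(P_k f)(v_k)\,
  \Bigl\langle (n-1)\bigl(I - Q_{n-1}^{(k)}\bigr) g_k,\; \log g_k
  \Bigr\rangle_{S^{n-2}} .
\]
% The induction hypothesis D_{n-1}(g) \ge \gamma_{n-1} S(g), applied on each
% sphere S^{n-2}(\sqrt{n - v_k^2}) to the density g_k, then gives
\[
  D(f)
  \;\ge\;
  \gamma_{n-1}\,\frac{1}{n-1}\sum_{k=1}^{n}
  \int_{S^{n-1}(\sqrt{n})} f \,\log\frac{f}{P_k f}\; d\sigma_n ,
\]
% using P_k f \cdot g_k = f to recombine the two integrations.
```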
And now I have here a log g_k, which I write as log of f divided by P_k f. That's what you do; it's fairly elementary so far. You see, if I did not have this P_k f here, this would be fantastic already, right? But I have this P_k f down here, and that's sort of the killer in this business. What do we get? We get that the dissipation is greater or equal to — now let's see what we get. First of all, you have an ∫ f log f here, which you sum up n times and divide by n − 1. That's your γ_{n−1} · n/(n−1) · S(f). That's clear, right? That's only the first term. But now you have this P_k f, right? And it comes with a minus: you get a γ_{n−1}, you get a 1/(n−1), and then you have a sum, k equals 1 up to n, of ∫ f log P_k f, okay? Now notice P_k f is a function which depends only on v_k, and we integrate here over the whole sphere. That's the same as saying that I can first average f over all rotations which keep the v_k axis fixed — so I can actually stick a P_k in front of the f here for free. Okay, so let's stare at this. Maybe we should do this next time. What would be a reasonable estimate for that? A reasonable estimate, when you think of probability distributions on R^n, would be that this whole sum Σ_k ∫ f log P_k f is at most the integral of f log f, okay? That would be nice; that would be absolutely wonderful. Because you see, if this were the case, then of course you would be in fantastic shape, because here you would get an (n − 1)/(n − 1) — that's a 1 — times γ_{n−1}. So you would get that γ_n is greater or equal to γ_{n−1}. And now you go down the chain and you end up with γ_2; γ_n would be greater or equal to 2, and you would be happy. It turns out that if this integral is an integral over R^n, the estimate is true — I will prove this to you next time. This integral, however, is an integral over the sphere, and then it's wrong. However, what is true is Σ_k ∫ f log P_k f ≤ 2 ∫ f log f. And that factor 2 is the bad news, and it has to do with the sphere.
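Side by side, the wished-for estimate and the one that actually holds; this is my reconstruction of the two inequalities as stated in the lecture (the first is the Euclidean subadditivity-of-entropy statement, to be proved next time; the second is the sharp spherical version).

```latex
% Wished-for estimate: true when the integrals are over R^n with respect to a
% product reference measure, but FALSE on the sphere:
\[
  \sum_{k=1}^{n}\int f \log P_k f \; d\sigma_n
  \;\le\; \int f \log f \; d\sigma_n .
\]
% What is true (and sharp) on S^{n-1}(\sqrt{n}) carries an extra factor 2:
\[
  \sum_{k=1}^{n}\int f \log P_k f \; d\sigma_n
  \;\le\; 2 \int f \log f \; d\sigma_n .
\]
% Feeding the second estimate into the lower bound for the dissipation:
\[
  D(f)
  \;\ge\;
  \gamma_{n-1}\Bigl(\frac{n}{n-1} - \frac{2}{n-1}\Bigr) S(f)
  \;=\;
  \gamma_{n-1}\,\frac{n-2}{n-1}\, S(f).
\]
```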
That's a new kind of Young's inequality on the sphere, which I'm going to try to explain next time. And it's sharp — you cannot improve this factor 2. And you see, when you accept this factor 2, what you get is that D(f) is greater or equal to γ_{n−1} times (n − 2)/(n − 1) times S(f). Put differently, we learn that γ_n is greater or equal to (n − 2)/(n − 1) times γ_{n−1}. And you see, now you take this recursion with its initial condition, and it is extremely easy to solve. You get precisely Villani's theorem: this implies γ_n ≥ 2/(n − 1). By the way, I should say Villani didn't have this constant 2/(n − 1); Cédric just had a constant divided by n. But this is what you get. So let's talk about this next time — these are interesting new topics. It's also related to the Brascamp–Lieb inequality, which I will try to explain, and it will be very helpful afterwards when we go on. Okay? Thank you for your attention, and I'm sorry I was a little bit over time.
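The recursion really is extremely easy: the product telescopes, so with the initial condition γ_2 = 2 (as suggested by the 2/(n − 1) in the lecture) one gets γ_n = 2/(n − 1) exactly. As a quick sanity check, here is a short script, assuming that initial condition, which iterates the recursion in exact rational arithmetic and compares with the closed form:

```python
from fractions import Fraction

def gamma(n):
    """Iterate gamma_m = (m-2)/(m-1) * gamma_{m-1} from gamma_2 = 2, exactly."""
    g = Fraction(2)  # assumed initial condition gamma_2 = 2
    for m in range(3, n + 1):
        g *= Fraction(m - 2, m - 1)  # one step of the recursion
    return g

# The product telescopes, so gamma_n = 2/(n-1):
for n in range(2, 50):
    assert gamma(n) == Fraction(2, n - 1)
print(gamma(10))  # -> 2/9
```

Of course this only checks the bookkeeping of the recursion, not the entropy estimates that produce it.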