The last day of the conference — people are exhausted and there's been partying. Okay. Oh yeah, you had your celebration yesterday. Yeah, at lunch. So now you're officially a member again of Texas? Yeah, since yesterday. When do you go back? In August. Okay. They start teaching in August too. Now that's early. So, what do you think, wait five minutes? Whatever you like. I mean, I'm flexible. I might then go over five minutes. Yeah. Is this still a problem? Yeah. Okay.

So welcome to the last morning of the conference and to the last lecture of Michael Loss. Thank you, Francesco. I wrote down a few things which we talked about last time. The first model here is this thermostat model, where you have a function of m variables — that's your initial condition — and you have a master equation of the following type. You have these m particles interacting with each other according to the Kac model, totally standard; that's this piece, and lambda is just a constant. And then you have an interaction of these particles with a thermostat, which is an infinite reservoir, a thermal reservoir. The idea there is that you draw randomly a particle from this infinite reservoir, which is in thermal equilibrium. As a physicist, you would say you sample this reservoir, which is distributed according to a Maxwellian distribution at inverse temperature beta; beta is 1 over kT, so that's given. Then you scatter this particle with a particle of your system, and this somehow changes the probability distribution of the system; but the particle which you drew out of the infinite reservoir you discard. So the infinite reservoir never changes: it just stays in equilibrium for the rest of its infinite life. That's the idea.

And then what you can prove — this was a theorem by Bonetto, Vaidyanathan and myself — concerns the relative entropy. The gamma_m is just the Gaussian function; I call the Gaussian function gamma_m because I don't want to constantly write "Gauss". And by the way, here I've chosen beta to be 2 pi in order to simplify life, so I don't have to carry around the normalization factor. It makes life easier. So now you look at what I call the thermal entropy — because what is it? It's the entropy relative to the thermal state, and that turns out to be the equilibrium state. In fact, we can show that the thermal entropy decays exponentially fast to zero, and the rate mu rho is given by this expression here. You notice it only depends on this mu, which is the interaction with the thermostat. Of course, that's to be expected, because the Kac model itself really doesn't have very good entropy production — we learned this from Villani's theorem and from the example of Amit Einav.

Okay. So this is the infinite thermostat. Now you say to yourself: okay, let me instead take a finite reservoir. So I have my system coordinates v_1 up to v_m, and then I have the reservoir coordinates, w_{m+1} up to w_{m+n}. What do you do? You look again at a master equation — this time, of course, a master equation which acts on functions of the two sets of variables, v and w. And my initial condition is again the same initial condition that I had here: the f_0 is the same, but the reservoir I put in the thermal state. That's what my initial condition is. And now you let this thing run, and lots of interesting things happen.
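To fix notation, here is a sketch in formulas of the model just described; the precise normalizations are a reconstruction from the spoken description, not read off the board, so take them as assumptions. The thermostated master equation has the form

\[
\partial_t f \;=\; \frac{\lambda}{m-1}\sum_{1\le i<j\le m}\big(R_{ij}-I\big)f \;+\; \mu\sum_{j=1}^{m}\big(T_j-I\big)f,
\]

where R_{ij} is the Kac average over rotations in the (v_i, v_j)-plane and T_j scatters particle j off a Maxwellian sample that is then discarded. With beta chosen to be 2 pi, so that gamma_m(v) = e^{-pi |v|^2}, the decay statement reads

\[
S_{\mathrm{th}}(f_t)\;:=\;\int_{\mathbb{R}^m} f_t\,\log\frac{f_t}{\gamma_m}\,dv \;\le\; e^{-\mu\rho\,t}\,S_{\mathrm{th}}(f_0),
\]

with rho a constant coming from the thermostat term only, consistent with the remark that the rate depends only on mu.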
The particles of the system collide among each other according to Kac. The reservoir particles also collide with each other. And then you have this interaction of the system particles with the reservoir particles — that's this interaction here, and that's again just a collision. And note that this collision rate has been chosen in such a way that when I take a particle of the system, the rate of collision with any particle in the reservoir is constant, independent of n. The collision rate of a particle in the reservoir with any particle of the system, however, is m divided by n times mu, which is very, very small when you think of n as very, very large. So we think of n as much, much larger than m. Okay? Good.

So now there is a theorem of Bonetto, Tossounian, Vaidyanathan and myself, which says the following — and in a certain sense it's what you would expect. You take the time evolution of this system here, this function. Then you take the thermostat solution and multiply it by a Gaussian, which is thermal, and take the time evolution of that system, which in some sense has the same initial condition as this one. Then the distance between them — the so-called Gabetta-Toscani-Wennberg distance — is bounded above by m divided by n, and the constant is roughly of order one. And this holds uniformly in time. So in other words, when you choose n very large, this system here, which is very complicated, is described by that system, which is fairly simple, uniformly in time. These two things stay close together for ever and ever and ever.

Notice, there's a funny thing about this. What's the equilibrium state here? The equilibrium state is of course a state which has to be rotation invariant. After all, what do you do? You have averages over rotations here, here and here. So when you start with this state, all you have to do is average this state over all rotations in n plus m dimensions, in this huge space. What you get is a function which in general is not a Gaussian. Nevertheless, this inequality is still true, because Gaussians in very large dimensions and spherically symmetric functions are, in this sense, very close — you can estimate this. The spherical average of this function and its replacement by a Gaussian are very, very close in this distance. Okay? So that's the comparison result.

Now here's the question. We know that with the infinite thermostat, the entropy decays exponentially fast. So far we never had any clue what the entropy could do for this finite-reservoir system. So what do I do? You take your solution of this large master equation, which is terribly complicated. Now, you agree, you are not interested in the particles which are in the reservoir — they just stay there, they move around. So what you do is take the marginal of your distribution with respect to w. That gives you a function; I call it again f(v,t). It is of course not the same as the thermostat solution — it's not — but let me call it f(v,t), otherwise I'm running out of letters, okay? So what would you expect? You would expect that the entropy should also decay. Well, it shouldn't decay to zero. Why not? Because the equilibrium is not the Gaussian, and this entropy is always taken relative to the Gaussian. But still, what you would expect is that it should become small relatively fast. That would be reasonable. Why?
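In formulas, again as a sketch with reconstructed normalizations: the finite-reservoir master equation is

\[
\partial_t f \;=\; \frac{\lambda_S}{m-1}\sum_{1\le i<j\le m}(R_{ij}-I)f
\;+\; \frac{\lambda_R}{n-1}\sum_{m< i<j\le m+n}(R_{ij}-I)f
\;+\; \frac{\mu}{n}\sum_{i\le m<j}(R_{ij}-I)f,
\]

with initial datum f_0(v,w) = F_0(v) gamma_n(w), and the uniform-in-time comparison theorem reads

\[
d_{\mathrm{GTW}}\big(f_t,\;g_t\,\gamma_n\big)\;\le\; C\,\frac{m}{n}\qquad\text{for all } t\ge 0,
\]

where g_t solves the infinite-thermostat equation with the same initial datum F_0 and C is a constant of order one.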
Because you say: this system seems to be well approximated, in this metric, by that system; that system has fast entropy decay; so this entropy should also decay — maybe not to zero, but it should get small fast, okay? So, here's a theorem. This theorem has not been published yet, but it's about to be written up: Bonetto, Geisinger, Ried and myself. Geisinger and Ried are two German graduate students; they visited Georgia Tech, and during that time we cooked up the following theorem, a bound on S_thermal of f(·,t). So remember what I do: I take the solution, I integrate against dw, I plug it in here, and I divide by gamma_m, which is the Gaussian in m variables. And remember, the idea is that this should become small fast.

Here's the theorem. This is less than or equal to m divided by n plus m, times the entropy of the initial condition — that's already good news, because when n is very, very large compared to m, this is a tiny fraction. What's left? Plus n divided by n plus m — well, that looks like bad news, but what you have is e to the minus mu rho times, I just have to get the exponent right, (n plus m) over n, times t, again times the entropy of your initial condition. That's the result. So you see, it's kind of interesting: the mu rho is precisely this number here from the thermostat problem. And by the way, these are real, honest constants — this is really a one here and that's it; that's the computation, okay? So now you see, when you let n go to infinity, what happens? This term goes to zero, this factor goes to one, and this exponent goes to mu rho times t. So you replicate precisely the thermostat result. Now, Ranjini Vaidyanathan is a very smart lady: she really figured out that this is the best possible power here. And that's non-trivial, huh? So you know that the rate we proved is sharp. In a way, it's kind of surprising that you can get sharp rates for such a complicated system, right? And now, for the remaining time, I would like to explain to you how you might go about proving something like that, okay? Things can get a little bit complicated, so what I'm going to do in my talk is give you the first steps of the approach, and when the formulas get large, I will use slides to explain the rest of it to you, okay?

All right. So now let's compute. You do something outright crazy. What you're going to do is take this L — remember, you have to compute e to the Lt — and you just expand it in a power series. Brute force. This is what physicists do, right? And then you think afterwards about what you can do with it. So let me set up the L once more. Notice that here I always subtract the identity, right? And I can actually wipe out the identity terms — but when I wipe them out, then of course I have to correct things again. And by the way, the n in this theorem just has to be bigger than m; it doesn't have to be large. The theorem is still true, okay? So then what you have to do is subtract a certain constant lambda. And what is this lambda? Well: lambda_S divided by m minus 1 — and how many terms do you have in here? m choose 2 — so that's just lambda_S times m over 2. Plus lambda_R times n over 2. And how many interaction terms do you have? Well, you have m times n pairs, each at rate mu over n; the n cancels, so you get mu times m. So this is my lambda, right?
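Written out, the bound of the theorem is the following; the exponent is reconstructed from the n to infinity limit just described and from the m = 1 computation later in the lecture, so treat the exact form as an inference:

\[
S_{\mathrm{th}}\big(f(\cdot,t)\big)\;\le\;\Big(\frac{m}{n+m}\;+\;\frac{n}{n+m}\,e^{-\mu\rho\,\frac{n+m}{n}\,t}\Big)\,S_{\mathrm{th}}(f_0),
\]

which at t = 0 reduces to equality with S_th(f_0), and as n goes to infinity recovers the thermostat decay e^{-mu rho t} S_th(f_0). Counting the collision terms as just described gives

\[
\lambda \;=\; \lambda_S\,\frac{m}{2}\;+\;\lambda_R\,\frac{n}{2}\;+\;\mu\,m .
\]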
And then you can go and expand this thing in a power series. And let me be very generous — you see, that's the nice thing about the video: I can write the formula now, and afterwards I can wipe things out and fill things in, because it's recorded, right? That's kind of nice; I don't have to repeat the formula constantly. So: a sum over k from 0 to infinity of lambda t to the k divided by k factorial, and then you have a sum over indices alpha_1 up to alpha_k — I'll explain what these are — with weights lambda_{alpha_1} through lambda_{alpha_k} and operators R_{alpha_1} and so on. Okay? So what did I do? I looked at my L. Now let me expose it a little bit better — forget about this, I'll wipe this out, we don't need this item. So here's my L. There is an e to the minus lambda t, which I forgot, and then I have the lambda t to the k. What do I do? I pull a lambda out in front of each piece of the L. So I have lambda_S divided by (m minus 1) times lambda. Then I have lambda_R divided by (n minus 1) times lambda. And I have mu divided by n times lambda, right? Let's just pull this out. Now, what's nice about these numbers? When I sum them — m choose 2 times this, n choose 2 times that, and m times n times that — I get 1. These are convex weights. Okay? After all, that's exactly how we got lambda: I divide by the total sum, so these are convex weights, and these are the weights in here. And what do I do? I have to sum over all possible pairs which show up in the collisions. There could be a collision which corresponds to these guys, collisions which correspond to those and those, and they come up in all possible combinations. That's what my sum is. Okay? So it's, well, just terrible at the moment, right? Good.

And now what you do is apply this to a function, and then you have to ask yourself what these R_alphas are doing. Well, what are they doing? Let me first make a simple transformation, which is always convenient. I define this h_0 in the following fashion. Why am I doing this? Look, when I go to these variables, my function f_0 of v is just the Gaussian gamma_m of v times h_0 of v. Okay? Now let's look at our initial condition. Our initial condition f_0 of v and w is this function times the Gaussian gamma_n, so it is gamma_m(v) times h_0(v) times gamma_n(w). Agreed? That's what it is, because my initial condition was given in this form. So now what do you realize? You realize that this product of Gaussians is a Gaussian in m plus n variables — a radial function. And a radial function, when you apply the Kac model L to it — well, you don't see it. It's invariant; it doesn't move. Because what does Kac do? Kac just averages over rotations, and rotation-invariant functions don't move. Okay? So what does this mean? It means that when I stick my f_0 of v and w in here, I can replace this f_0 by h_0 of v times this Gaussian — and the Gaussian I can push through the evolution and pull out in front. Agreed? That's what you get. Now, what am I supposed to do next? Let me make sure I get this straight. What is our little f of v and t? Well, it's the integral of f(v,w,t) dw. Okay? But there is one Gaussian, the gamma_m, which is not affected by the w-integration. It just comes out; you can pull it out here.
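As a formula, the expansion just described is

\[
e^{\mathcal{L}t} \;=\; e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^{k}}{k!}\;\sum_{\alpha_1,\dots,\alpha_k}\lambda_{\alpha_1}\cdots\lambda_{\alpha_k}\,R_{\alpha_1}\cdots R_{\alpha_k},
\]

where each alpha runs over the pairs (i,j) and the weight lambda_alpha equals lambda_S/((m-1)lambda), lambda_R/((n-1)lambda) or mu/(n lambda) according to the type of pair; by the computation of lambda above, the weights sum to one, so this is a convex combination of products of collision operators. The initial condition factorizes as

\[
f_0(v,w)\;=\;\gamma_m(v)\,h_0(v)\,\gamma_n(w)\;=\;\gamma_{m+n}(v,w)\,h_0(v),
\]

and since gamma_{m+n} is radial it is invariant under every R_alpha, so it can be pulled out in front of the evolution.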
And then you have the integral of e to the Lt applied to what? Applied to this function h_0, which gives you a function of v and w, and then you integrate against the Gaussian. Okay? That's what you get. You see, this Gaussian gamma_n is not affected anymore by the evolution, because the whole Gaussian factor came out in front of the evolution. It's out here. All right. And now you see this f(v,t) is written as gamma_m of v times a function of v and t, which I will naturally call h(v,t). Agreed? This makes life simple.

Let's look at the entropy. What does the entropy look like? The entropy is — where did I write it? — I put in f(v,t), the logarithm of f(v,t) divided by gamma_m. You see, this gamma_m drops out of the ratio. So therefore the entropy you're worrying about, the thermal entropy, is equal to the integral of h log h against gamma_m. That's it, right? Good. So I think we have simplified things a little bit: no ratios anymore, the Gaussians come in a nice place. And in particular, we will see that this Gaussian is absolutely lovely — we will enjoy it hugely afterwards.

So let me now write down what we have to work out, and let me introduce some silly notation. Look, this h_0 is really only a function of v, and that makes life confusing, because when you apply the e to the Lt, you actually get a function of v and w, right? So you really have to think of this h_0 as a function on the whole space of v and w. So I'm going to write this h_0, just as a reminder, as h_0 composed with P, applied to (v,w). And what is P? P is just a projection. I know it's a little bit silly, but I do this only to remind ourselves that these are really functions on the whole space, and this L really acts on everything, okay? We have to keep this in mind. Okay, good.

All right, so with all these preparations, what does this thing look like now? I can just continue here. Our evolution acts on h_0 composed with P — this guy just acts on that one — and over here I just put h_0. So now let's go and start writing out what these things are. And again, it's getting even worse — this is already pretty bad, now it's getting worse. This is the integral of d theta_1 rho(theta_1), up to the integral of d theta_k rho(theta_k), right? And then what you have to do is stick in the rotations, and what you have is the product — I mean, can you read this? No, you can't, huh? Let me write this better. So: the product over j from 1 to k of r_{alpha_j} of theta_j, applied to the vector (v,w), okay? You agree? That's what it is. And you see, it looks pretty lousy, right? But notice: this product is a rotation, okay? And this rotation acts on all these vectors, and afterwards what you do is take the projection. So that's, in some sense, what you have to compute — that's the stuff you have to plug in and work out, okay?

So now let me point out a few things, which is kind of interesting. When you look at this rotation, it's a good idea to write it in blocks A_k, B_k, C_k and D_k. What is this A_k? A_k is the m by m part of the matrix. So this is the m part and the n part, here's an m part and here's an n part: B_k is m by n, C_k is n by m, and D_k is n by n.
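Collecting what we have so far, and writing P for the projection (v,w) to v:

\[
f(v,t)\;=\;\gamma_m(v)\,h(v,t),\qquad
h(v,t)\;=\;\int_{\mathbb{R}^n}\big(e^{\mathcal{L}t}(h_0\circ P)\big)(v,w)\,\gamma_n(w)\,dw,
\]

so the thermal entropy becomes

\[
S_{\mathrm{th}}\big(f(\cdot,t)\big)\;=\;\int_{\mathbb{R}^m} h\,\log h\;\gamma_m(v)\,dv,
\]

and each term of the expansion contains the product of rotations

\[
\prod_{j=1}^{k} r_{\alpha_j}(\theta_j)\,\binom{v}{w},
\]

a rotation of the full (m+n)-dimensional vector, followed by the projection P.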
But now notice, this A_k depends in a terrible way on the alphas and the thetas and everything, okay? Now, what do you know? You know that this is a rotation matrix, because it came as a product of rotations, right? Remember what these are: rotations in the plane specified by the pair alpha_j, and you just stack them up. So what you notice is that A_k A_k transposed plus B_k B_k transposed must be the identity in m dimensions. I mean, everybody knows that, right? That's essentially part of the definition of a rotation, and about the other blocks I actually don't care — I just ignore them, okay? Good. And notice, by the way, the B_k is not a simple matrix, right?

Now what do you do next? Why do I make this decomposition? I make this decomposition because of what the P does. The P takes this vector — I multiply this matrix by this vector — and it just picks out the top m components. All right? So therefore, and this is the point about the P, I can replace this mess by what? By h_0 of A_k v plus B_k w. And remember, A_k and B_k depend on everything — on alpha, on theta; let me indicate this. Agreed? That's just true. So let me just check. Now what do I do next? Next I do the following. I take this whole mess — I mean, you agree, so far it looks terrible — and I integrate it against e to the minus pi w squared dw. Remember? These are the joys of the Gaussian. All right? From the point of view of w, all the rest are just constants. All I do is take this, multiply by e to the minus pi w squared, and the integral is over R^n. Okay?

And now we can start simplifying. Look, what is this B_k? B_k is an m by n matrix, so B_k w is really just a vector with m components, all right? But you see, the w runs over R^n. And now you just think a little bit. You can actually factor this matrix B_k as (1 minus A_k A_k transposed) to the power one half, times — let's call it beta. And this is a rotation — sorry, it's an isometry. And what kind of isometry is it? It maps the whole space R^n onto the subspace R^m. So therefore, when you split your coordinates into those in this subspace and those perpendicular to it, the leftover variables don't see this expression; the Gaussian is normalized, you just integrate them out. So what you get by this little computation — and this is a nice thing — is h of A_k v plus (identity minus A_k A_k transposed) to the one half times w, against e to the minus pi w squared, and the integral is now over R^m.

And now I think some of you guys might actually see what's going on. What does this remind you of — for the experts? What's the point about the Ornstein-Uhlenbeck semigroup? Right? What you have here is precisely an Ornstein-Uhlenbeck semigroup. But what is your problem? Your problem is that these are matrices. This is your problem: these are matrices. Now, what's the point about the Ornstein-Uhlenbeck semigroup? There is a fantastic theorem which goes back to Edward Nelson, and let me write Nelson's theorem here — we don't need this anymore. So N_t is this gadget here: the integral of h of (e to the minus t, v, plus square root of (1 minus e to the minus 2t), w), against e to the minus pi w squared dw. Theorem (Nelson).
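In block form, the reduction just performed is the following (a sketch; the factorization ignores possible kernels of the blocks, which do not affect the Gaussian integral):

\[
\prod_{j=1}^{k} r_{\alpha_j}(\theta_j)\;=\;\begin{pmatrix} A_k & B_k\\ C_k & D_k\end{pmatrix},\qquad
A_kA_k^{T}+B_kB_k^{T}=I_m,\qquad
B_k=\big(I_m-A_kA_k^{T}\big)^{1/2}\beta,
\]

where beta is an m by n matrix with beta beta^T = I_m, so that

\[
\int_{\mathbb{R}^n} h\big(A_kv+B_kw\big)\,e^{-\pi|w|^2}\,dw
\;=\;\int_{\mathbb{R}^m} h\big(A_kv+(I_m-A_kA_k^{T})^{1/2}w\big)\,e^{-\pi|w|^2}\,dw .
\]

This is exactly an Ornstein-Uhlenbeck average, except that the scalar e^{-t} is replaced by the matrix A_k. Nelson's kernel, in the same notation, is

\[
(N_t h)(v)\;=\;\int h\big(e^{-t}v+\sqrt{1-e^{-2t}}\,w\big)\,e^{-\pi|w|^2}\,dw .
\]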
And I cannot really pin down the date — I don't know; in those days things didn't really work that way. By the way, some people had remarks about Nelson's theorem and published their remarks before Nelson published his theorem, but it's very well known that it's due to Nelson. When you take N_t h in the q-norm — and by the q-norm I always mean the q-norm with respect to the Gaussian measure — that is less than or equal to the p-norm of h. Well, so far not too interesting, but what it says in particular is that p minus 1 equals e to the minus 2t times (q minus 1). You see, that's interesting. And it's actually a hard theorem. Why? Because what it says is that the q here can actually be larger than p. It cannot be too large — once q minus 1 exceeds what this relation allows, the operator is not even bounded — but when this relation is satisfied, you have this estimate, and the constant 1 is sharp.

Now what you do with that theorem is the following little corollary. You get: S of N_t h is less than or equal to e to the minus 2t times the integral of h log h against e to the minus pi v squared dv. I just wrote out the entropy, to remind you again that this is the entropy S of h; I write it this way, right? So when you stick in Nelson's kernel, you get this contraction, and then you get some remainder. And this remainder looks like this: the integral of h, times the log of the integral of h — and of course the integrals are always with respect to the Gauss measure. Now you notice: when the h is normalized, this remainder is zero, so you don't worry about it, okay?

So that's what you have. The proof is disgustingly easy once you accept the theorem. Why? Because you see, when you take the L^1 norm — sorry, I forgot to put an h here — when you take the integral of N_t h, you just get the integral of h. So you actually have equality when q equals p equals one. So then what you do is: you subtract the L^1 norms from both sides, you divide by q minus one, you take the limit q to one — and don't forget this relation here — and out pops this inequality. I'm not going to get further into this, all right? So this is the hypercontractive estimate. It's actually a theorem out of quantum field theory, believe it or not. That's what instigated this whole hypercontractivity business. Because, you see, you can generalize these theorems to infinite dimensions, basically without giving up on the constants: one stays one, no matter how many times you tensorize.

Okay. So now, how do we use this? What I would like to do now is keep this formula and show you how one can prove our theorem, at least in the case m equals one and n, of course, bigger than one. Right? It keeps life simple. So in this case, in which situation are we? Well, in this case the A_k is a one by one matrix. Right? Good. And now you see also that when you take the entropy of that guy, all these weights form convex combinations: the sum of these weights is one, and these theta-integrals also integrate to one. So by convexity of the entropy, I can simply push the entropy inside, onto that gadget. Now, in one dimension, what is my situation? In one dimension I can use Nelson — remember, the h is now a function of only one variable, normalized in the Gauss measure.
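For the record, Nelson's theorem and its entropy corollary, as just derived:

\[
\|N_t h\|_{L^q(\gamma)}\;\le\;\|h\|_{L^p(\gamma)}\qquad\text{whenever}\quad p-1 \;=\; e^{-2t}\,(q-1),
\]

and, differentiating at q = 1 as described (the prefactor of the remainder is a reconstruction of that limit, not read off the board),

\[
S(N_t h)\;\le\; e^{-2t}\,S(h)\;+\;\big(1-e^{-2t}\big)\Big(\int h\,d\gamma\Big)\log\Big(\int h\,d\gamma\Big),
\qquad S(h)=\int h\log h\,d\gamma .
\]

So for h normalized in L^1(gamma) the remainder vanishes, and the entropy contracts by the factor e^{-2t}.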
So I can apply Nelson's theorem — and by the way, it doesn't matter whether this prefactor here is positive or not. It's absolutely clear that these numbers have absolute value at most one, because I have this relation, so their squares are between zero and one. And I get that this whole mess, in the case when m equals one, is less than or equal to the integrals d theta rho(theta), of a_k of alpha and theta squared, times the entropy of my initial condition. Agreed? That's what you have. And now, at this moment, there's absolutely no reason to rejoice, right? Because you have absolutely no clue, for the moment, what this sum could possibly be.

Now, here's a little idea. It turns out you can actually compute the sum explicitly, okay? And how do you do that? Let me just — yeah, that's right. Okay. So we have to compute the sum. Well, here's the idea: you choose h_0 of v to be v squared. Okay? So now you have two things. The first one is the formula — maybe I'll write this formula down somewhere and stash it away. It was: e to the minus lambda t, times the sum over k from 0 to infinity of lambda t to the k over k factorial, then the sum over all pairs with their weights, then the integrals over the rhos, and finally h_0 of A_k v plus (1 minus A_k A_k transposed) to the one half, w, against e to the minus pi w squared. That's just what it is. So this is the formula. So now what are we going to do? We're going to choose h_0 to be v squared and plug it into our formula. Then what do you get? In our case, when these things are one-dimensional, you get the square of that gadget. You get the a_k squared v squared, because that's this guy — the cross term integrates to zero against the Gaussian — and from the other term you get (1 minus a_k squared) times the integral of w squared against the Gaussian, which is a one-dimensional integral, and that's 1 over 2 pi. So now you get this sum. And you see something: this sum here, together with this a_k squared, is precisely what you want. So if you have a chance to compute this thing in a different way, then you're home free, right?

And the idea is actually quite simple. What you do is the following: the integral of e to the Lt applied to v squared, against e to the minus pi w squared dw — that's precisely this sum. This whole mess is precisely that. And now you differentiate. What you get is the integral of e to the Lt with an L in front, applied to v squared, against the Gaussian. And now I propose to you as an exercise to compute what this is. When you do this computation more or less carefully — let's call this function u of t; it depends, of course, also on v — what is u at zero? When t equals zero, you just get v squared integrated against the Gaussian in w, which is v squared. When you differentiate this out, it's a completely elementary computation. You learn that du/dt satisfies a differential equation, and this differential equation looks like the following: mu rho times (1 over 2 pi, plus v squared over n), minus mu rho times (n plus 1) over n, times u. That's what you get. I really recommend you do this computation. It's extremely simple. Why? Because all you have to do is apply the L to this v squared; there are a bunch of integrations in theta, and once you do this computation, out pops this formula.
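For m = 1, the computation just described reads as follows (a sketch, with the constants reconstructed to be consistent with the solution given next): set

\[
u(t,v)\;:=\;\int \big(e^{\mathcal{L}t}\,v^2\big)(v,w)\,e^{-\pi|w|^2}\,dw,\qquad u(0,v)=v^2 ,
\]

which equals the weighted sum of a_k^2 v^2 + (1-a_k^2)/(2\pi) over collision histories. Differentiating in t and applying L to v^2 yields the linear ODE

\[
\frac{\partial u}{\partial t}\;=\;\mu\rho\Big(\frac{1}{2\pi}+\frac{v^2}{n}\Big)\;-\;\mu\rho\,\frac{n+1}{n}\,u .
\]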
Now, this is a linear ordinary differential equation, so everybody knows how to solve it, and what do we get? We get

\[
u(t,v)\;=\;\frac{1}{2\pi}\,\frac{n}{n+1}\Big(1-e^{-\mu\rho\frac{n+1}{n}t}\Big)\;+\;\Big(\frac{1}{n+1}+\frac{n}{n+1}\,e^{-\mu\rho\frac{n+1}{n}t}\Big)\,v^2 .
\]

That's perfectly reasonable, because that exponent is your rate of decay, right? So therefore you know that this coefficient of v squared must be our sum. Why? Because that is precisely what you get there. And you notice this is precisely what I claimed in the theorem, for m equals one. Okay? So this works pretty well.

And so now you say: well, now of course I can do this in general, it's absolutely obvious. Then you do it for m equals two — and you run into a wall. You see, I gave you a little presentation of Brascamp-Lieb theorems, so you're pretty sure by now that these Brascamp-Lieb theorems must be used somewhere in these arguments, right? And that's where it's getting a little bit complicated, so I decided to put up some slides about that. Okay?

So remember, this is the guy up here which corresponds to Nelson's estimate — I mean, to which we have to apply Nelson's estimate. But you see, this is very complicated, right? The A_k is now an m by m matrix. So what you're going to do is what is called a singular value decomposition. In other words — you see, there might be some singular values which are zero; I keep them, okay? So the U is orthogonal, and Gamma is a matrix which has non-negative elements on the diagonal. Here they are. Okay? And remember, all of these are functions of alpha and theta, of all these parameters. Good.

Here's your theorem. You apply Nelson's estimate — this one here; well, I can't see it; here — repeatedly. You have to repeat it. Why? Now you see, when you iterate this estimate, how do you do it? Well, this function has the A_k, which is U Gamma V transposed. Now, the V transposed just hangs together with the v: if you change variables when you compute the entropy in terms of v, this is just an orthogonal transformation, everything is rotation invariant, and this V drops out. What does not drop out is the U, and the U sticks together with the h. Okay? So how does it look? I get this h composed with U here, and I take marginals. Why do I have to take marginals? Because, you see, when I apply Nelson's theorem to a function of several variables where I have different singular values, I cannot assume that the marginals are normalized. I cannot assume that. So what I have to do is treat the singular values term by term, and that gives you this huge mess.

So what is it? It's a sum over all subsets sigma of the set 1 to m. When an index i is in the complement, you put the gamma_i squared — one of the singular values squared — and when an index j is in the set, you put the (1 minus gamma_j squared). Here, you take the marginal in the variables which correspond to sigma; you write this — and this is a complicated function — you multiply by the log, and you integrate in the remaining variables, over R to the sigma-complement. So this estimate you can get. It is just a repeated application of Nelson's theorem, but you have to be a little bit careful. Here I've written out what the h is — h composed with U — and the h_sigma here is written out as this integral.
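In symbols, and as best one can reconstruct the slide (the exact bookkeeping of the subsets is an assumption): write the singular value decomposition

\[
A_k\;=\;U\,\Gamma\,V^{T},\qquad \Gamma=\mathrm{diag}(\gamma_1,\dots,\gamma_m),\quad 0\le\gamma_i\le1,
\]

with U, V orthogonal; the singular values lie in [0,1] because A_k is the upper-left block of a rotation. The iterated Nelson estimate then takes the schematic form

\[
S\;\le\;\sum_{\sigma\subset\{1,\dots,m\}}\;\prod_{i\in\sigma^{c}}\gamma_i^{2}\,\prod_{j\in\sigma}\big(1-\gamma_j^{2}\big)\;
\int_{\mathbb{R}^{\sigma^{c}}} h_\sigma\,\log h_\sigma\;d\gamma_{\sigma^{c}},
\qquad h_\sigma\;=\;\int_{\mathbb{R}^{\sigma}} h\circ U\;d\gamma_{\sigma},
\]

which for m = 1 collapses to the single term gamma^2 times S(h), matching the earlier computation.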
Now you notice something — here is an important point. Look, what is this? This is actually equal to that. You see what the difference is: here I integrate over R to the sigma-complement; there I integrate over all of R^m. Now, what do I do? This is the trick. You see, I can simply complement these variables to variables in all of R^m — it's just a factor of 1 when you integrate, because these functions only depend on the sigma-complement variables. So I can just remove the sigma here, right? And then, finally, I can move the U over onto this function here, and that's what I get: this complicated expression, okay? I mean, I understand that you're not going to follow me in every detail, but you want to get the gist of it, okay? All right, next.

So this was Nelson's theorem, and this is the right-hand side of Nelson's theorem: written as a sum over all the subsets, integrated over all the rhos. It's a huge mess, right? And what I want is that this whole mess is less than or equal to the entropy. That's what I really would like to have — maybe with a factor, okay? That's the goal, you agree? So how do I do that? Well, here comes Brascamp-Lieb. I've just written it down again. You have your Hilbert spaces, K of them, with dimensions d_i. You have these maps B_i going from R^m to the H_i's, and you know that B_i B_i transposed is the identity on these Hilbert spaces. Now, what you also need are these numbers c_i such that, when you look at this matrix sum, the c_i B_i transposed B_i add up to the identity, right? And what does Brascamp-Lieb tell you? This huge mess here is less than or equal to that mess. This is a nice mess; that is a terrible mess. Good.

So now here's a consequence. Remember, I told you about the Legendre transform of the entropy. If you apply the Legendre transform of the entropy, you get this theorem here. And you notice, here you have the integral of h of v against the sum of the c_i times the log of the f_i's. By the way, this is a theorem which I learned from Eric Carlen and Dario Cordero-Erausquin — this whole circle of ideas.

So now what we're going to do is apply that theorem to this mess. And you see, you have to abstract a little bit. So here's a sum; and this integral, for all practical purposes, is also a sum — you can always approximate such integrals by discrete sums. I wrote d rho, I hope you don't mind; it goes a little bit faster. So you have this stuff. Then you have this remaining thing which we got out of Nelson's theorem. And now, you see, this has precisely the structure which we want. Why? Because the h of v is here. And what is f_i of B_i v? Well, this turns out to be this logarithm — sorry, this was wrong; here should be the logarithm of h — so this is going to be my correspondence with the logarithm here. This integral is one. Why? Because this thing is a marginal: I can complement it to a full integral, and it drops out because everything is a rotation. The Hilbert spaces correspond to these subspaces — where are they? here — onto which these guys project, and the B_i's correspond to these operators here. That's where they are. And now you have the distinct pleasure of figuring out whether something like this here is true. Let's go back to Brascamp-Lieb. Right? I have to check this relation — this relation is trivial to check. And this relation can be checked.
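In display form, the geometric Brascamp-Lieb inequality on the slide is: given maps B_i from R^m onto Hilbert spaces H_i with B_i B_i^T = id on H_i, and numbers c_i >= 0 with sum_i c_i B_i^T B_i = id on R^m, then

\[
\int_{\mathbb{R}^m}\prod_{i=1}^{K} f_i\big(B_i x\big)^{c_i}\,d\gamma_m(x)\;\le\;\prod_{i=1}^{K}\Big(\int_{H_i} f_i\,d\gamma\Big)^{c_i},
\]

where the Gaussian version follows from the Lebesgue one because |x|^2 = sum_i c_i |B_i x|^2 under the identity condition. The entropy version obtained by Legendre transform (the circle of ideas of Carlen and Cordero-Erausquin), for h >= 0 with integral one against gamma, reads

\[
\int \sum_{i} c_i \log f_i\big(B_i v\big)\,h(v)\,d\gamma(v)\;\le\; S(h)\;+\;\sum_{i} c_i \log\int f_i\,d\gamma .
\]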
I have to sum over all sigmas, and I have to integrate over all the rhos. Okay? That's what I have to do. And it turns out that this mess equals the identity times a number. And what is this number? Right? Now, why is this a miracle? Let me just argue how you can see it. If you take this integral here, then you see these factors have nothing to do with the sum on sigmas, so you can write it this way. And now you notice, when you look at this matrix here: this matrix is a diagonal matrix which has only zeros and ones. So let's pick a place where there's a one — say the place (1,1). Okay? Either there's a zero or a one there. If it's a one there, that means I have to sum over all sigmas which do not contain 1 — and you can actually do that sum, and what you get is precisely gamma_1 squared. Now, what is this, assembled as a matrix? By the singular value decomposition, this is nothing but A_k A_k transposed. And now you realize again where this A_k A_k transposed came from: it came from this product of rotations. That stuff you can compute, and it turns out to be precisely this number c_{m,k} times the identity. Okay?

So what does this mean? This means that in this huge mess, when you put in the entropy, you can replace the whole sum by e to the minus lambda t, times the sum of lambda t to the k over k factorial times the c_{m,k}. That's your estimate, times the entropy of your initial condition. Well, every child knows since kindergarten how to sum this series, and what comes out is precisely the theorem which I told you. Okay? So I know this is a little bit brutal, and you can imagine it took us a while to get there. But what I really would like to convey is: you see, when you have a complicated system, it doesn't mean that you cannot compute something pretty explicitly. You really get rates, and you know what's going on. Right?

Now, what you can of course complain about is: wait a moment, this is not really the right theorem. Why? Because what you should be doing is take the entropy relative to the actual equilibrium state. Now, that opens up a whole new can of worms. I think one should be able to do it, but you have to invent a whole slew of new hypercontractive estimates for the relevant semigroups, and then you also have to produce a version of Brascamp-Lieb on the sphere. We have some experience with that. But this would be a totally new project, and at the moment I'm too exhausted — I won't be able to do it. All right? Good.

So I think you got a little bit of the message of how this works, and what I'm going to give you now, for the remaining few minutes, is a few references, okay? The fundamental paper about all of this is of course the one due to Mark Kac from 1956, where he also stated what we call the Kac conjecture, which was then proved by Elise Janvresse. The computation of the gap was done in this paper here. I wrote this one down because it's just some lecture notes, very short, which present the basics in a non-fancy way — sometimes that's very nice to read at the beginning. Now, this whole thing was elaborated in a long paper in Acta Mathematica, where we also compute the gap — or at least prove the Kac conjecture — for momentum-preserving collisions. A nice paper is David Maslen's, who really computes much more about the Kac master equation.
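Schematically, the chain of estimates assembles to the following (a sketch, with c_{m,k} denoting the number obtained by averaging A_k A_k^T over the collision histories of length k):

\[
S_{\mathrm{th}}\big(f(\cdot,t)\big)\;\le\; e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^{k}}{k!}\,c_{m,k}\;S_{\mathrm{th}}(f_0)
\;=\;\Big(\frac{m}{n+m}+\frac{n}{n+m}\,e^{-\mu\rho\frac{n+m}{n}t}\Big)\,S_{\mathrm{th}}(f_0),
\]

where the closed form of the series follows from the same moment computation that produced u(t,v) in the m = 1 case.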
And many of the things I talked about are of course in this beautiful paper of Cédric Villani, whose title is interesting: "Cercignani's conjecture is sometimes true and always almost true". Well, in a way he's right. This is the paper where he actually shows the lower bound on the entropy production, and Amit Einav was the one who then provided the matching upper bound. Now, what I told you is in some sense a preparation for reading some other papers here. These are the really fundamental papers in this business, by Stéphane Mischler and Clément Mouhot. The first one is a nice paper where you get an outline of what this whole thing is — this is the appetizer — and then, when you're really hungry, you can go to the Inventiones paper for the next 147 pages. It's worth doing, actually; it's a very nice paper. This is very fundamental — I would say this is really seminal work.

Now, about the thermostats: we started out with Ranjini in this paper in the Journal of Statistical Physics, where we proved this exponential decay of the entropy. The connection between the infinite thermostat and the finite reservoir was proved in this paper — this is with a graduate student of mine. And if you're interested in this Gabetta-Toscani-Wennberg metric, this is the right paper to look at. It's actually a very interesting paper, because we usually measure things in L^2 or L^p, but there are many other interesting metrics, and they really provide a slew of them.

Brascamp-Lieb: here is the classic paper of Brascamp and Elliott Lieb, with all the sharp constants in Young's inequality, its converse, and its generalization to more than three functions. That started the whole story. Then Elliott produced a huge paper on Gaussian kernels having only Gaussian maximizers. That's actually worth reading again — why? Because for him, e to the ixy is a Gaussian kernel, so he gets all the sharp constants in the Hausdorff-Young inequality, and all these kinds of things just come out of this paper. This is seminal work. Then, the material which I told you — I learned a lot of it from Keith Ball, and this is beautiful, because he uses Brascamp-Lieb inequalities to prove theorems about sections of cubes and related problems. It has had a huge impact on convex geometry, interestingly. You can also look at this paper — these two are very, very nice papers; he writes extremely well; that's really worth reading. Now, I have to say, Keith did it in the rank-one case, where these matrices B_i are rank-one matrices. Frank Barthe, in another terrific paper in Inventiones, proved all this kind of stuff for the arbitrary-rank case. And then Eric, Elliott and I discovered these correlation inequalities on the sphere, and we realized that heat kernels can do a lot of good in proving all these inequalities; we did this in this paper, also for the rank-one case of Brascamp-Lieb, and that was later done in the general-rank case by Bennett, Carbery, Christ and Terence Tao. I would also like to mention the paper of Eric Carlen and Dario Cordero-Erausquin on subadditivity of entropy and its relation to Brascamp-Lieb type inequalities — this is where many of the things I told you show up. And for historical reasons: the very first such papers were by Loomis and Whitney. This was a Brascamp-Lieb type inequality, but of course much, much more elementary.
The one which I showed you is the one you use for the Sobolev inequality, and there is an analogous entropy inequality, which was proved by Han. And you see, this goes way back. Hypercontractive estimates: Edward Nelson had this famous paper, "The free Markov field" — that's a paper about quantum field theory — where he actually proves this inequality. And it was realized that this inequality which I showed you here is closely related to the logarithmic Sobolev inequality: Nelson's theorem implies the sharp logarithmic Sobolev inequality, and conversely. And of course, about logarithmic Sobolev inequalities there's lots of material; I just put this here for historical reasons — these papers of Lenny Gross.

And with that I would like to thank you for your attention. In particular, I would like to thank Francesco for organizing such a terrific conference, and I would also like to thank the supporting staff for their fantastic help in running this place. Thank you.