Okay, so welcome back, everybody. The next speaker is Michael Loss, who will talk about decay of entropy and the Kac master equation.

Thank you very much. Let me first thank Guido, and also Francesco, who is not here, for organizing this wonderful event; this is really a wonderful institution. And of course I would also like to thank them for inviting me, otherwise I would not have been able to come here. Now, I would like to talk a little bit about the few things which I know about this Kac master equation, and most of these things I learned from Eric Carlen and Maria Carvalho, who were among the scientific organizers and who unfortunately cannot be here. So many of the things, really all of the stuff which I am going to tell you, I somehow learned from these two people.

So, the Kac master equation. Notice that I put "simple" in italics; I want to emphasize that it is really an extremely simple model. What is it supposed to describe? It describes collisions of N monatomic gas particles. Of course, this is a very, very difficult problem, so you cut it down to something tractable. The toughest assumption we are going to make is that we talk about a spatially homogeneous situation: I am not talking about a variable density, the density is constant, and I do not see the positions of the gas particles, only the velocities. Moreover, we will use a mean-field description. What does mean field mean? It means that a gas particle collides with every other gas particle at more or less the same rate.
It is not that I would distinguish nearest-neighbor gas particles, particles which are close by, as the ones that collide; but of course that would not make any sense anyway, since we talk about a spatially homogeneous situation. We could also consider particles moving in three dimensions, but the notation would get extremely clumsy and cumbersome, so we follow Kac and assume that the particles only move in one dimension. As soon as you do that, you run into an interesting little conundrum: you would like to conserve energy and you would like to conserve momentum, but when two particles collide in one dimension and both energy and momentum are conserved, either the particles go through each other, keeping their velocities, or they exchange them; not much interesting can happen. So what we decide is this: we conserve the energy, and we do not care about the momentum.

Good. Now the state of the system at any time t is therefore specified by a probability distribution f, which depends on the velocities of the particles, v_1 up to v_N, and of course on the time t. For notational convenience I will abbreviate this vector as v with an arrow on top. So these are the basic assumptions we are going to make. Now, what is the model? We pick a pair (i,j) randomly, and randomly means uniformly; in other words, the probability of picking the particles i and j is 1 divided by N choose 2. Then these two particles, with labels i and j, will collide. How do they collide?
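The one-dimensional conundrum mentioned above is a two-line computation; as a sketch, in the model's notation:

```latex
% Why 1D collisions conserving both invariants are trivial:
% suppose $v_1' + v_2' = v_1 + v_2$ and $v_1'^2 + v_2'^2 = v_1^2 + v_2^2$.
\begin{align*}
v_2' &= (v_1 + v_2) - v_1', \\
v_1'^2 + \bigl((v_1 + v_2) - v_1'\bigr)^2 &= v_1^2 + v_2^2 \\
\Longrightarrow\quad (v_1' - v_1)(v_1' - v_2) &= 0,
\end{align*}
% so either $v_1' = v_1,\ v_2' = v_2$ (the particles pass through each other)
% or $v_1' = v_2,\ v_2' = v_1$ (they exchange velocities).
```

So the only options are passing through or exchanging, exactly as stated.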
Well, we pick a scattering angle randomly, and the probability distribution here is ρ(θ)dθ: that is the probability with which we pick the scattering angle θ. And then we do something which on the surface looks quite silly: we update the velocities. These are the velocities before the scattering, these are the velocities after the scattering, and as you notice, this is just a rotation in the (v_i, v_j) plane. In other words, it is a random rotation. Why do we do that? Because it makes evident that the kinetic energy is preserved: the sum v_i*² + v_j*² equals v_i² + v_j². So that is what we do: we pick a pair (i,j) of distinct indices at random; these two particles collide; we pick a scattering angle at random with the distribution ρ; and we update the velocities by a rotation. By the way, you notice I have not said anything about time yet; I will talk about that later. By repeating this in a random fashion, what you get is a random walk; we call it the Kac walk.

Okay, so let us start deriving some formulas. As I said, the energy is conserved; this is my total energy, I call it E, so |v|² = E. In probability, all you can do in some sense is compute expectation values. So let us take a function φ on the sphere; it is a sphere in R^N, so an (N−1)-dimensional sphere, with radius √E, and φ maps this sphere into the reals. So now, what do we mean by an expectation value?
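As a sanity check, one can simulate the walk directly; here is a minimal sketch in Python, with a uniform angle standing in for a generic symmetric ρ, and a sign convention for the rotation that is my choice, not from the slides:

```python
import math, random

def kac_step(v, rho_sample=lambda: random.uniform(-math.pi, math.pi)):
    """One step of the Kac walk: pick a random pair (i, j), a random
    scattering angle theta ~ rho, and rotate in the (v_i, v_j) plane."""
    n = len(v)
    i, j = random.sample(range(n), 2)   # uniform over the N-choose-2 pairs
    theta = rho_sample()
    vi, vj = v[i], v[j]
    v[i] = vi * math.cos(theta) + vj * math.sin(theta)
    v[j] = -vi * math.sin(theta) + vj * math.cos(theta)
    return v

random.seed(0)
v = [random.gauss(0.0, 1.0) for _ in range(10)]
e_before = sum(x * x for x in v)
for _ in range(1000):
    kac_step(v)
e_after = sum(x * x for x in v)
print(abs(e_after - e_before) < 1e-9)   # kinetic energy is conserved
```

The walk wanders over the energy sphere, but the total kinetic energy never changes, which is exactly the point of updating by a rotation.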
What we are interested in is the following: suppose you know the particles have velocity vector v, and they undergo a collision. In other words, given that at the j-th collision the velocities are given by v, what is the expectation of φ after the (j+1)-st collision? That is given by this operator Q_N applied to φ. And what is it? I define R_ij, which averages over the rotation angle, the scattering angle; that is what I do here. Then I average over all pairs, dividing by N choose 2. So that is Q_N φ. Good.

With that, we can produce what is called a Markov transition operator. Again, take φ to be an arbitrary real test function on the sphere. What I would like to cook up is a process which is an evolution on probability distributions. So let f_1 be the probability distribution of the particles after the first collision; I do not know what it is, right? But I can say that the integral of φ against f_1 is certainly the expectation of φ given that before the collision the velocities equal v, averaged over the initial probability distribution f_0. Now we know what that conditional expectation is; I just defined it: it is Q_N φ, integrated against f_0(v). Q_N is a linear operator, so I can move it over by taking the adjoint, and since this holds for every real test function, I conclude that f_1 = Q_N* f_0. So this is how the probability distributions progress. Notice I said Q_N* here; let us go back to this Q. What was it?
This was this gadget here. Now notice: if I reverse the rotation, take the inverse rotation, I just replace θ by −θ. And suppose ρ(θ) is a symmetric function; you have seen in Laurent Desvillettes' talk that the scattering cross-section is always a symmetric function of the scattering angle, and in some sense this θ is of course much more primitive. So if ρ(θ) is symmetric, then you see easily that Q_N is a self-adjoint operator. Because when you take the adjoint, you have to undo the rotations, that is, pass to the inverses; but since the distribution of angles is the same, it is self-adjoint. Okay, that is quite useful.

This operator is going to be the main object of study, and the way you should think about it is this: you take two-dimensional planes in R^N; you average over rotations in these planes; and then you average over all such planes. That is your operator.

Now, this operator is actually complicated, despite the fact that it looks so simple. Well, it is very easy to see (by easy I mean, of course, it is always easy afterwards) that this operator has discrete spectrum. How do you see that? You know the operator is generated by rotations. Now, you know another operator which commutes with rotations: the Laplacian. So the Laplace-Beltrami operator on the sphere commutes with our operator, and therefore the spaces of spherical harmonics are left invariant by it. Since these are finite-dimensional spaces, you know right away that on each of these finite-dimensional invariant subspaces you have discrete spectrum, right?
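One can see this microscopic reversibility at work already for N = 2, where the energy sphere is a circle and a collision just shifts the angle. Here is a toy check; the discrete symmetric ρ supported on {+θ0, −θ0} and the test functions are my choices:

```python
import math

# Discretize the circle: v = (r cos a, r sin a); a rotation by theta in the
# (v1, v2) plane shifts a by theta. Take rho supported on {+t0, -t0} with
# weight 1/2 each (a symmetric rho) and a grid where t0 is a grid step, so
# (Q phi)(a) = (phi(a + t0) + phi(a - t0)) / 2 holds exactly on the grid.
M = 360                       # grid points on the circle
step = 5                      # t0 = step * (2*pi/M)

def Q(phi):
    return [(phi[(k + step) % M] + phi[(k - step) % M]) / 2 for k in range(M)]

def inner(f, g):              # quadrature for the uniform measure on the circle
    return sum(x * y for x, y in zip(f, g)) / M

a = [2 * math.pi * k / M for k in range(M)]
phi = [math.cos(3 * x) for x in a]           # two arbitrary test functions
psi = [math.exp(math.sin(x)) for x in a]

lhs = inner(phi, Q(psi))
rhs = inner(Q(phi), psi)
print(abs(lhs - rhs) < 1e-12)   # <phi, Q psi> = <Q phi, psi>: Q is symmetric
```

With a non-symmetric ρ (say all the weight on +θ0), the same check fails, which is exactly the point about reversing the rotations.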
That is easy. So you would say right away: if you want to understand this operator, all you have to do is play with spherical harmonics. Well, it turns out the spherical harmonics are huge; there are lots of them, and the spaces of spherical harmonics of a fixed degree can get very, very large. So this is not an entirely trivial operator to analyze. Anyway, let us go on. So this microscopic reversibility, as it is called, guarantees that the operator is self-adjoint.

Therefore, if you want to compute the probability distribution after k collisions, you start with your original probability distribution and apply this operator k times. Good. So now let us talk about time; so far I have said nothing about it. We assume that the collision times are distributed according to a Poisson process: if T_i is the first collision time for particle i, then the probability that there was no collision before time t decays like e^{−γt}. That is an assumption, and γ is just a constant to make things dimensionless; in fact it sets the mean collision time between two particles. Now, particle i can collide with any one of the other N−1 particles, so we look at the probability that the minimum of these collision times is bigger than t, and we assume the collision times are independent. Therefore it is an exercise to show that this probability is just a product, the corresponding power of e^{−γt}. These are our assumptions; a little bit of probability, but not a big deal, I think. Good. Now we put all these things together: what is the probability distribution at time t after k collisions?
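The little exercise with the independent clocks is easy to verify by simulation; a sketch, where γ, the number of clocks, and t are arbitrary choices:

```python
import math, random

# The minimum of m independent Exp(gamma) clocks satisfies
# P(min T_i > t) = prod_i P(T_i > t) = exp(-m * gamma * t).
random.seed(1)
gamma, m, t = 1.0, 5, 0.3
trials = 200_000
hits = sum(min(random.expovariate(gamma) for _ in range(m)) > t
           for _ in range(trials))
empirical = hits / trials
exact = math.exp(-m * gamma * t)
print(abs(empirical - exact) < 0.01)
```

So the first collision involving a given particle again happens at an exponential time, just with a larger rate, which is what makes the Poisson description consistent.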
Well, it is this gadget here, which you recognize: this is precisely the Poisson factor. And finally, what is the probability distribution at time t after arbitrarily many collisions? You just have to sum these things up, which is done here, and everything works out quite beautifully: this is just the exponential of the operator up there. I should not say matrix; I have to apologize, for me operators are always matrices, I hope you do not mind.

So what, then, is the story about the Kac model? The Kac model is just the study of this time evolution. By the way, we set γ = 1; that is simply a choice of time unit. The whole story is just studying this operator, which looks exceedingly simple. In some sense, you could now in principle forget everything I have told you so far and just concentrate on that; that is where all the mathematics is.

Microscopic reversibility allows you to look at this operator on a Hilbert space, which I denote by L² of the sphere with σ^N, where σ^N is the uniform probability measure on the sphere; I like to normalize, let us make things convenient. And we also fix the energy. Why do we set the energy equal to N? That is very reasonable, because then we have an energy of one per particle. So now you have this operator on this L² space, and all you have to do is study it. Let me just mention: I know this is extremely primitive.
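That the Poisson-weighted sum of powers really is the exponential e^{t(Q−I)} can be seen in a toy example; here a 2-state symmetric Markov matrix stands in for Q_N* (the matrix, the initial distribution, and t are of course my inventions):

```python
import math

# f(t) = e^{-t} * sum_k (t^k / k!) * Q^k f0  is the semigroup e^{t(Q-I)} f0.
p = 0.3
Q = [[1 - p, p], [p, 1 - p]]          # symmetric, doubly stochastic

def apply(Q, f):
    return [Q[0][0]*f[0] + Q[0][1]*f[1], Q[1][0]*f[0] + Q[1][1]*f[1]]

def poisson_sum(t, f0, terms=80):
    out = [0.0, 0.0]
    f, w = f0[:], math.exp(-t)        # w = e^{-t} t^k / k!
    for k in range(terms):
        out = [out[0] + w*f[0], out[1] + w*f[1]]
        f = apply(Q, f)
        w *= t / (k + 1)
    return out

t, f0 = 1.7, [0.9, 0.1]
# Closed form: eigenvalues of Q - I are 0 and -2p, so each component
# relaxes to 1/2 at rate e^{-2 p t}.
decay = math.exp(-2 * p * t)
exact = [0.5 + (f0[0] - 0.5) * decay, 0.5 + (f0[1] - 0.5) * decay]
approx = poisson_sum(t, f0)
print(all(abs(a - b) < 1e-12 for a, b in zip(approx, exact)))
```

The same bookkeeping, with Q_N* in place of the 2 by 2 matrix, is exactly the master equation's power series.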
We could have done a bit more; we could for example do three-dimensional collisions, as explained here. But first, I forgot to say something; let me go back to this expression and fix a particle i. When you think about how many other particles are around for particle i to collide with, there are N−1 of them; and when you compute the rate of collision of particle i with any one of the others, it comes out to be 2. The two is not important; what is important is that it is independent of N. This is the analogue of what is called the Grad limit for the Boltzmann equation: if you want to derive the Boltzmann equation, you have to ensure that the collision rate is kept constant as N goes to infinity, and that involves a certain limiting procedure. I adopt the same procedure here; of course, here it is much more elementary.

The next observation is obvious: the evolution is linear, and you will admit that nowadays it is very rare to have anything interesting to say about a linear evolution. We will see. Thirdly, we can generalize this to momentum-preserving collisions in R³. How do we do this? I think you have learned this from Laurent Desvillettes. Here is the collision law: ω is a vector in S², v_i and v_j are the pre-collisional velocities, and v_i*, v_j* are the post-collisional ones. I know most people use primes; I use stars, I hope you do not mind. Then R_ij can be written as an integral over S² against the scattering cross-section, if you like, with your test function φ plugged in. In principle you can do that, and I will mention later some important work by Stéphane Mischler and Clément Mouhot, who actually analyzed such problems in detail. But you see, if you would like a realistic scattering cross-section, like hard spheres,
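A momentum- and energy-preserving collision law in R³ can be sketched as follows; I use one common parametrization (post-collisional velocities = center-of-mass velocity plus or minus half the relative speed times ω), which may differ in form from the law on the slide:

```python
import math, random

def collide_3d(vi, vj, omega):
    """Momentum- and energy-preserving collision in R^3, parametrized by a
    unit vector omega in S^2."""
    c = [(a + b) / 2 for a, b in zip(vi, vj)]                # center of mass
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(vi, vj))) / 2
    vi_star = [ci + r * w for ci, w in zip(c, omega)]
    vj_star = [ci - r * w for ci, w in zip(c, omega)]
    return vi_star, vj_star

def random_unit_vector():
    while True:
        u = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in u))
        if n > 1e-12:
            return [x / n for x in u]

random.seed(2)
vi, vj = [1.0, -2.0, 0.5], [0.3, 0.7, -1.1]
vi_s, vj_s = collide_3d(vi, vj, random_unit_vector())
mom_ok = all(abs(a + b - c - d) < 1e-12
             for a, b, c, d in zip(vi_s, vj_s, vi, vj))
en_ok = abs(sum(x*x for x in vi_s) + sum(x*x for x in vj_s)
            - sum(x*x for x in vi) - sum(x*x for x in vj)) < 1e-12
print(mom_ok and en_ok)
```

Both conservation laws hold for every choice of ω, which is why averaging over ω in S² against a cross-section gives a legitimate R_ij.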
it would be proportional to the magnitude |v_i − v_j|. I do not want to consider this; it is much too difficult for me, and Stéphane is the expert in this business. So I keep it primitive and one-dimensional: ρ(θ), where is it, here, is a nice function, take it smooth, whatever; however, take it symmetric. Okay, so these are the remarks I wanted to make.

So, a little summary. We look at this evolution; f_0 is your initial condition, which we can take in L¹; that would be desirable, since after all it is a probability distribution. This is your power series, if you like; this is the master equation according to Kac; the R_ij I have written here once more; Q_N* = Q_N; and here is the collision. So this little slide summarizes everything there is, mathematically, about this model. Nothing more, nothing less. Good.

So now, why consider such a simple linear model? The first question you can ask yourself is: what actually happens in the limit when N gets large? I think, Stéphane, you would agree with me that the Boltzmann equation has not really been derived in a satisfactory way. And what do we mean by derived? Deriving the Boltzmann equation from a Hamiltonian many-body system. There is the work of Lanford, and there is also beautiful work of Illner and Pulvirenti, but these things only work for very, very small times; I mean, Pulvirenti's setting is a bit different. The point is, in these works you have to control the collisions, and you have to make sure that there are only very few collisions; that is the assumption behind them.
So we do not really know much about how to derive it. But I would like to explain, eventually, that from this simple model you actually can derive an equation which looks like a Boltzmann equation. You can say this is not really satisfactory, since after all it should be Hamiltonian mechanics, that is the fundamental thing; but at least we will convince ourselves that, based on this simple model with very sound probabilistic assumptions, you can derive for large N an effective equation which looks like the Boltzmann equation. Important here will be the notion of propagation of chaos; that is an extremely important notion nowadays. When you look, for example, at the work of Erdős, Schlein, and Yau on the Gross-Pitaevskii limit, it is all about propagation of chaos. And this whole story actually started with Kac: Kac was the first who wrote down in a precise way what we mean by propagation of chaos, though he did not call it chaos; he called it the Boltzmann property. We will come to that.

The other thing you can study in this model is the approach to equilibrium. This is a kind of mystery in the real world; we would all agree that this room is more or less in equilibrium, although it is one of those things we would like to talk about without really knowing what it means. But here, in this setting, we know what it means, and we will talk about it. We would also like to find quantitative rates of approach to equilibrium: how fast do you converge? There are various things you can do: you can use the notion of a spectral gap, and you can also use the notion of entropy, and that is going to be the main part of my lecture series. Okay, so let us look at it a little closer; let us try to understand, in quotation marks, the Boltzmann equation. How would we do that?
We take our time-evolved state, starting from some f_0, and we multiply by a test function of a single variable. Integrating out all the other variables over the sphere, you can write this as the one-particle marginal times φ, integrated from −√N to +√N. That is obvious: think of it as a sphere; you fix one variable v_1 and integrate over all the others. Remember the sum of the v_i² must add up to N, so the variable that is left over runs between −√N and +√N; that is it. Likewise, you can take a function from R² to R and integrate to get the two-particle marginal; that is given by this formula, where you integrate over v and w with v² + w² ≤ N.

Okay. So now here is a little computation which you can do; it is a very simple computation: you calculate the time derivative of this. What you get is this formula; well, it is not so complicated, and there is absolutely no problem with it. Namely, you take the derivative, plug in the formula for the time evolution, and do all the integrals nicely; that is what you get. By the way, I forgot to say: f for me is always a symmetric function of the velocities, because after all it does not really make sense to distinguish the particles. So this should always be a symmetric function.
I forgot to say that. Okay. So you see, this derivative turns out to be this expression, and you notice that it involves the two-particle marginal. Now, if you knew that the two-particle marginal f_N^{(2)}(v,w) were, at least approximately, a product, then you would conclude that the one-particle marginal f_N^{(1)} satisfies this equation, which is a Boltzmann-type equation in this context. But of course it is not really true that this holds; it just is not. And this is where the notion of propagation of chaos comes in.

So we talk about a sequence of probability distributions f_N(v). How should you think about them? This is a sequence of probability distributions in a growing number of variables: the first element is a function of one variable only, the next is a function of two variables, then three, and so on; N runs from one to infinity. Now, we say that this sequence of probability distributions is chaotic if this limit here has the right form.
Namely, what you do is this: you take your f_N(v), which depends on N variables; you multiply by a product of test functions, each φ a function of one variable only; you integrate, and you take the limit as N goes to infinity. First of all, the assumption is that this limit actually exists; moreover, you call the sequence chaotic if the limit is just what you get by integrating f_N against a single function and raising it to the power k. What this says, in some sense, is that the sequence of probability distributions is asymptotically independent, if you like. You see, there are two little difficulties here. You will not find any truly independent probability distribution on the sphere, for the simple reason that the sum of the v_i² has to add up to N; so it is never going to be independent. The other difficulty is that the Kac evolution does not respect independence, should there be any. Anyway, here is your definition. Asymptotically, you should think of f_N(v), as N goes to infinity, as looking more and more like an infinite product function; but that is of course very loose. Remember, I assume that all these limits exist; so what we are assuming is that when I integrate out all variables except the first one, the integral converges to a function f, which we call the limiting one-particle marginal. Okay, so that is the notion of chaos; is there a question about that? It is an important notion.

So now, what is the point? Let us do an example. Take the integral over the sphere of the constant function one times the product of the φ(v_j). So you start integrating over your sphere: you integrate over all the v_j with j from k+1 up to N, whose squares must add up to N minus the sum of the v_i² with i from 1 up to k; those first k you keep fixed, right?
Agreed. Then you have the function one, and you have to integrate it over dσ. Now, what is the dimension of the sphere you have left? It is an (N−1−k)-dimensional sphere; you integrate over that, and what you get is a gadget of the following type: (N minus the sum of the v_i², i from 1 up to k) raised to a power which is roughly (N−k)/2; something like that should come up. It is one of those integrals over the angles, and you also get a factor |S^{N−k−1}|, the surface area. Then, remember, you multiply by the product of these φ(v_j) and integrate over the rest, and you have to divide by |S^{N−1}(√N)|, the surface area of the whole sphere. And now you see, unless I screwed it up and forgot some N's here (sorry, this is the sphere with this radius), when you pull out the N it should scale out, and what you get is an expression which, as N tends to infinity, just goes to a Gaussian, e^{−v²/2}; I guess I forgot the divided by two. That is what comes out. This is one of those instructive computations, and it goes back, I do not know, to the ancients: Maxwell knew it, Boltzmann of course, all these people, and the French have a name for it too, which I believe is correct; that is what I learned from Dominique Bakry. So here is this computation which you can do, and you see it is quite nice, because the Gaussian factorizes beautifully: you get exactly the integral of φ(v) against the single Gaussian, raised to the power k. So there is your example. And now here is the theorem, which goes back to Mark Kac.
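This instructive computation, that a single coordinate of the uniform measure on the sphere of radius √N becomes Gaussian, is easy to test by simulation; a sketch, where the test function cos(v) and the sizes are arbitrary choices of mine:

```python
import math, random

# Draw uniform points on the sphere of radius sqrt(N) in R^N and look at a
# single coordinate: its distribution approaches a standard Gaussian.
random.seed(3)
N, samples = 400, 20_000

def first_coordinate():
    g = [random.gauss(0, 1) for _ in range(N)]
    r = math.sqrt(sum(x * x for x in g))
    return math.sqrt(N) * g[0] / r     # coordinate of a uniform sphere point

# Compare against phi(v) = cos v; for a standard Gaussian, E[cos Z] = e^{-1/2}.
empirical = sum(math.cos(first_coordinate()) for _ in range(samples)) / samples
exact = math.exp(-0.5)
print(abs(empirical - exact) < 0.02)
```

The sampling trick (normalize a Gaussian vector) is itself a consequence of the rotation invariance that drives the whole computation.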
This was the first theorem of its kind, and I would also like to mention a paper by McKean from 1965; I will try to give you a sketch of a proof of this propagation of chaos in the spirit of Henry McKean. So what does the theorem say? You start with a chaotic sequence of probability distributions; so this is this gadget, and remember, roughly it means that asymptotically, for large N, the distribution factorizes into a product of limiting marginals. Then you take this probability distribution and evolve it under the Kac master equation; you get this function here. It turns out that this evolved sequence is also chaotic, and its limiting one-particle marginal f(v,t) satisfies the Kac-Boltzmann equation, which I have written here. Notice: this is the limiting marginal of your initial condition, and this is the limiting marginal of your evolved state.

So in some sense this is an extremely clever way of solving the Boltzmann equation. Namely, what would you have to do, if you could do it? You start with your initial condition; you construct a sequence of probability distributions in N variables which is chaotic with this limiting marginal; then you evolve this distribution; then you pass to the limiting marginal, and sure enough you get the solution of this integro-differential equation. That is kind of clever, no? This is what Kac proved in 1956, and he also made a somewhat cheeky remark: because of this theorem, he said, since the time evolution here is totally trivial, it is linear and everybody knows how to do that, to worry about the existence of solutions of such a nonlinear equation is not very interesting.
That is what he says. All right. So, to make sure everybody understands the theorem: you start with a chaotic sequence; you fix your time t; you evolve the chaotic sequence; you get another sequence; this sequence turns out to be again chaotic, and its one-particle marginal satisfies the Kac-Boltzmann equation. Okay?

So let me try to give you a sketch of the proof. I was wondering whether to do this on the board, because these things usually go much too fast, so let me do it in a few steps. The first step: you would say this is a difficult gadget, because this is your f_N at time t, which is the evolution applied to your chaotic state. Of course, it is very natural to put the evolution on the other side, which you can do; you just have to think of these functions as functions on the sphere S^{N−1}. So you push it over, and now you have to show that, as N goes to infinity, this whole gadget deteriorates into the k-th power of something, if φ tends to a product of functions of single variables. That is what you have to do. It looks a little difficult. Of course, you could invoke the product structure, because you know that for large N this sort of factorizes; but you see what the problem is: this function acquires more and more variables. What does Q_N do? This φ has only k variables, but Q_N applied repeatedly to φ produces more and more variables; the variables grow. Γφ(v_1, ..., v_{k+1}) is given by this gadget here. So what do we do? We take φ, pick each v_j, and do a scattering with an imaginary additional particle. So that is what you do, right?
So this is just a sum over j ≤ k, where you add on one variable, minus φ. Okay. So now, remember, I always assume these functions are symmetric. So look: when you do this and integrate over all these additional variables, you can replace this term by this one here, and what is this? It is just this gadget. So it looks complicated, but it really is not, because all I have done, let me write it down: when I take N(Q_N − I) applied to φ, I get 2 divided by N−1, times the sum over i < j of R_ij applied to φ, minus φ; we can write it this way.

Now let us think for a second. The indices i here can run over all the variables which were there already; the j can also run over the variables which were there already, but of course there are many more: k is fixed and N is huge, 10 to the 26, so there are lots of additional variables. But since I am assuming the functions are all symmetric, I can clean things up in the following sense: I first take the sum only over i < j ≤ k.
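This rewriting of the generator is a pure identity, which you can check directly; a small numerical sketch, where the discrete symmetric ρ on {+θ0, −θ0} and the test function φ are my choices:

```python
import math, itertools, random

# Check pointwise: N(Q_N - I)phi = 2/(N-1) * sum_{i<j} (R_ij - I)phi,
# with rho = (delta_{+t0} + delta_{-t0}) / 2 standing in for a generic
# symmetric rho.
N, t0 = 5, 0.7

def R_ij(phi, i, j, v):
    """Average phi over the +/- t0 rotation in the (v_i, v_j) plane."""
    total = 0.0
    for th in (t0, -t0):
        w = list(v)
        w[i] = v[i] * math.cos(th) + v[j] * math.sin(th)
        w[j] = -v[i] * math.sin(th) + v[j] * math.cos(th)
        total += phi(w)
    return total / 2

def phi(v):                                  # an arbitrary test function
    return v[0] ** 2 * v[1] + math.sin(v[2])

random.seed(4)
v = [random.gauss(0, 1) for _ in range(N)]
pairs = list(itertools.combinations(range(N), 2))
Q_phi = sum(R_ij(phi, i, j, v) for i, j in pairs) / len(pairs)
lhs = N * (Q_phi - phi(v))
rhs = 2 / (N - 1) * sum(R_ij(phi, i, j, v) - phi(v) for i, j in pairs)
print(abs(lhs - rhs) < 1e-12)
```

The identity holds because N divided by N choose 2 is exactly 2/(N−1), and the identity operator summed over all pairs gives back N φ with the same prefactor.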
That is what I am doing here: this is one piece, where i and j both run between 1 and k, and then I stick all the other terms together into this Γ. And you see, it does not really matter: when k is, say, 5, and I take j = 50,000, that variable gives me the exact same contribution as if I take j = 49,000 or whatever, because of symmetry; it does not matter what I call these variables. Therefore you can actually simplify the whole expression and write it in this fashion: G is just the sum here, and the Γ, remember what it does, it adds one more variable; but then of course you have to count how many of those Γ terms there are, namely (N−k)/(N−1) many, okay? That is what you get. This step goes exactly as in Kac.

All right. So now you need a little lemma. The lemma says: when you take G applied to φ and take the L^∞ norm, that is bounded by 4k times the L^∞ norm of φ; the Γ likewise; and the difference of the two is actually bounded by 6k²/(N−1), so in other words this difference goes to zero as N goes to infinity. Now, this is certainly not a big deal. First of all, the Γ estimate is a total triviality, you agree. Why do we see that? Here is the Γ; I take the L^∞ norm. What do I get? I get a factor of 2 times k; those terms, ignoring the minus sign, give me another 2 when I take the L^∞ norm; and the integral of ρ(θ)dθ gives me 1. Okay, that is easy.
You have to work a little bit more for the G; well, you just do it: you get the 2/(N−1) here, you get a (k choose 2) times 2, plus this term times 4k; you believe me that you can add these numbers up and you get the 4k. Finally, when you take the difference of G and Γ, you see that you get here (N−k)/(N−1) − 1, which of course goes to zero as N goes to infinity; and here you are in very good shape, because these terms are just 2 times (k choose 2) divided by N−1, which is perfect, it also goes to zero. So you can certainly believe these three estimates; they are very elementary. I have written it all out; these slides will eventually be on the web, I think, so all the details are there. Okay, so this is completely elementary.

Now Lemma 2; this is a little bit less elementary, conceptually. You see, you have a function with k variables, k certainly less than N, and now you would like to understand something about the powers of G, the powers of Γ, and the difference of the powers of G and Γ. How do you deal with that? Well, the first part is immediately obvious. Why? When you apply G to φ, you get a new function which now has an additional variable. So the first estimate gives you a 4k; when you apply G again, since you have an additional variable, you get another 4(k+1); then another 4(k+2), and so on, l times. That is what you get. Now, the bad news is that these things start growing in l. The next one, Γ to the power l, goes exactly the same way; completely elementary, right? The last one, you see, is not so clean: I am saying there exists a constant C such that this is bounded by that, and I should put the L^∞ norm of φ here. Why?
Let's just do it, because it's pretty straightforward. You take the usual telescoping series, what else, and now you notice that when you take the L-infinity norm, you have to sum up these L-infinity norms. And again, remember what these Gammas and this G do: they add variables. So from this factor here you get this product, from that factor you get that product, and the beauty is that you have this G minus Gamma which sits in between, right? It's not a big deal. And now that sum you can clean up, and what you get is this estimate, this formula here. Now you have to figure out how big that can get: (k + l - 1) factorial. Let me divide this by l factorial, and then you use Stirling's formula. Is it with an i or an e? I never know. Maybe I've read too much of the Financial Times, because there's the pound sterling. But what we mean here is Stirling, so I think it's an i. Okay, sorry about that. So you use Stirling's formula here: applied, it gives (k + l - 1) to this funny power, times e to the minus (k + l - 1), et cetera. And now you see what happens: you can factor out this l, and most of it cancels out. Apart from the l to the (k - 1) here, this gadget, as l tends to infinity, produces precisely an e to the (k - 1), which kills this e. So you certainly see that this gadget is roughly less than a constant times l to the (k - 1). That's all, okay. So far, you agree, this is all very elementary. And now, finally, remember you have to sum in here; this sum of course only grows like l squared, which actually proves this estimate. Remember, I divided by l factorial.
I have to multiply it back. Okay. So these are just straightforward elementary lemmas, but let's see what this gets us. As a corollary, when you now sum this up, what do you get? Well, you see, the l factorial in this estimate which we had, where was it, cancels out; you have an l to the (k + 1), but you also have this exponential factor 4 to the l that you have to beat, and that's the reason why you have to assume that t is less than a quarter. When you do that, you see that this is less than a constant times this gadget with the (1 - 4t) and the k + 1. The main point: this goes to 0 as N goes to infinity. Good. And now comes an observation of McKean, who pointed out that this Gamma is a derivation. Namely, what does that mean? When you take your functions, phi of v_1 up to v_k times psi of v_{k+1} up to v_m, and you apply the Gamma to it, that's the same as first applying the Gamma to phi (there should be a phi here, sorry) times psi, plus phi times Gamma of psi. That's a very nice observation. And now I think we are ready to put things together. All right, so you fix the time less than a quarter; after all, the sums should be finite. So remember, this is the gadget which we are interested in. And what is phi tensor k? Phi tensor k simply means phi(v_1) phi(v_2) up to phi(v_k). So this is the limit which we would like to compute, and you want to see whether this limit here can be written as the limit of a k-th power of this f_N of v integrated against a single phi. That's what propagation of chaos means. So now what do we do?
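Two displays are being described here; the following is a hedged reconstruction. The exponent -(k + 1) is my reading of the spoken "constant times 1 minus 4t, k plus 1". The corollary says that for t < 1/4, replacing G by Gamma in the power series costs an error of order 1/(N - 1), and McKean's observation is the product rule for Gamma on functions of disjoint sets of variables:

```latex
% Corollary (as described), for t < 1/4:
\sum_{l \ge 0} \frac{t^{\,l}}{l!}\,
  \bigl\|(G^{\,l}-\Gamma^{\,l})\varphi\bigr\|_{\infty}
  \;\le\; \frac{C}{N-1}\,(1-4t)^{-(k+1)}
  \;\xrightarrow[N\to\infty]{}\; 0.
% McKean's observation: \Gamma is a derivation,
\Gamma\bigl(\varphi(v_1,\dots,v_k)\,\psi(v_{k+1},\dots,v_m)\bigr)
  \;=\; (\Gamma\varphi)\,\psi \;+\; \varphi\,(\Gamma\psi).
```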
Well, remember what we did: we pushed the time evolution over to this phi tensor k. Now, the first remarks showed that this evolution on phi tensor k can be written in terms of the G to the power l times the sum. What we also learned is that in the limit as N goes to infinity, this G to the l can be replaced by Gamma to the l. Very good. And now I use the fact that f_N(0, v) is actually a chaotic sequence. And remember, this is a function of how many variables? It's a function of k + l variables, so this is an integral over R to the k + l. All right, so therefore what we get in the limit as N goes to infinity is an integral over R to the k + l of f tensor (k + l), the (k + l)-fold tensor product of f with itself, against the Gamma to the l applied to phi tensor k, integrated over the k + l variables. And you notice, you see, as l runs these spaces get bigger and bigger and bigger, right? But that's just the way it is. Okay, and this uses really heavily now that the initial condition was chaotic. All right. So now we have to massage this last term, and here is this observation of McKean: now we can use the fact that this Gamma is a derivation. So what does this mean? See, this is the gadget which we're interested in. Because Gamma is a derivation, you can actually write this in the following way: the Gamma to the l you can write as Gamma to the l_1 up to Gamma to the l_k, and here you put this combinatorial factor, and then you sum over l_1 plus ... plus l_k equals l, right? That's because it's a derivation; this is just the Leibniz rule applied, nothing else. Okay. All right, this is good. But now what we do is we can split this stuff up: the l I can write as a sum of l_1 up to l_k, and then what you get is f tensor (1 + l_1) against Gamma to the l_1 of phi, then f tensor (1 + l_2) against Gamma to the l_2 of phi, and so on, right?
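The splitting just described can be written out. Here is a sketch of the multinomial Leibniz expansion the speaker is pointing at, in the notation of the talk, where phi tensor k means the product phi(v_1)...phi(v_k):

```latex
% Leibniz rule for the derivation \Gamma, iterated l times:
\Gamma^{\,l}\,\varphi^{\otimes k}
  \;=\; \sum_{l_1+\cdots+l_k=l} \binom{l}{l_1,\dots,l_k}\,
    (\Gamma^{\,l_1}\varphi)\otimes\cdots\otimes(\Gamma^{\,l_k}\varphi),
% so that, tested against the tensor product f^{\otimes(k+l)}, the integral
% factors into one-particle pieces:
\prod_{i=1}^{k} \int f^{\otimes(1+l_i)}\;\Gamma^{\,l_i}\varphi.
```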
And so on, up to the last one. And you see, what you notice is that this integral now factors into integrals of this type, and that's all because Gamma is a derivation. I think this was a very nice observation of McKean; it makes the whole thing much more transparent. You see, when you read Kac, it's not so transparent. Okay, very good. And now this whole sum is actually of course nothing but this sum to the power k, just like when you show that the product of exponentials is actually the exponential of the sum, right? Okay, and now you see what we have done so far; I can actually reverse the steps. Namely, this gadget is nothing but the limit as N goes to infinity of that gadget to the power k, and here is a single-particle function. Okay, what does this show? This shows propagation of chaos. So therefore, what we can now show... let me go back to this formula which I showed at the very beginning. You see, when you take chaotic initial data, it's actually fairly straightforward to show that these limits as N goes to infinity of f_N and its first marginal actually exist, and when your initial condition is chaotic, then you know that this limit here is actually really the true product of these two limits. That's what propagation of chaos is all about. So therefore, we have proved this theorem here, right? Proved as a sketch, right? I would have to show that this limit exists, et cetera, et cetera, but that's straightforward. Okay, so now let me remind you again what is nice about this: it is an existence theorem for the Kac-Boltzmann equation. In fact, you can show (that was McKean) that the solution of the Kac-Boltzmann equation, if it exists, is actually unique. This is the existence part. You also notice, by the way, that when you take any given f in L^1, you can certainly find a chaotic sequence with f as its limiting marginal; just write this down here. It's not entirely trivial; you see, it's nicely normalized on the sphere.
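For the record, here is a hedged written form of the propagation-of-chaos statement being sketched, valid at least for t < 1/4, the time restriction used above; the precise formulation on the slide is not in the transcript:

```latex
% Propagation of chaos for the Kac model:
\lim_{N\to\infty} \int f_N(\vec v, t)\,
  \varphi(v_1)\cdots\varphi(v_k)\, d\vec v
  \;=\; \Bigl( \int f(v,t)\,\varphi(v)\, dv \Bigr)^{\!k},
% where f(\cdot,t) solves the Kac--Boltzmann equation with the limiting
% marginal of the chaotic initial data f_N(\cdot,0) as initial datum.
```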
Yeah. That this is actually chaotic needs a little bit of work, but this can be done; it's not really a big deal. So what do we do? We start out with our limiting marginal; we write down this f_N at time zero, which is chaotic; we solve the Kac master equation with this initial condition; the f_N(t) is chaotic with limiting marginal f(t); and this f(t) satisfies the Boltzmann equation. Okay. All right, so this is a rough outline of why we are interested in this. You have this linear equation in high dimensions; you have the effective equation, the Boltzmann equation, in lower dimension, however nonlinear. And this whole field in some sense lives off this back and forth between these two pictures. Okay. And so the plan of my course is, next time, to study more closely the approach to equilibrium. There are several avenues. One is the gap of the operator. That's very successful, except, as I will try to point out to you at the end, it is totally useless. This usually happens: you work very hard, right, you get something done and you're happy, and then you really stare at this and you find out it's totally useless. Okay, but anyway, I'm not going to prove anything there, because it's totally useless, but I will just show it to you. Now there's the other approach, which uses entropy, and that's the main part of my course. It turns out that in the context of the Kac model this has not been very successful either, but it leads to some interesting mathematics, and this kind of mathematics will be important for later, where I'm going to show you some successes too. Okay? All right, so bear with me; the next hour is maybe a little bit not so successful, but I hope the other three hours are better. Okay, thank you for your attention.