I would like to thank the organizers for giving me the opportunity to speak, and also for inviting me to visit IHES: Frank, Ivan, Fabrice, everybody. I'm having a really great time here. I'm sorry that I couldn't stay longer; I find the place absolutely beautiful and very stimulating for my discussions and work. I will talk about the homogeneous Boltzmann equation in the so-called non-cutoff case, and I will tell you in a moment what we mean by the non-cutoff case. It is related to joint works: the first is joint with Ricardo Alonso, Irene Gamba, and Maja Tasković, and the second is joint with Irene Gamba and Maja Tasković. We are all affiliated with UT, or were at one point or another. Irene Gamba is my colleague; Ricardo was previously a student of Irene's, and Maja is a joint student of Irene and myself who just finished, thanks to whom Irene and I got interested in this. So this is a very new topic for me, and I will try to tell you a little about what we are learning. First we will recall the Boltzmann equation. I managed to stay away from this equation for a while; I always thought it was quite a complicated equation. So I will try to slowly introduce the main actors in the equation and what is going on. Then we will talk about L1 bounds on solutions. We will not consider the question of well-posedness of the equation; instead we will assume that we have a solution, and we want to study the behavior of those solutions. In particular, we first want to study moments, which are weighted L1 norms. They can be weighted, for example, by polynomials or, more generally, by different functions, and the important weight for the Boltzmann equation is the exponential weight. So I will tell you especially about exponentially weighted L1 bounds.
Then, hopefully, if I have time, I will also tell you about pointwise bounds, that is, L-infinity bounds, on solutions to the Boltzmann equation in the non-cutoff case. But let's start slowly. Here is the Boltzmann equation: the partial derivative with respect to t of f, plus v dot the gradient in x of f, equals Q. What is going on here? x is a position in R^d, t represents time, and v represents velocity, in general in R^d. The equation describes the evolution of the density of gas particles; the density is denoted by f, and it depends on position, time, and velocity. I didn't want to write Q on the first slide because it was scary to me, and it still is, I guess. But let me tell you a few things about Q: it is a quadratic integral operator, and it expresses the change of f due to instantaneous binary collisions of particles. The equation was introduced, to my understanding, by Maxwell (feel free to correct me) and by Boltzmann himself in the late 1860s; I think Boltzmann's paper first appeared in 1872, as a model describing the dynamics of a dilute gas. To put this in the context of what we have been seeing during this week, one can derive the equation from many-particle systems via classical limits. That was pioneered by Lanford in the 1970s, and nowadays it is a very alive, very interesting, and well-developed area, especially in France, with Laure Saint-Raymond, Isabelle Gallagher, and their collaborators working very hard on it. That connection is in the context of what Rupert Frank mentioned this morning: understanding effective equations from many-body particle systems, but in a classical setting.
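The equation being described on the slide, written out (this rendering is mine, matching the speaker's notation):

```latex
\partial_t f + v \cdot \nabla_x f = Q(f,f), \qquad f = f(t,x,v), \quad t \ge 0,\; x \in \mathbb{R}^d,\; v \in \mathbb{R}^d .
```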
On the other hand, there are connections with the Navier-Stokes equations, which we have also been hearing about, through the hydrodynamic limit, rigorously proved again by Laure Saint-Raymond maybe between 5 and 10 years ago. Now, what is really going on? We cannot avoid talking about Q, so let's see what it is. First, denote by v' and v'_* the pre-collisional velocities; it is relevant to consider their relative velocity, so u' is the relative pre-collisional velocity. After the collision, v and v_* denote the post-collisional velocities, and you can likewise form the relative post-collisional velocity. For elastic interactions you have conservation of momentum and energy; you can ignore this for now, because I am really not going to use it, or at least I will not reveal where I am using it. What is relevant in order to introduce Q is that you can form unit vectors for the relative pre- and post-collisional velocities; the post-collisional direction is denoted by sigma, as you see, and the angle between them is theta. Having those, you can introduce the collisional operator Q in the following way. As you see, it is a quadratic operator, and we use the standard notation: f' stands for f evaluated at v', and, for example, f'_* stands for f evaluated at the pre-collisional velocity v'_*.
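The collision operator being described, in the standard notation (this is the usual textbook form, with u = v - v_*, rather than a copy of the slide):

```latex
Q(f,f)(v) \;=\; \int_{\mathbb{R}^d} \int_{\mathbb{S}^{d-1}} B\big(|u|, \cos\theta\big)\,\big( f'\, f'_* \;-\; f\, f_* \big)\, d\sigma \, dv_* , \qquad \cos\theta = \frac{u}{|u|}\cdot\sigma .
```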
So what you have is f' f'_* minus f f_*, and then the so-called collision kernel B. B depends on the magnitude of the relative velocity u and, in most cases, only on the angle theta between the pre- and post-collisional relative velocities. In the relevant physical applications, B often has the following form: a kinetic part of the kernel, the absolute value of u to some power gamma, times an angular part b that depends on the angle theta. Depending on gamma we have different regimes. When gamma is between minus d and zero, that is the soft potential case; gamma negative introduces an extra singularity into the kernel, so we expect that to be somehow harder. When gamma equals zero, that is the so-called Maxwell molecule case, and when gamma is strictly bigger than zero and less than or equal to one, that is the hard potential case. What we consider in this talk is the hard potential case: gamma strictly bigger than zero and less than or equal to one. As I'm sure everybody is aware, there are many open questions related to the equation, from its derivation all the way to well-posedness, so it is reasonable to consider certain simplifications which still carry a lot of difficulty with them. One simplification is the space homogeneous case: the density does not depend on the position x, just on time and the velocity v. The other simplification often considered in the literature on the Boltzmann equation is the so-called Grad's cutoff case.
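Schematically, the product form of the kernel and the three regimes just described:

```latex
B(|u|,\cos\theta) \;=\; |u|^{\gamma}\, b(\cos\theta), \qquad
\begin{cases}
-d < \gamma < 0 & \text{soft potentials},\\[2pt]
\gamma = 0 & \text{Maxwell molecules},\\[2pt]
0 < \gamma \le 1 & \text{hard potentials (this talk)}.
\end{cases}
```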
So what do we mean by Grad's cutoff? In, I think, 1963, Grad had a paper where he proposed to consider kernels with bounded angular part: if you look at the collision kernel and its angular part b, he proposed to consider b which is bounded. He said that is just one possibility; another is that b is integrable in the following sense: if you integrate b (here it is just a matter of writing it in polar coordinates), the integral is finite. This assumption is the one often found in the literature on the Boltzmann equation. Why did people do that? It was hard enough to look at this special case, but there was also a belief that if you can do it this way, you should be able to get the general non-cutoff case in a more or less straightforward way; in other words, a belief that the nature of the solutions is not affected by this assumption. However, there is actually mathematical evidence that in the non-cutoff case the situation should be better: although it is harder, solutions are supposed to be smoother. In particular there are some works (there are many more references; I am only listing those most relevant to what I want to say): the work of Lions from '94, then a sequence of works, I am mentioning just some of them, by Desvillettes and coauthors, including Desvillettes and Wennberg, and the very recent work of Silvestre, which inspired the second part of what I will tell you, where they show the presence of a certain regularizing, almost smoothing, effect in the non-cutoff case. That tells us the non-cutoff case is worth studying: no matter that it is difficult, one should eventually be able to get something better there. The other thing is that there is a mathematical challenge in considering the non-cutoff case, which itself is
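Grad's integrability condition, in the polar-coordinate form the speaker refers to, versus the physical inverse-power-law kernels that violate it (this rendering is my shorthand; the precise normalization varies across the literature):

```latex
\int_0^{\pi} b(\cos\theta)\, \sin^{d-2}\theta \, d\theta < \infty \quad \text{(Grad's cutoff)},
\qquad \text{whereas physically } \; b(\cos\theta)\,\sin^{d-2}\theta \sim \theta^{-1-\nu} \;\text{ as } \theta \to 0 .
```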
not a justification to study it by itself, but it is interesting. In the cutoff case, the collision operator Q can be written as gain minus loss: the two terms can live independently, each of them finite on its own. That simplifies calculations, because you can study the gain and loss terms completely separately; often the loss term is easy to evaluate, and on the gain term you have to work hard, but that is the avenue that many of the papers assuming cutoff pursued. So, in this talk, and in the papers we wrote on this, we are interested in the Cauchy problem for the spatially homogeneous Boltzmann equation in the non-cutoff case, and I will be precise in a moment about what type of non-cutoff we consider. The equation we study is del_t f = Q(f,f), with no dependence on x. I want to remark that the inhomogeneous case is much more involved and very interesting; for example, in the non-cutoff case, global-in-time existence of classical solutions close to equilibrium was proved only relatively recently, in the very important work of Philip Gressman and Robert Strain and their collaborators (I am putting just one reference here), and also of Alexandre, Morimoto, Ukai, Xu, and Yang, among various other references. So there is work on the inhomogeneous equation; just be aware of it, although we are not going to look into that, even though I keep switching between the two. Going back to the homogeneous case: existence of solutions was studied first under Grad's cutoff, going back to Carleman in the 1930s, then Arkeryd and Di Blasio, with existence of global non-negative L1 solutions under physically natural assumptions on the initial data, and then Mischler and Wennberg in 1999, with existence of a unique global solution; so they also got uniqueness, global in time, for initial data in a weighted L1 space. On the other hand, if you look at the non-cutoff case,
existence is much harder, and there are works by Arkeryd in '81, by Goudon, by Villani, and by Alexandre and Villani, where they prove existence of global weak solutions; but they are only weak solutions. Now I want to tell you a little about what we have been doing, and this is the part about L1 bounds, in other words, moments of solutions. So now I take for granted that I have a solution, and we look at weighted L1 norms of solutions to the Boltzmann equation. You might ask who cares, why is this relevant. First, historically and for physical reasons, people looked at polynomial weights, so L1 norms weighted by polynomials. They are very natural for this equation because they represent moments of the probability distribution, since f itself is a probability distribution; so this is a natural thing to study, and that is what people did first: they considered polynomial moments of solutions. But then, with the pioneering work of Bobylev in 1997, I believe he was the first to introduce so-called exponential moments, which are exponentially weighted L1 norms of solutions. You might say, why an exponential weight? The steady state, the equilibrium for the Boltzmann equation, is a Gaussian; therefore it is natural to study exponentially weighted norms, with the hope of maybe even using them to say something related to convergence to equilibrium and so on. Those are the objects I will tell you a bit more about. Now, a first definition, not completely rigorous, because I am not telling you what type of solution I have, but imagine the solution is nice enough that we can define these. For a solution of the homogeneous Boltzmann equation, a polynomial moment is exactly as we said, an L1 norm with a polynomial weight; here I am writing the Japanese
bracket; you could put the Japanese bracket or the absolute value. Those are the polynomial moments, and the exponential moment of f of order s and rate alpha is defined to be the L1 norm in v which is exponentially weighted. What are the questions you might want to ask, and that people asked? There are two basic ones. The first is propagation in time of either polynomially or exponentially weighted L1 norms: moments are finite for all times if they are finite initially; you start with the assumption that those moments are finite initially, and you want to prove that they stay finite. The other type of result in the literature, often called creation or generation of moments, says that they appear instantaneously: the moments become finite immediately and stay finite, without assuming they were finite initially. Of course one can write this rigorously, but since I want to tell you a few more things, I will not do that right now. There was extensive work on polynomial moments going back to the 80s and 90s, and propagation and generation of polynomial moments were shown both in the cutoff and the non-cutoff case. Now, how about exponential moments? To tell you how people studied generation and propagation of exponential moments, I want to go back to the definition: it is the L1 norm in v weighted with an exponential. Now, formally, think about the following; this is the idea that Bobylev made rigorous in '97. Use the Taylor series for the exponential: you get alpha to the power q (q is the dummy index), divided by q factorial, times the Japanese bracket of v to the power sq. If you integrate, each term is exactly a polynomial moment. In other words, you can express the exponential moment in terms of
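The definitions and the Bobylev expansion just described, spelled out (my rendering; here ⟨v⟩ = (1+|v|²)^{1/2} is the Japanese bracket):

```latex
m_q(t) = \int_{\mathbb{R}^d} f(t,v)\,\langle v\rangle^{2q}\,dv, \qquad
\mathcal{M}_{s,\alpha}(t) = \int_{\mathbb{R}^d} f(t,v)\, e^{\alpha \langle v\rangle^{s}}\,dv
= \sum_{q=0}^{\infty} \frac{\alpha^{q}}{q!} \int_{\mathbb{R}^d} f(t,v)\,\langle v\rangle^{s q}\,dv .
```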
the summability of polynomial moments, and that was the crucial stepping stone in studying exponential moments. It tells you what the methods are: you want to study the convergence of that sum, so people started identifying certain decay, term-by-term methods, partial sum methods, and so on. Here is a picture of what is known up to now. In the Grad cutoff case, the term-by-term method, where you identify some decay term by term, goes back to Bobylev and to Bobylev, Gamba, Panferov, Villani, and Mouhot; I am not telling you exactly who moved which index where, but this is the picture. Then there is the relatively recent work of Alonso, Cañizo, Gamba, and Mouhot in 2013, where they studied partial sums. The picture in that last paper, in the Grad cutoff case, is: there is generation of exponential moments of order s up to gamma, and you can propagate moments of order all the way up to 2; and 2 is natural because, again, the Gaussian is the equilibrium for the equation, the stationary solution. In the non-cutoff case there was only one work: we started working on this and then realized there was the work of Lu and Mouhot from 2012, who used the so-called term-by-term method and were able to obtain generation up to order gamma. So you can imagine the natural question: what can you do beyond gamma? That is the question we asked ourselves: what can be said about the behavior of exponential moments of order s bigger than gamma in the non-cutoff case? The motivation is, of course, that they are natural from the point of view of how we interpret solutions, but the other reason, as you will see, is that L1 bounds are often a stepping stone for getting pointwise bounds, and having pointwise bounds is a pretty strong thing to have around. So I will tell you a little about our work on exponential-like moments with Ricardo, Irene, and Maja. Motivated by the above question, we consider the spatially homogeneous Boltzmann equation without
Grad's cutoff assumption, so non-cutoff, for hard potentials (gamma bigger than 0 and less than or equal to 1), and we study the behavior of its exponential moments. I am lying a little here, because, as you will see, we actually had to define a different type of moment, related to the exponential moment, in order to do something in the non-cutoff case. That is exactly the third point on the slide: to study exponential moments of order beyond gamma, we were somehow forced to introduce a new type of moment, which we call the Mittag-Leffler moment. Let me tell you what they are, and then I will show you the parts of the proof from which you will see how, at least in the way we wanted to prove it, we were pushed into introducing these creatures. Recall the exponential moment of order s and rate alpha (I have to do it by hand here, so here it is). In the calculations we do to prove these estimates, the usual strategy is: you want to get some ODE for the polynomial moments, and from that ODE you would like to get an ODE for the exponential moments, exploiting signs, decay, and so on. In order to get decay, we looked at that series expression, and it was useful to allow, instead of the integer q factorial in the denominator, something more general: q factorial is just Gamma of q plus 1, the Gamma function, so we were forced to look at Gamma functions with possibly non-integer argument. Somehow, if we knew how to study these objects, Gamma functions with non-integer argument, we would be able to get something. So we asked ourselves what this is, why we need non-integer arguments, and our student Maja searched a little and told us about these Mittag-Leffler functions; if Irene and I had lived many years ago, maybe we would have known about them. So what are they? It is sort of interesting, because it reminds you of
physics books and so on. We define the Mittag-Leffler moment using the Mittag-Leffler function. What is the Mittag-Leffler function? It does exactly what we want: it is like a Taylor series, you have x to the q, but in the denominator, instead of Gamma of q plus 1 (which for a equal to 1 is exactly q factorial), you allow yourself the freedom of a not being an integer. Asymptotically, for large x, Mittag-Leffler functions behave exactly like exponential functions, which is very nice, and (I will not lead you step by step through this simple calculation) you can write the exponential function in terms of the Mittag-Leffler function for certain a and certain s. We then define the Mittag-Leffler moment of a solution to the homogeneous Boltzmann equation as the L1 norm in v weighted by a Mittag-Leffler function. It has the feel of a really physics-type object from many years ago. Then, as people did for the exponential moments, we would like to represent it as a summability of moments: using the formula for the Mittag-Leffler function, it can be written as a sum of polynomial moments, alpha to a power divided by a Gamma function, exactly what you had for the exponential moments, with the difference that now you have this 2 over s, which I called a a few minutes ago, and it does not have to be an integer. Why is this helpful and what can we do with it? For the angular kernel, we assume that for some r between 0 and 2 the following weighted integral is finite: you have the angular part of the kernel times sine to the power r. What is the meaning of this assumption? When r equals 0, which we do not consider (our work does not go there), this is exactly Grad's cutoff case. On the other hand, if r equals 2,
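A small numerical sketch of the Mittag-Leffler function just defined, E_a(x) = Σ_{q≥0} x^q / Γ(aq+1): for a = 1 it reduces to e^x, for a = 2 it equals cosh(√x), and for large x it grows like exp(x^{1/a}). The function name, the log-space evaluation, and the truncation rule are mine, not the speaker's.

```python
import math

def mittag_leffler(a, x, tol=1e-16, max_terms=5000):
    """Partial sum of E_a(x) = sum_{q>=0} x^q / Gamma(a*q + 1), for x > 0.

    For a = 1, Gamma(q+1) = q!, so this is the Taylor series of exp(x);
    allowing non-integer a is exactly the freedom the moment estimates need.
    Terms are computed in log space via lgamma to avoid overflow.
    """
    total, prev = 0.0, float("inf")
    for q in range(max_terms):
        term = math.exp(q * math.log(x) - math.lgamma(a * q + 1))
        total += term
        if term < prev and term < tol * total:
            break  # past the peak of the series; remaining terms negligible
        prev = term
    return total

# Classical special cases:
assert abs(mittag_leffler(1.0, 2.0) - math.exp(2.0)) < 1e-10              # E_1(x) = e^x
assert abs(mittag_leffler(2.0, 2.0) - math.cosh(math.sqrt(2.0))) < 1e-10  # E_2(x) = cosh(sqrt(x))

# Exponential-type growth: log E_a(x^a) / x tends to 1 as x grows.
for a in (0.5, 1.0, 1.5):
    x = 12.0
    print(a, math.log(mittag_leffler(a, x ** a)) / x)  # each close to 1
```

This is why Mittag-Leffler weights of order a = 2/s behave, for large velocities, like exponential weights of order s.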
this is the typical non-cutoff assumption people make in the non-cutoff case, and we work in the regime where r is between 0 and 2. So, depending on the singularity of the kernel, we get results which depend exactly on this r. Paraphrasing (this is really not the theorem, just a statement of what we obtain): under the full non-cutoff assumption, r equal to 2 if you want, we provide a new proof of the generation of exponential moments; they are generated up to order gamma. Lu and Mouhot had a different proof; we did it by modifying the partial sum method, which was recently introduced in the work mentioned above. On the other hand, using the modified partial sum method we show propagation of Mittag-Leffler moments beyond gamma; you remember that was the goal from the beginning. When the order is between gamma and 2, we obtain propagation of Mittag-Leffler moments in the following way. When s is between gamma and 1, we can allow the full non-cutoff assumption; you can afford full non-cutoff if you are not too ambitious, since you only get s up to 1, and keep in mind s equal to 2 corresponds to the Gaussian. When s is between 1 and 2, we have to assume a little more than non-cutoff; I am calling it a stronger non-cutoff assumption, and it is exactly the assumption that interpolates, in terms of this power r, between cutoff and non-cutoff. I am moving slowly, and I do not want to give you the whole proof, but I want to explain that it is not that we were bored and wanted to read about Mittag-Leffler functions; I want to tell you why they appear. So I will show you just the pieces of the proof that motivated introducing them. The proof goes as follows: you study the partial sums of the series whose convergence you want to establish, and the idea is to derive an inequality for the polynomial moments. But what is very crucial:
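The angular integrability scale just described can be written schematically as follows (the precise weight is in the paper; this is my shorthand): r = 0 recovers Grad's cutoff, r = 2 is the standard non-cutoff assumption, and intermediate r interpolates between them:

```latex
\int_0^{\pi/2} b(\cos\theta)\,\sin^{r}\theta\;\sin^{d-2}\theta \, d\theta < \infty, \qquad 0 < r \le 2 .
```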
here we are in the non-cutoff case, so we cannot write Q as gain minus loss; we have to go around that and exploit some cancellation. That is the first step. The second step, based on the ODE for polynomial moments, is to push that to get an ODE for the partial sums E_n. Concerning the cancellation, let me just say the following: what is important is to think about Q in the weak formulation. In the weak formulation you have a test function phi (I just copied that here), and the idea is basically to use a Taylor expansion of the test function phi, with the hope that maybe something will cancel. And it does. There is something almost like what Nader was calling, and now I am going to exploit the terminology a little, a null form: a certain cancellation which you can observe, which was known before; we are just very lucky to use it. You do the following: you decompose sigma in terms of u hat and omega, going back to theta and omega, and when you do the Taylor expansion of phi, you notice that the first-order term in omega disappears; one term completely drops out, and that is the cancellation. Now I want to tell you why Mittag-Leffler functions. When you study the moment ODE, you end up with something like this: you have, with a negative sign (very important), the polynomial moment of order 2q plus gamma, you have the moment of order 2q, and you have this horrible-looking binomial sum of products of moments, which carries a factor of q times (q minus 1) that bothers us a lot. Compared to the Grad cutoff case, where they did not have this q times (q minus 1) at all, we need to somehow figure out how to lower the power of q, or get
rid of it completely. So what do you do? This is not as bad as it looks. Going back to Bobylev, the trick is to divide by Gamma functions. If you divide by Gamma functions, you also multiply by Gamma functions, and multiplying two Gamma functions produces a Beta function times the Gamma function of the sum. The point is that there is a certain property of Beta functions: you can ignore everything here except the term in red, and you can prove that this red term has decay of order 1 over q to the power 1 plus a, but only when a is bigger than 1. So we could get decay in the sums if we could afford a bigger than 1. Now, looking at the role of a: a different from 1 is exactly what distinguishes the Mittag-Leffler function from the exponential function. I will not say anything else about this; I just wanted to show you how we got there. In the next 15 minutes I can tell you a little about what we did next, once we had some information about exponentially weighted L1 norms in the non-cutoff case. First, a little about the general framework of the L-infinity theory. There are many more works, but I want to point out two. The first is in the Grad cutoff case: the work of Gamba, Panferov, and Villani, who show that if the initial data is below a Gaussian, then the solution remains below a possibly different Gaussian. So it is propagation in time of pointwise bounds, of L-infinity bounds, on solutions of the homogeneous Boltzmann equation in the cutoff case; and, what I forgot to mention, exponentially weighted L-infinity bounds: not only do you propagate the L-infinity norm, you propagate the L-infinity norm weighted by a
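The Beta-function decay the speaker points at can be checked numerically. Multiplying and dividing by Gamma functions uses Γ(x)Γ(y) = B(x, y)Γ(x + y), and the claim is that the resulting Beta factors decay like q^(-(1+a)), which is usable decay precisely when a > 1. A sketch under my own choice of parameters (a = 1.5, i.e. a = 2/s with s = 4/3 < 2, and the k = 1 term of the binomial sum):

```python
import math

def beta(x, y):
    # B(x, y) = Gamma(x) * Gamma(y) / Gamma(x + y), via lgamma to avoid overflow
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

a = 1.5  # a = 2/s > 1, the regime where the decay is strong enough
# The dominant (k = 1) Beta factor behaves like q^(-(1 + a)):
for q in (10, 20, 40, 80):
    v = beta(a + 1, a * (q - 1) + 1)
    print(q, v * q ** (1 + a))  # roughly constant, so B ~ C * q^(-(1 + a))
```

The compensated values settle toward a constant as q doubles, illustrating the q^(-(1+a)) decay used to tame the q(q-1) factor.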
Gaussian. An interesting aspect of this work, and maybe from the elliptic or some other point of view it is natural, but a point I want to mention: they obtained propagation of weighted L-infinity bounds using propagation of weighted L1 bounds, some sort of lifting, a Nash-Moser-iteration-type argument. Having weighted L1 bounds, in the context of Gaussian weights, they were able to push that up and get L-infinity. On the other hand, for L-infinity bounds where you do not care about weights, there are results, and among them I want to point out a very recent work of Silvestre, I think his first paper on the Boltzmann equation. He was motivated by his background in integro-differential equations and thought about the Boltzmann equation from that point of view; he wanted to study the regularity of Boltzmann, and among other things he obtained an L-infinity upper bound for sufficiently nice solutions of the Boltzmann equation. He presents this method as an alternative to harmonic-analysis and Fourier methods for getting certain coercive estimates and smoothing in the non-cutoff case. So, to summarize: Silvestre is in the non-cutoff case, L-infinity bounds but no weights; on the other hand, in the cutoff case, they considered weighted L-infinity bounds obtained thanks to L1 bounds. So the question we asked, which seemed natural in view of our recent results on propagation of exponential or Mittag-Leffler-type moments in the non-cutoff case, is whether one can propagate in time pointwise exponentially weighted bounds in this setting; in other words, extend Gamba-Panferov-Villani to the non-cutoff case. If you already have propagation or creation of exponentially weighted L1 norms, can you push that and show: if your solution is initially bounded by a Gaussian, or by an exponential of a certain order (let us give ourselves that freedom), can
you show that at later times the solution stays below an exponential of the same order, but maybe with a slightly different constant; you can shrink the rate and so on. That would be very nice: you start below an exponential and you stay below an exponential, and the exponential is relevant here because it reflects the steady state. So what did we try to do? First we tried to generalize Gamba-Panferov-Villani. As I said, they use propagation of exponentially weighted L1 bounds, which we had, so we said, let's try that. They also use a comparison principle for the linearized Boltzmann equation, and their proof of the comparison principle more or less works in the non-cutoff case; they present it under cutoff, but looking at it a little, it works without cutoff. So we said: we have some sort of comparison principle, we have the L1 bounds, can we make this work? Unfortunately you cannot just push Gamba-Panferov-Villani through immediately, or at least we were not able to: when they apply the comparison principle, to the solution of the Boltzmann equation and to the Maxwellian itself, they apply it in a fashion that really uses Grad's cutoff. In other words, they write the operator Q as gain minus loss, and then they study the gain term using Povzner-type estimates and so on; they really split it, and if you are in the non-cutoff case, that starting point kills you: you just cannot split them, you have to do something else, because the two pieces are not finite. The way we decided to go around that obstacle, the remedy, is that we modified Silvestre's approach, which considered unweighted L-infinity bounds; we now consider weighted L-infinity bounds to get this result. This is joint work with Irene and Maja. We still consider the spatially homogeneous Boltzmann equation without Grad's cutoff assumption, and show propagation of upper bounds on exponentially weighted L-infinity norms of certain nice solutions, as you will see.
Let me put this up, without too many details, to tell you what we do, and let me introduce notation for the notions I already had during the talk. If you remember, script M was the exponential moment of f, which is really the exponentially weighted L1 norm, so I will use the natural notation for that, and in the same way I will use the exponentially weighted L-infinity norm. Our L-infinity result, stated loosely, without any details, and in quotation marks, is the following: exponentially weighted L1 implies exponentially weighted L-infinity, in the sense that propagation of exponentially weighted L1 results implies, through the proof, propagation of exponentially weighted L-infinity results. More precisely, we show propagation in a linear fashion: if you track the bounds in the proof, you can show that the exponentially weighted L-infinity norm is bounded, with trackable constants, by the exponentially weighted L1 norm. And I want to point out one thing: we cannot do it for arbitrary weak solutions; we have to assume the solution is nice in a certain sense, as was the case in the work of Silvestre. One open question related to that, which might be interesting for people who study well-posedness of the Boltzmann equation, is to get existence of those nice solutions, which achieve a certain maximum, as you will see in a few moments. Now a little more, a quasi-statement of the result. What is the kernel we consider? Suppose f is nice (I will say a bit about this niceness in a moment), and assume the angular kernel looks as follows: b multiplied by sine to the power d minus 2 behaves like theta to the power minus 1 minus nu. If you move things to the right-hand side, you can recognize that this is the inverse power law case; for example, this is a pure non-cutoff case. And,
Just going back to something I mentioned at the beginning: the derivation of the Boltzmann equation from particle systems in the non-cut-off case, the major example in the books being the inverse power law, is a very open question unless you make some other assumptions. So now we assume the kernel is of inverse power law type, hence non-cut-off, and suppose that you have propagation of L1 exponentially weighted norms. What do I mean by propagation? Initially we have an L1-in-velocity norm where the weight is an exponential depending on a rate alpha 1 and |v| to the s, and you assume that, having this initially, the L1 exponentially weighted norm remains bounded at all later times. Notice that the order s is the same initially and at later times; it is really propagation of exponential norms of the same order s, though the rate alpha can be different. Also assume that the initial data lies below an exponential of order s; in other words, assume that f0 is bounded by a certain exponential function. So: you assume propagation of L1 exponentially weighted bounds, you are in the non-cut-off spatially homogeneous case, and you assume the initial datum is below a certain exponential. What we prove is that at all later times f, in the L infinity norm in velocity weighted by an exponential of the same order s but with a possibly different rate alpha, is bounded by that L1 norm. If you think about it, you need propagation of L1 norms as an ingredient, so we really work in a context where we have propagation of L1 norms. Then, for nice solutions in the non-cut-off case: if initially you are bounded by an exponential of a certain order, you stay bounded by an exponential, maybe with a slightly different rate, but of the same order. Okay. Now, a few more things; what do I want to tell you? I have six minutes. Okay, so I want to tell you a little about the proof. It is a contradiction setup.
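A quasi-statement of the result, in my own compressed notation; the exact hypotheses and constants are in the paper:

```latex
% Assume: non-cut-off kernel as above; propagation of L^1 exponential
% moments of order s, i.e.
%   \| f_0 \|_{L^1(e^{\alpha_1 |v|^s})} < \infty
%   \implies \sup_{t \ge 0} \| f(t) \|_{L^1(e^{\alpha_2 |v|^s})} < \infty ;
% and an initial pointwise bound  f_0(v) \le e^{-\alpha_0 |v|^s}.
% Conclusion: there exist alpha > 0 and C > 0 such that for all t > 0
\[
  \big\| f(t) \big\|_{L^\infty\left(e^{\alpha |v|^s}\right)}
  \;\le\;
  C \, \sup_{t \ge 0} \big\| f(t) \big\|_{L^1\left(e^{\alpha_2 |v|^s}\right)} .
\]
% Same order s on both sides; only the rate alpha changes.
```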
As in the case of Silvestre, whose proof we are modifying, we want to split Q in the right way, and the tools we use are certain changes of variables related to the Carleman representation, the cancellation lemma, and L1 propagation. So let me see. The quantity we want to bound is f divided by M, where, just for this proof, M is the Maxwellian-type weight e to the minus alpha |v| to the s; I call this the exponentially weighted L infinity norm, and we want to prove it is bounded by the right-hand side. Inspired by Silvestre's contradiction argument, we proceed as follows. Initially you know that the bound holds, and by continuity there is an interval of time on which it holds. Assume T0 is the first time the inequality fails. What we need, and this is related to the definition of a nice solution, is to let v0 be the corresponding velocity where the maximum is achieved. Let us pause here: you can ask how you know this maximum is achieved. It is not just f in L infinity, which Silvestre somehow knew, because there are results telling you that f is in the Schwartz class; here we are talking about f being bounded by a certain exponential, and you do not necessarily know that the supremum is attained. So we assume that our solution does attain it, and we pose the construction of such solutions as an open question. Then you can easily show that if equality is achieved for the first time at time T0 and velocity v0, while before there was strict inequality, this gives you a certain lower bound on the change of f at (T0, v0); I do not want to lead us through the details. On the other hand, you have the equation, which tells you that the change of f is given by the operator Q. So if, by using the equation, you can get an upper bound which contradicts this, we would be in business, and that is exactly what we do: we work hard to get an upper bound on the time derivative of f using the equation, and we get a contradiction. So how do we do that?
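The contradiction setup can be sketched as follows; this is again my notation, and the "nice solution" assumption is exactly what makes the supremum below attained:

```latex
% Weight: M(v) = e^{-\alpha |v|^s}. Suppose the bound f(t,v) <= C M(v)
% holds initially and let t_0 be the first time it fails, with the
% maximum attained at some velocity v_0:
\[
  \frac{f(t_0, v_0)}{M(v_0)}
  \;=\; \sup_{v \in \mathbb{R}^d} \frac{f(t_0, v)}{M(v)}
  \;=\; C .
\]
% First crossing at (t_0, v_0) forces a lower bound on the change of f,
%   \partial_t f(t_0, v_0) \ge 0,
% while the equation gives
%   \partial_t f(t_0, v_0) = Q(f,f)(t_0, v_0);
% showing that Q(f,f)(t_0, v_0) < 0 yields the contradiction.
```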
Again, in the non-cut-off case gain minus loss is out of the question, so we cannot do that. Instead we decompose Q, as Silvestre did, into Q1 plus Q2. If you look at this decomposition, there is a certain cancellation; the decomposition rests on the cancellation lemma, which goes back to Alexandre, Desvillettes, Villani and Wennberg, and many other people used it afterwards. But for us that is not enough; we need a more refined decomposition. So what we do is keep Q2 as it was, but decompose Q1 into Q11 and Q12 in a fashion suitable for working with an exponential weight. Basically, you see that there is a certain difference of exponentials, which calls for something like a mean value theorem, and then there are these specific f's and M's at the point where you know the maximum is achieved. So we had to modify the decomposition to accommodate the weight that we have. Okay. After that, what happens is that you can easily see that one of the resulting terms is nonpositive, just by using that the maximum of f over M is achieved at T0 and v0: if you know that this is the maximum, then you know that this term is negative. Then you need to work to show how negative it is compared with the other terms, and here we really use tools inspired by what Luis Silvestre did, coming from integro-differential equations. Okay, I will not tell you anything else about the proof. The paper is about to appear on the arXiv, and we present the result there. Why did I insist that if you have propagation of L1 bounds then you get propagation of L infinity bounds? Because we are really thinking of this as a sort of enhancement: you have propagation of weighted L1, and you get propagation of weighted L infinity. We present the results so as to allow more general weights than just the weight I wrote: the weights can be either exponential functions or Mittag-Leffler functions. Okay, let me finish with some remarks.
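Schematically, the decomposition looks like this; the explicit formulas are my reconstruction following the talk, and the precise splitting of Q1 into Q11 and Q12 involves the weight M and a mean value argument:

```latex
% Silvestre-type splitting, evaluated at the maximum point v_0:
%   Q(f,f)(v_0) = Q_1(f,f)(v_0) + Q_2(f,f)(v_0),
\[
  Q_1(f,f)(v_0)
    = \int_{\mathbb{R}^d} \int_{S^{d-1}}
        f(v_*) \big( f(v') - f(v_0) \big)\, B \, d\sigma \, dv_* ,
  \qquad
  Q_2(f,f)(v_0) = f(v_0)\, (f * S)(v_0) ,
\]
% where Q_2 comes from the cancellation lemma
% (Alexandre--Desvillettes--Villani--Wennberg) with an integrable
% function S. Q_1 is then further split as Q_{11} + Q_{12} by
% inserting the weight M, so that maximality of f/M at v_0 makes
% the leading piece nonpositive.
```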
A few possible further questions related to this. One is to improve our L infinity exponential result by removing the assumption that f is a nice solution. In the Grad's cut-off case you know that if you start below a Maxwellian, a Gaussian, then you stay below a Gaussian; that is Gamba–Panferov–Villani. Can you do something like that in the non-cut-off case, for more general solutions than what we have? It is my understanding that Bob and Maya are looking into that. Then you can also ask about exponential moments for Maxwell molecules, the case when gamma is 0. You can go even further and consider soft potentials; for soft potentials it is actually not clear what the situation should be. Should they propagate? The singularity also gets into the picture, so even in the cut-off case it is not quite clear what the result should be. And there are people who have studied connections between exponential moments and convergence to equilibrium; maybe something like that can also be done in this context. But those are just questions so that you are not bored. Okay, thank you for your attention. Do we have questions or comments? Question: do you anticipate any major difficulties in trying to prove a similar result in the non-spatially-homogeneous case? The non-spatially-homogeneous case: it is my understanding, and I have not worked on that, but as far as I know there are no works on exponential moments for the inhomogeneous case. There are many other results in the inhomogeneous case, related to well-posedness and so on, but as far as I know exponential moments have not been studied there. I think there are difficulties; it is a much more difficult case, but I cannot tell you exactly what the difficulties would be, since I have not studied it yet. Question: what is the indication of the best s you can hope to get in all of these propagation results? This is where we hit the limitations of the techniques.
Something like s equal to 2 is what happens in the cut-off case, but in the non-cut-off case I am not sure, really, what the maximum you can get is. I am not aware of any heuristics; that is a good question. It is not clear, because generation goes with gamma, and then what we have is really like an interpolation between the cut-off and non-cut-off cases. But that is an excellent question; I do not know what it should be. Question: in this situation of exponential weight estimates, how far are we from uniqueness of solutions? Assuming you have a solution with an exponential weight, how far are you from proving uniqueness of the Cauchy problem in the class of exponentially weighted solutions? I do not know. Uniqueness in the non-cut-off case, you are talking about the non-cut-off case? I have not looked at it; I do not know. But it is a good question, because once you have these kinds of stability properties and you ask for existence, that is a very natural thing to ask; I just do not know how far we are. Okay, thank you again.