There is usually time for other things, and no need to hurry. When I was invited to come here and looked at the title of the school's workshop, I realized that, incidentally, I have something to talk about which is nonlocal, which involves probability, and which involves some PDEs; so why not? This is part of a series of works that I started to write alone, since when you are old you begin to think that it is better to do what you like. I started this line of research a long time ago, around the seventies of the last century, when I first came into contact with information theory. At that time I knew very little about fractional calculus; I learned it much later, from a paper by Luis Caffarelli and Juan Luis Vázquez on fractional diffusion. The point I want to arrive at is that in this fractional setting there is an inequality of logarithmic Sobolev type, and maybe it will deserve further investigation. And why does this matter? Since any time you know that a logarithmic Sobolev-type inequality holds, you can improve the results, you can prove that entropies converge, and so on; that was not known in this context. At the end I will give a list of my works on this subject.

So the starting point is the concept of Shannon entropy. Unlike the other lectures, I do not speak about Boltzmann's H functional; but since people working in information theory started from the two fundamental papers by Claude Shannon, they call this the Shannon entropy. If you want to publish in a journal, for example the IEEE Transactions on Information Theory, you have to use this notation, so I will use it here too; in any case, it is the integral of $-f \log f$. More or less, the difference with respect to the H function is that I take the minus sign, so I will see the entropy increasing. This is nothing more. And the entropy satisfies a very nice inequality that was postulated by Shannon and then proved more than ten years later by Stam in his PhD thesis: if you take two independent random vectors, then the entropy power of the sum, or, if you prefer, of the convolution of the two densities in dimension n (here n is the dimension of the space in which you are working), is greater than or equal to the sum of the two entropy powers. This is a nice inequality, and I will explain why: the reason is that you can saturate it in a very easy way.
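For reference, the objects involved are the following; here the entropy power is written with the normalization implicit in the talk (no $1/(2\pi e)$ factor in front), so that for a Gaussian it is linear in the variance:

$$
H(X) = -\int_{\mathbb{R}^n} f(x)\log f(x)\,dx,
\qquad
N(X) = e^{2H(X)/n},
$$

$$
N(X+Y) \;\ge\; N(X) + N(Y) \qquad \text{for independent } X,\ Y.
$$

For a Gaussian vector with covariance $\sigma^2 I$ one has $H = \tfrac{n}{2}\log(2\pi e\,\sigma^2)$, hence $N = 2\pi e\,\sigma^2$.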
You take a Gaussian random vector, a Gaussian density with covariance $\sigma^2$ times the identity, and you plug it into the entropy power; when you evaluate the entropy power, you see that it is linear in the variance. And so the entropy power inequality for the sum of two Gaussians simply says that $a + b = a + b$, which is not difficult to prove. Now, how to prove the inequality itself is a completely different story. It is very nice to look at the whole history, since, you know, $f \log f$ is a bad function to work with. But there is another interesting functional, a quadratic form, which is easier to work with: the Fisher information, defined as the integral of $|\nabla f(x)|^2 / f(x)$ in $dx$; and there is a nice bound between the entropy and the Fisher information that I will explain later on.

So why is this interesting for what I will prove? For this simple reason: there are strong connections between the entropy power inequality and the central limit theorem of probability theory. Take what the central limit theorem says. Here I take independent identically distributed random variables of zero mean (I forgot to say it, but in any case they are centered). You take the sum, divide by $\sqrt{n}$, and then you apply the entropy power inequality with two variables: you immediately find that $H(S_2) \ge H(S_1)$. And then you can repeat this idea, proving that the entropy is increasing along the subsequence $S_{2^k}$. So the entropy power inequality seems to give an entropic proof of the central limit theorem. And this has a lot of connections with, let us say, Boltzmann theory. In this case it is easy to verify that, taking the random variables centered, the sequence $S_n$ preserves mean and variance. In other words, you have a sequence with all the conservations of the solution of the Boltzmann equation, which, as you saw in this morning's lectures, exhibits relaxation to equilibrium in a situation in which the solution of the equation preserves mass, momentum and energy. And in which way do you prove convergence to equilibrium there? You prove it using the fact that the entropy is monotonically increasing. So one is tempted to think that the same still holds here. This is a very old problem, by the way; the first time that I touched it was reading a paper by Barron, I think in maybe '75 or something like that. So it is a very old problem, but interestingly enough this conjecture was very difficult to prove: for example, it was very difficult to pass from two to three summands. Only in 2004 did Artstein, Ball, Barthe and Naor succeed. And then, once the door had been opened, simpler proofs came; for example, Madiman and Barron in 2007 gave a really nice proof, very simple, that lets one understand why this phenomenon works. And once again they used Fisher information bounds. So, in a sense, what you are doing with the entropy is really easier if you work with the Fisher information. In kinetic theory you know, or at least there are situations in which you know, that there is decay towards the equilibrium at a certain rate. In this case there is a decay of $H(S_n)$ towards the entropy of the Gaussian, which is the upper bound.
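Written out, the monotonicity step along the doubling subsequence uses only the entropy power inequality and the scaling property $N(aX) = a^2 N(X)$: with $U$, $V$ independent copies of $S_n$,

$$
S_{2n} \stackrel{d}{=} \frac{U+V}{\sqrt{2}}
\quad\Longrightarrow\quad
N(S_{2n}) = \tfrac{1}{2} N(U+V) \;\ge\; \tfrac{1}{2}\big(N(U) + N(V)\big) = N(S_n),
$$

hence $H(S_{2^{k+1}}) \ge H(S_{2^k})$ for every $k$.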
And what is important is to quantify this gap. People started to work on this, and there are interesting results for log-concave densities: in a sense, log-concave densities are very close to the Gaussian densities, and so you have more instruments to work with. Now, the relationship between the entropy and the Fisher information is another thing that is well known. You take the heat equation, say with diffusion coefficient equal to one, and then compute the evolution in time of the entropy functional: its time derivative is the Fisher information. Stam used this idea in '59, but the funny story is that in '66 there is a nice paper by Henry McKean, concerned with Kac's caricature of a Maxwellian gas, in which he studied the evolution of the successive derivatives of the entropy functional along the heat equation. He did it for the simple reason that he wanted to test a conjecture of that time, completely false as it turned out, that the successive time derivatives of the H functional alternate in sign. He succeeded up to order two: the first derivative is the Fisher information, and $(-1)^k$ times the $k$-th derivative is nonnegative for $k$ up to two; beyond that it fails. And now we know that for log-concave densities you can go up to the third order, but no more. Taking successive derivatives, the situation becomes more and more intricate, but in any case everything is expressed in terms of the successive derivatives of the logarithm of $f$. The first and the second derivative, that is, the Fisher information and its derivative, are related, and there is another interesting inequality that couples them. So it seems that any time you take the heat equation and successive derivatives of the H functional, you recover interesting inequalities.

So why is the Fisher information so nice? For the simple reason that, for independent variables, the Fisher information of the sum satisfies $I(X+Y) \le \delta^2 I(X) + (1-\delta)^2 I(Y)$ for any two nonnegative constants $\delta$ and $1-\delta$ with sum equal to one. And if you optimize with respect to this constant, then you get $1/I(X+Y) \ge 1/I(X) + 1/I(Y)$. This is called the Blachman-Stam inequality, and it is the counterpart of the entropy power inequality: since, if you plug in two Gaussian densities, $1/I$ of a Gaussian is again linear in the variance, it is once more $a + b = a + b$.

This is what we have, and let me spend five minutes to describe the consequences. Take the entropy power inequality where one of the two random variables is normal with variance $2t$. The entropy power of that normal density is linear in $t$, with slope $4\pi e$, by the definition of the entropy power. Put in this way: subtract, divide by $t$, and the quotient is greater than or equal to $4\pi e$; then take $t \to 0$, and you get an inequality that relates the entropy power to the Fisher information. It is very powerful: for example, it implies the logarithmic Sobolev inequality, with a remainder, in a way that is very easy to derive.
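Explicitly, the limit argument is the following; $Z$ denotes a standard normal vector, so that $X + \sqrt{2t}\,Z$ has a density solving the heat equation $\partial_t u = \Delta u$, and de Bruijn's identity $\frac{d}{dt} H = I$ applies:

$$
N\big(X + \sqrt{2t}\,Z\big) \;\ge\; N(X) + 4\pi e\, t,
\qquad
\frac{d}{dt}\Big|_{t=0} N\big(X + \sqrt{2t}\,Z\big) = \frac{2}{n}\, I(X)\, N(X),
$$

so that

$$
I(X)\, N(X) \;\ge\; 2\pi e\, n,
$$

the isoperimetric inequality for entropies.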
So I wrote a short paper saying that one of the consequences is that the entropy power inequality includes the logarithmic Sobolev inequality, but with a remainder, which is still stronger, and this is nice. It is also surprising, since Blachman completed the proof of Stam's inequality in 1965, so the logarithmic Sobolev inequality was known, at least in this form, ten years before the paper by Gross. And if you do the same on the Blachman-Stam inequality (you plug in the sum, take one of the two addends normal, use the fact that for the Gaussian this quantity is linear, and pass to the limit) you obtain another inequality, which relates the first derivative to the second one. And this gives a simple proof of what Costa proved with a long proof in 1985: the entropy power is concave in $t$, and linearity is achieved only when you are Gaussian. So plenty of inequalities. And this can be generalized, changing the entropy, to nonlinear diffusion equations, obtaining as a consequence not the logarithmic Sobolev inequality but Gagliardo-Nirenberg inequalities; this I did several years ago.

So this is the picture in the nice case, the case in which the object is the heat equation and the normal density, and here we know almost everything: any time you know that there exists something like a logarithmic Sobolev inequality, you can pass from weak results to $L^1$ results and so on. What was not so known is the situation in the case of the central limit theorem for stable laws. What does the central limit theorem for stable laws say? You take the sum as before, but instead of dividing by $n^{1/2}$ you divide by $n^{1/\lambda}$, where $\lambda$ is, in our case, a number between 1 and 2. The value $\lambda = 2$ corresponds to the Gaussian case, and what is known is that if you start close, suitably close, to a Lévy symmetric stable variable, so that you are in the so-called domain of attraction of the stable variable, then the law of $T_n$ converges towards the law of $Z_\lambda$ weakly. A Lévy symmetric stable law is defined in Fourier variables: its Fourier transform is $e^{-|\xi|^\lambda}$ (I normalize the constant to one, to be clear). When $\lambda$ is equal to two you have the Gaussian; when $\lambda$ is less than two, the big problem is that a Lévy stable law has no second moment. There is no closed representation in the physical space, but in any case it is known that you have only a certain number of moments which are bounded, so it has heavy tails, and this is the reason why you have to choose an initial random variable with more or less the same tails at infinity. This is the analogue of the Boltzmann equation for dissipative collisions, which has been well studied. And while the Gaussian density is related to the heat equation, interestingly enough the Lévy stable densities are related to linear fractional diffusion equations; I will be more precise later on. So, in the central limit theorem you have the monotonicity along the normalized sums, and the main idea is to generalize the idea of using entropies to the setting of the central limit theorem for stable laws. So let us revisit the idea that led to the proof of monotonicity by Artstein, Ball, Barthe and Naor; I will use more or less the same idea, in the form given by Madiman and Barron in 2007.
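In formulas, the setting is the following, with the normalization of the talk for the stable law:

$$
T_n = \frac{1}{n^{1/\lambda}} \sum_{j=1}^{n} X_j,
\qquad
\widehat{\omega}(\xi) = e^{-|\xi|^{\lambda}},
\qquad 1 < \lambda \le 2,
$$

where $\omega$ denotes the density of the Lévy symmetric stable law $Z_\lambda$, and $T_n \to Z_\lambda$ weakly whenever the i.i.d. summands $X_j$ belong to its domain of attraction.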
In theoretical statistics (sorry for this plethora of notions) there is an interesting concept which is called the score. The score: one takes an observation, then one takes the logarithm of the density, and one takes its variation with respect to some translation, more or less; at the parameter value zero, what remains is the score. I am in one dimension, to be clear; if not, there is a gradient here. In one dimension, the score of a random variable with density $f$ is $f'(X)/f(X)$, and if you try, you see that the score has zero mean, while its variance is the Fisher information. So the Fisher information is the variance of the score of a random variable. And if you have two random variables with differentiable density functions $f$ and $g$, then you can consider the relative score of one with respect to the other by taking the difference between the two scores, $f'(x)/f(x) - g'(x)/g(x)$; the relative Fisher information is its variance.

Now, the score has this interesting property: it is linear if and only if the random variable is Gaussian. A normal density is characterized by a linear score, which is $-x$ divided by the variance. So, among all probability distributions, there is only one which has a linear score, which is the Gaussian. And the relative score with respect to the Gaussian you then write like the score plus a linear term, $f'/f + x/\sigma^2$; and for the reason that this added part is linear, you can use the same inequalities with or without it: the Blachman-Stam inequality still holds if you have the addition of a linear part, and all the other inequalities hold if you add a linear part. This is a peculiarity of the fact that the Gaussian has a linear score. And the relative Fisher information is written in this way: you take the score, then you take $x/\sigma^2$, which is the score of the Gaussian, you take the square of the sum, and you multiply by the density $f(x)$ and integrate. If you write it in this way, you see that the relative Fisher information is greater than or equal to zero, and is equal to zero if and only if $X$ is the centered Gaussian variable of variance $\sigma^2$.

And now, if you think in this way, taking the relative Fisher information, nothing is against extending the same concept to cover fractional derivatives. In other words, you take a random variable which is distributed with a certain probability density, and suppose this probability density has a well-defined fractional derivative of order $\alpha$; then, instead of considering the linear score, let us consider a fractional score, let us say the fractional derivative of $f$ divided by $f$. This is the same concept. And as I said before, I learned about fractional calculus after the reading of a paper of yours with Vázquez, and so I took the definition from your appendix, which I have given here: you take $f$ convolved with a singular kernel, with a constant which is such that, when you pass to Fourier transforms, the fractional derivative of order $\alpha$ acts as a multiplier of order $\alpha$. And some sort of miracle appears: if you define the fractional derivative in this usual way, then, at variance with the classical case, when you consider the fractional score of $X$, this is linear if and only if $X$ has the Lévy stable distribution of order $\alpha + 1$. So this is a direct transposition of the idea of using the linear score and the Gaussian, which is characterized by a linear score, to the fractional score and the Lévy density, which possesses a linear fractional score. And why this? It is very simple: you want to identify when the score $\rho_{\alpha+1}$ is linear.
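To fix the notation (one dimension; $\mathcal{D}_\alpha$ denotes the fractional derivative of order $\alpha \in (0,1)$ defined by the convolution just mentioned, that is, a Fourier multiplier of order $\alpha$):

$$
\rho(X) = \frac{f'(X)}{f(X)},
\qquad
E[\rho(X)] = 0,
\qquad
I(X) = E\big[\rho(X)^2\big] = \int_{\mathbb{R}} \frac{f'(x)^2}{f(x)}\,dx,
$$

$$
I(X \mid G) = \int_{\mathbb{R}} \Big(\frac{f'(x)}{f(x)} + \frac{x}{\sigma^2}\Big)^2 f(x)\,dx \;\ge\; 0,
\qquad
\rho_{\alpha+1}(X) = \frac{\mathcal{D}_\alpha f(X)}{f(X)},
$$

where $G$ is the centered Gaussian of variance $\sigma^2$; $I(X \mid G) = 0$ if and only if $X$ is that Gaussian, and $\rho_{\alpha+1}$ is linear if and only if $X$ is Lévy stable of order $\alpha + 1$.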
Linearity means that the fractional derivative of $f$ is $-c\,x f(x)$. Then use the Fourier transform: the left-hand side gives you the fractional multiplier applied to $\hat f$, the right-hand side gives you the derivative of $\hat f$ with respect to $\xi$; and then you integrate, and the logarithm of $\hat f$ satisfies exactly the relationship defining the stable law. So it is very easy. Why not define the fractional score in absolute terms, and only the relative fractional score? For the simple reason that if you take a Lévy variable, the second moment is unbounded, so you cannot consider the plain fractional Fisher information: if you take the fractional score of a Lévy density and compute its second moment, you discover that it is not bounded; it is always unbounded. So, in other words, this concept has a meaning only if you are in a neighborhood, in a suitable subset, of the Lévy stable law, relative to it. Let me summarize what I did up to now: the relative fractional Fisher information is always greater than or equal to zero, and it is equal to zero if and only if the variable is the Lévy symmetric stable distribution of order $\lambda$. And what happens, at variance with the relative standard Fisher information: $I_\lambda$ is well defined when the probability density is suitably close to the Lévy stable law, and typically its domain of finiteness lies in a subset of the domain of attraction. I do not know whether the set on which the fractional Fisher information is finite coincides with the domain of attraction; this is another open problem; I tried many times, but I did not succeed. In any case, the concept of fractional score can be generalized simply by changing the scaling: this corresponds to a Lévy-type density with a different coefficient in front of the exponent of the Fourier transform. Let us call this parameter $\nu$, and write $I_{\lambda,\nu}$ for the corresponding fractional Fisher information, so that $I_{\lambda,1} = I_\lambda$. I do not know if it is easy to follow, but in any case it is a definition.

And what can be proved is this one; I will skip all the details of the proof, clearly, but you can find them in the papers, and I think it is enough even without details. Take two random variables $X_1$ and $X_2$, independent, with smooth densities, and let $\rho_1$, $\rho_2$ denote their fractional scores; then take a number $\lambda$ between one and two and a positive constant $\delta$ between zero and one. Then you can express the relative fractional score of the sum in a simple way: it is the conditional expectation of $\delta$ times the score of the first (with parameter $\delta$) plus $1 - \delta$ times the score of the second (with parameter $1 - \delta$), conditioned on $X_1 + X_2 = x$. If you look at the proof, it is very simple, and it is a universal proof: it does not depend on the fact that you are using the fractional score or the classical score; it is simply an easy consequence of the definition. And why is this interesting? It is interesting for the simple reason that the fractional Fisher information is a second moment, a variance; so you can bound the mean square of the score of the sum, by the Cauchy-Schwarz inequality, by $\delta^2 I_{\lambda,\delta}(X_1) + (1-\delta)^2 I_{\lambda,1-\delta}(X_2)$. And then there is a scaling property, which is funny: when you scale the variable by $v$, you can pull out a factor $v^{-2(1 - 1/\lambda)}$.
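Schematically, suppressing some of the parameter bookkeeping which is in the papers (here $\lambda = \alpha + 1$, and $\rho_\lambda$ denotes the relative fractional score), the two steps just described read:

$$
\rho_\lambda(X_1 + X_2)(x) \;=\; E\Big[\delta\,\rho_{\lambda,\delta}(X_1) + (1-\delta)\,\rho_{\lambda,1-\delta}(X_2) \;\Big|\; X_1 + X_2 = x\Big],
$$

$$
I_\lambda(X_1 + X_2) \;\le\; \delta^2\, I_{\lambda,\delta}(X_1) + (1-\delta)^2\, I_{\lambda,1-\delta}(X_2),
$$

the second line following from the first by Jensen's inequality for conditional expectations.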
Note that when $\lambda$ is equal to two, so that we are in the Gaussian setting, this factor is $v^{-1}$; but in all the other cases you have a sensible improvement, and the improvement is written here. Take a sum of two independent random variables, take the fractional Fisher information, and consider this quantity, which is bounded; then, when you plug in the right constants, $\delta^{1/\lambda}$ and $(1-\delta)^{1/\lambda}$, the relative fractional Fisher information of the weighted sum is less than or equal to the convex combination $\delta\, I_\lambda(X_1) + (1-\delta)\, I_\lambda(X_2)$, with equality when both $X_1$ and $X_2$ are Lévy variables of exponent $\lambda$. This is the statement corresponding to the classical one: the Fisher information of the weighted sum is less than or equal to the convex combination of the two Fisher informations. But once I have a Blachman-Stam type inequality, then I can prove more, since I have another argument, which goes back to 1948, the same year in which Shannon postulated his inequality. It has to do with the theory of the so-called U-statistics, but in any case, more or less it says this simple fact: when you have a number $n$ of random variables which are independent, suppose that instead of knowing the whole phenomenon you know only sub-phenomena of size $m$: instead of knowing the variables from one to $n$, you know only $m$ of them at a time, for all choices; and then suppose that you have a function, which is called $\varphi$ here, which acts on subsets of that length. Then, when you take the variance of the average of $\varphi$ over all these subsets, this is less than or equal to $m/n$ times the variance of $\varphi$: you have some sort of reduction of the second moment. And this is the fundamental tool for improving the monotonicity of the Fisher information: you apply this variance drop inequality of Hoeffding to the relative score, and then you prove simply this: you take $T_n$, which is the sum of the random variables up to $n$ divided by $n^{1/\lambda}$, and you prove that the $\lambda$-Fisher information of $T_n$ is decreasing, with the bound $((n-1)/n)^{(2-\lambda)/\lambda}$ at each step. In this setting you have a rate: we have a rate which is $(2-\lambda)/\lambda$, while when you plug in $\lambda$ equal to two you only have monotonicity. So this makes a big difference, which more or less is expressed here: you have convergence of $I_\lambda(T_n)$ to zero at a certain rate, which means that the density is converging to the stable density in relative fractional Fisher information, and you have the rate. And this makes a strong difference between the classical central limit theorem and the central limit theorem for stable laws. In the first case you have a very large domain of attraction: it is enough that the random variable has a second moment which is bounded; but you have a very slow convergence in relative Fisher information, so only monotonicity is guaranteed. Here you take a very restricted domain of attraction, but in this case the convergence in relative fractional Fisher information is very strong: you have the rate.
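In formulas, the variance drop inequality and the resulting decay have the following shape (constants suppressed; the precise statements are in the papers listed at the end). For a symmetric function $\varphi$ of $m$ out of $n$ independent variables,

$$
\mathrm{Var}\Big[\binom{n}{m}^{-1} \sum_{|S| = m} \varphi(X_S)\Big] \;\le\; \frac{m}{n}\, \mathrm{Var}[\varphi],
$$

and applying this to the relative fractional score one gets, step by step and then telescoping,

$$
I_\lambda(T_n \mid Z_\lambda) \;\le\; \Big(\frac{n-1}{n}\Big)^{\frac{2-\lambda}{\lambda}} I_\lambda(T_{n-1} \mid Z_\lambda)
\;\le\; \cdots \;\le\; \frac{1}{n^{(2-\lambda)/\lambda}}\, I_\lambda(X_1 \mid Z_\lambda).
$$

For $\lambda = 2$ the exponent vanishes and only monotonicity survives, exactly as in the classical entropic central limit theorem.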
When I got this result, the first thing that I started to think about is whether the domain of finiteness of the relative fractional Fisher information is empty: maybe it is a nice concept, but there are no densities which are inside, and then it happens that it is a nice concept which is completely useless. And so I tried to find an example of a density which is inside. If you look at the first paper in which I derived this, you take as representative a distribution which is called the Linnik distribution: it has no closed expression in the physical space, but in the Fourier space it is $1/(1 + |\xi|^\lambda)$, some sort of rational version of $e^{-|\xi|^\lambda}$, and this is a good characteristic function for all $\lambda$. When $\lambda$ is bigger than one, you know that this function is in $L^1(\mathbb{R})$, and then, when you apply the inversion theorem, you know that $p_\lambda$ is really a probability density function. So you can compute the relative fractional Fisher information, and it can be proven that it is finite. There are computations to do: you have a concept which lives in the physical space and properties which are known in the Fourier space, and it is not obvious that you can pass from one to the other, so it requires a certain amount of computation; but at the end I succeeded in proving that the set of probability density functions with finite relative fractional Fisher information is not empty. And so I was happy; maybe not too much, it seems, at that time. Because what does convergence in fractional Fisher information mean? Does it mean convergence in $L^1$? I did not know, and this is the reason why I started to think further: in the classical case one can pass from the Fisher information to the relative entropy by means of the entropy power inequality; here the entropy power inequality is not known, but can we save something?

And so I started to think about it, using a variant of the classical Fokker-Planck equation. I do not know if it is well known, but if you want to prove the logarithmic Sobolev inequality, you can use exactly the entropy-entropy production method: you take the derivative of the relative entropy along the Fokker-Planck equation, then you take the derivative of the entropy production, you establish a relationship between the two, and then you prove the logarithmic Sobolev inequality. So the first attempt was to take this equation, in which, instead of one ordinary derivative, there is the fractional derivative of order $\lambda - 1$. And if you note, dividing by $f$ you have exactly the fractional score: the classical Fokker-Planck equation is the derivative with respect to $x$ of $f$ times the relative score, and this one is the derivative with respect to $x$ of $f$ times the relative fractional score. And since this vanishes only when you are at a Lévy distribution, let us call $\omega(x)$ this Lévy distribution: $\omega$ is the stationary solution of this Fokker-Planck equation. So this fractional Fokker-Planck equation has a Lévy distribution as stationary solution. And so, doing the same computation that you do in the classical case, you find that the solution is the law of a random variable $X_t$ which is given by the sum of two pieces, the initial variable $Y$ and an independent $Z_\lambda$, multiplied by two functions of time, $\alpha(t) = e^{-t/\lambda}$ and $\beta(t) = (1 - e^{-t})^{1/\lambda}$, which satisfy the relationship $\alpha(t)^\lambda + \beta(t)^\lambda = 1$. And if you put $\lambda$ equal to two, then you have the exact formula for the solution of the classical Fokker-Planck equation: there, the solution is obtained as a convolution of two densities and interpolates continuously between the initial datum and the Gaussian; in this case you start from an initial datum and interpolate with the Lévy distribution, and you can write the solution in this way: the variable $X_t$ has this density, and the density of $X_t$ solves the fractional Fokker-Planck equation. And similarly to what I said before (I will not insist too much, time is running), write this in Fourier variables.
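Concretely, with the normalization suggested by the functions $\alpha(t)$ and $\beta(t)$ above (this is a sketch: signs and constants should be checked against the papers), the fractional Fokker-Planck equation and its solution by characteristics in Fourier variables read:

$$
\frac{\partial f}{\partial t} = \frac{\partial}{\partial x}\Big(\mathcal{D}_{\lambda-1} f + \frac{x}{\lambda}\, f\Big)
\qquad\Longleftrightarrow\qquad
\frac{\partial \hat f}{\partial t} = -|\xi|^{\lambda}\,\hat f - \frac{\xi}{\lambda}\,\frac{\partial \hat f}{\partial \xi},
$$

$$
\hat f(\xi, t) = \hat f_0\big(\xi\, e^{-t/\lambda}\big)\, e^{-|\xi|^{\lambda}(1 - e^{-t})}
\qquad\Longleftrightarrow\qquad
X_t \stackrel{d}{=} e^{-t/\lambda}\, Y + \big(1 - e^{-t}\big)^{1/\lambda} Z_\lambda,
$$

so that $f(\cdot, t) \to \omega$ as $t \to \infty$.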
If you write it in Fourier variables and you integrate along characteristics, you see that this is exactly what you find in Fourier; and then you pass back to the physical space and you see that the solution that I wrote is the right one. Interestingly enough, the Lévy density is invariant under this evolution. Second step: use the relative (to $Z_\lambda$) entropy, which is written here, and try to compute its derivative, knowing that the density of $X_t$ is the solution of the fractional Fokker-Planck equation. And what happens: if you take an initial density whose relative entropy is finite, then, taking the density of $X_t$ as the solution of the fractional Fokker-Planck equation, the relative entropy is monotonically decreasing; this comes easily from the formula for the solution. But, interestingly enough, if the density belongs to the domain of normal attraction, then, as time goes to infinity, $H(f(t) \mid \omega)$ goes to zero. So this is a result on the Fokker-Planck equation itself: from one side, you start with a density which is in the domain of attraction of the Lévy distribution, and you prove that this relative entropy, without rate, converges to zero. It is one line of computation. You take this equation and you write the Fokker-Planck equation in this way, with the term $+\,x/\lambda$; and $x/\lambda$ is exactly minus the fractional score of the Lévy distribution, so what appears is simply the difference of the two, the relative fractional score. And then you take the derivative of the relative entropy: the derivative can be written in this way, and in this way we know that it is decreasing; there is a sign, the entropy production is a nonnegative function. And then you apply Cauchy-Schwarz. The entropy production is the integral of $f$ times the product of two parts: one is the classical relative score, the other is the relative fractional score between $f$ and $\omega$. When I use Cauchy-Schwarz, I get the classical Fisher information to the power one half times the fractional Fisher information to the power one half. But then, on the fractional Fisher information, I use the fact that I know the inequality for sums, and I kill one of the two terms, since one of the two addends is $\omega$ (the second component of $X_t$ is the stable law), and the relative fractional Fisher information of $\omega$ is zero. And so I get that $I_\lambda(f(t) \mid \omega)$ is less than or equal to $\alpha(t)^2$ times the relative fractional Fisher information of the initial datum, with $\alpha(t) = e^{-t/\lambda}$: this gives the exponential decay of the fractional Fisher information, starting from the fractional Fisher information of the initial datum. And then I use another fact: since $\alpha(t)^\lambda + \beta(t)^\lambda = 1$, the maximum of the two is greater than or equal to one half, and from this I obtain a bound, uniform in time, on the classical Fisher information $I(X_t)$, in terms of $I(Y)$ and $I(Z_\lambda)$; it is well known that the classical Fisher information of the stable law is finite, and there are papers, for example by Bobkov and coauthors, in which its value is estimated. And so what happens is that the entropy production is less than or equal to a function which is decaying exponentially in time, times a constant which depends only on the initial datum and on the Fisher information of the stable law. And if you have followed me up to now, then you integrate from zero to infinity: the relative entropy converges to zero, and you obtain this inequality: the relative entropy between $X$ and $Z_\lambda$ is less than or equal to some constant (let us say, not exactly a constant, since there is a dependence on $X$) times the fractional Fisher information to the power one half. And so, any time that I know the decay of $I_\lambda$ of $X$, and I know that this constant is uniformly bounded, then I know the decay of the entropy functional. This is some sort of logarithmic Sobolev inequality: the right-hand side vanishes only when $X$ is equal to $Z_\lambda$.
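Putting the pieces together, the entropy production argument has this shape (a sketch; $I$ denotes the classical Fisher information, and the constant $C(X)$ collects the uniform bound on $I(X_t)$ together with the time integral of the exponential):

$$
\frac{d}{dt} H\big(f(t) \mid \omega\big) = -\,D_\lambda\big(f(t)\big) \;\le\; 0,
\qquad
D_\lambda(f) \;\le\; I(f)^{1/2}\, I_\lambda(f \mid \omega)^{1/2},
$$

$$
I_\lambda\big(f(t) \mid \omega\big) \;\le\; e^{-2t/\lambda}\, I_\lambda(X \mid Z_\lambda)
\quad\Longrightarrow\quad
H(X \mid Z_\lambda) = \int_0^\infty D_\lambda\big(f(t)\big)\,dt \;\le\; C(X)\, I_\lambda(X \mid Z_\lambda)^{1/2}.
$$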
And if you apply the same method to the classical Fokker-Planck equation, then you recover exactly the classical picture: in the case of the classical Fisher information, you get the derivative with respect to time of the relative entropy exactly equal to the relative Fisher information, and then, using the decay of the relative Fisher information, you obtain the logarithmic Sobolev inequality. And so, finally, let us go back to the normalized sum, the sum divided by $n^{1/\lambda}$, and suppose that initially you have bounded fractional Fisher information; then you can pass to the decay of the relative entropy, once you know that the classical Fisher information of $T_n$ is uniformly bounded, since for the fractional part we already know the decay: $I_\lambda(T_n)$ is bounded by $n^{-(2-\lambda)/\lambda}$ times the initial value. How to prove that $I(T_n)$ is uniformly bounded requires a further investigation, but it can be done. Suppose you take a probability density function that belongs to the domain of normal attraction of a Lévy symmetric stable random variable, suppose that $\lambda$ is between one and two, and assume that the Fourier transform of $f$ satisfies a condition of a certain type, which is not too heavy, by the way. Then you know that, for all $n$ starting from a certain bound, the density $f_n$ is in $H^k$; and in addition a condition of this type propagates, so that $f_n$ stays in $H^k$. It is a technical condition, in any case, depending on the Fourier transform of the initial value. And clearly what you can do is to plug the Linnik density, in its Fourier form, inside, and to verify that, for example, all the conditions of this theorem are verified for Linnik distributions. And then you conclude that the classical Fisher information of $T_n$ is uniformly bounded, and then you prove this theorem: if you are in a subset of the domain of attraction which has bounded fractional Fisher information, then the relative entropy with respect to the Lévy density is less than or equal to some constant which depends on $X$ (the initial random variable) times one over $n$ to some power, times the initial fractional Fisher information. And this more or less closes the picture, since then you can pass from here: once you know that you have convergence in $L^1$, and you know that your initial density is in some $H^k$, then you can interpolate and prove results, knowing also the rate, at least in the homogeneous Sobolev spaces $\dot H^s(\mathbb{R})$. Now, I do not know if the constant is the best one; in any case, two things. This is new, at least the way in which you are looking at the central limit theorem for stable laws; and it is new that you can prove convergence in a lot of spaces, since almost all the results on stable densities come from the Russian school, where the problem was treated in the sense of distributions and then set aside: you cannot find more there, and there is nothing about strong convergence. So, I was running out of time and I will end here, but in any case, hopefully someone will add something to this. The idea is that you can pass from the classical central limit theorem, which uses the Fisher information, to the fractional Fisher information, which comes out using the same idea which has been used for the classical situation. In the classical situation you use a linear score function, which is $f'(x)/f(x)$, and this identifies the Gaussian random variable as the unique random variable for which the score is linear; Lévy symmetric stable laws are identified as the unique random variables for which the new, fractional score is linear.
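The final statement, as summarized in the talk (with the constant left implicit; the hypotheses are the technical Fourier conditions just mentioned, which the Linnik densities satisfy):

$$
H\big(T_n \mid Z_\lambda\big) \;\le\; C(X)\, \frac{I_\lambda(X \mid Z_\lambda)^{1/2}}{n^{(2-\lambda)/(2\lambda)}},
\qquad 1 < \lambda < 2.
$$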
So this is the concrete outcome: a fractional Fisher information that can then be used to bound the relative (to the stable law) Shannon entropy, through an inequality which is similar to the classical logarithmic Sobolev inequality. And so you can use all the machinery which is available once you have something like a logarithmic Sobolev inequality, to obtain explicit rates of convergence: convergence in $L^1$ at an explicit rate and, for smooth densities, convergence in homogeneous Sobolev spaces, still with a rate. This is my state of the art. And these are the papers. The first one contains some sort of short history of the entropy power inequality, with an improvement in the case of log-concave densities that I put inside; it was the starting point for studying this type of things. Then there is a paper which is about the fractional Fisher information; and then the paper in the Journal of Statistical Physics in which this sort of logarithmic Sobolev inequality has been derived, explaining why the fractional Fisher information is a good instrument to prove it. And then I tried to understand the role of the score function also in the case in which you have a nonlinear diffusion, and this you can find here. This is the story. If you remember at least two or three of the definitions that I gave, I will be happy. Thank you.