So, the last talk of this afternoon session will be given by Giuseppe Mingione, who has been working for years on non-linear elliptic and parabolic equations and Calderón-Zygmund theory, and he will explain here the relation of these tools with fractional operators.

OK, thanks a lot, Juan Luis. It's a pleasure to be here, and I'd like to thank the organizers for inviting me. As you may notice, the title has slightly changed, because there is now something added in parentheses, namely "and vice versa". What I'm planning to do in this talk is to describe a two-way interaction. The usual interaction we have seen over the last years, during which there has been something of an explosion of fractional problems, consists in taking non-linear and even linear methods from local problems; we have very often seen their translation and far-reaching extension to the non-local setting, and sometimes things can be quite different, as we are going to see today. But I would also like to cover a range of genuinely non-local methods that can be applied to local, classical problems: classical, non-linear, local problems. So what I'm planning to do is to show a double interplay: to see how non-linear classical methods can be applied to the fractional setting, but also how, let's say, some fractional methods can be applied to non-linear local problems.
OK. As far as fractional results are concerned, I'll be presenting some results with Tuomo Kuusi and Yannick Sire. These are essentially in the first part, where we extend non-linear methods and non-linear ideas to the fractional setting in order to solve some fractional problems, while I will also present some other results where fractional, that is non-local, methods can be applied to local problems.

As far as the first part is concerned, let me start from non-linear potential theory. What is non-linear potential theory about? It is a far-reaching extension of classical linear potential theory. Classical linear potential theory deals with solutions of the problem −Δu = μ in R^n, n ≥ 2, and it also, in some sense, incorporates classical Calderón-Zygmund theory. The idea is that we want to study, as much as possible, the regularity properties of solutions and of their derivatives in terms of the regularity of the assigned datum μ. The classical way to do this goes through representation formulas. Representation formulas tell you that you can represent the solution via convolution with the classical Green's function, which behaves like 1/|x − y|^{n−2} if n > 2, and like −log |x − y| if n = 2, up to a universal constant. Once you have this, then essentially everything is known concerning u and its derivatives: as far as u is concerned, you get a pointwise estimate via the Riesz potential I_2, and for the gradient Du an estimate via the Riesz potential I_1, always up to a constant.

Now, since the behavior of these operators is known in essentially all reasonable function spaces, you can reconstruct via these inequalities, for instance, all the integrability properties of solutions in any reasonable function space: rearrangement invariant function spaces, Lorentz spaces, whatever you like. This is the classical picture. But the classical picture rests on the representation formula, which is very much linked to the fact that you have this specific equation, a linear equation. If you want to push it further, up to second derivatives, then this becomes classical Calderón-Zygmund theory, where Young-type inequalities are not sufficient anymore and you have to go to cancellations and singular integrals. Anyway, as I was saying, this is the linear world: you have a fundamental solution, and this motivated, for instance in the 60s and 70s, a great deal of studies related to fundamental solutions, microlocal analysis, and so on.

What is non-linear potential theory about? The idea is that you want to extend, as much as possible, the validity of this approach to the non-linear setting, without having fundamental solutions. In the non-linear setting we consider quasi-linear equations, where the datum μ is, in the most general case, a measure. Let's say that we are just talking about a priori estimates, because defining solutions for measure data is a bit problematic: you have to go through approximation methods. The idea is that we treat μ as a measure, but you may imagine it is a function; we then get suitable a priori estimates depending only on the total variation of the measure, which for simplicity we assume to be finite. OK, so the first goal is to extend the two Riesz potential estimates above to the non-linear setting.
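Schematically, and with all constants suppressed, the representation formulas and potential bounds just described can be written as follows (my shorthand for what the speaker has on the board, not a verbatim slide):

```latex
% Model problem: -\Delta u = \mu in \mathbb{R}^n.
% Green's function, up to a universal constant:
G(x,y) \approx |x-y|^{2-n} \quad (n>2), \qquad
G(x,y) \approx -\log|x-y| \quad (n=2)
% Riesz potential of order \beta \in (0,n):
I_\beta(\mu)(x) := \int_{\mathbb{R}^n} \frac{d\mu(y)}{|x-y|^{n-\beta}}
% Pointwise bounds following from u = G * \mu:
|u(x)| \lesssim I_2(|\mu|)(x), \qquad |Du(x)| \lesssim I_1(|\mu|)(x)
```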
The non-linear setting means that we first consider non-linear Poisson-type equations with linear growth, −div(A(x)Du) = μ. Ellipticity is prescribed by a control from below on the eigenvalues, ⟨A(x)ξ, ξ⟩ ≥ λ|ξ|² for every ξ in R^n, and furthermore you prescribe a control |A(x)| ≤ Λ on the largest eigenvalue; this means that the ratio between the highest and the lowest eigenvalue of the matrix A(x) is bounded, and this is ellipticity. Furthermore, you want to go on and consider possibly degenerate operators of Poisson type, where the model case is the p-Laplacian equation −div(|Du|^{p−2}Du) = μ. This operator is still uniformly elliptic, because the ratio between the highest and the lowest eigenvalue is uniformly bounded, but this time it can be degenerate. More generally, we shall consider equations that are modeled on this case. This means that, in the growth conditions, the vector field is bounded by a constant times |z|^{p−1}, which certainly happens for this special operator, and the ellipticity is described with a factor λ|z|^{p−2}: you still have something elliptic, but now with a degeneration factor. Of course these assumptions reduce to the previous ones for p = 2, just as the p-Laplacian reduces to the classical Laplacian for p = 2.

OK, this is the general setting. In the first part of the talk I will give a summary of how, surprisingly, we can extend these two estimates to the general setting, which is quite non-trivial: there are no fundamental solutions, and fundamental solutions at first sight appear to be an absolutely crucial tool to get such bounds. Then we will see how this can be extended to the new fractional setting; this is the first of the papers I'm talking about here, written with Tuomo. After that, we will see how other classical non-linear tools can be extended to the fractional setting. Now I'm going to switch to slides.

OK, as I told you, the first goal is to extend these two estimates to the non-linear setting, where you can keep in mind the special operator, the p-Laplacian, but all the results actually hold for the most general class described above. First, let me rewrite the Riesz potential in a way which is suitable for localizing the estimates. There is no cheating here; this is a total triviality. I'm introducing what is called the truncated Riesz potential, which is just another way of writing the classical Riesz potential; you can obtain it in very many ways, for instance a decomposition in annuli would work. From now on, when we talk about Riesz potentials, we think of this truncated, or localized, Riesz potential.

Now, the point is that if you look at the p-Laplacian-type operator, you immediately see that when p is different from 2 these estimates just cannot hold, because they do not respect the scaling of the equation. If you get an estimate that does not scale as the equation does, then the estimate is wrong: you do a scaling argument and you find that all solutions are trivial, so the estimate cannot be true. So when p is different from 2, the standard orthodoxy in non-linear potential theory is to use Wolff-type potentials. The Wolff potential is nothing but the classical truncated Riesz potential modified so as to incorporate the scaling deficit of the equation: the left-hand side scales like p − 1 while the datum scales like 1, so you introduce the corresponding exponent 1/(p − 1) and create this new potential, which for p = 2, not surprisingly, reduces to the classical Riesz potential.
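In formulas, the truncated Riesz potential and the Wolff potential just mentioned are usually written as follows (a standard sketch in my notation, constants omitted):

```latex
% Truncated (localized) Riesz potential:
I^\mu_\beta(x,R) := \int_0^R \frac{|\mu|(B_r(x))}{r^{\,n-\beta}}\,\frac{dr}{r},
\qquad I^\mu_\beta(x,\infty) \approx I_\beta(|\mu|)(x)
% Wolff potential, incorporating the scaling deficit 1/(p-1):
W^\mu_{\beta,p}(x,R) :=
\int_0^R \left(\frac{|\mu|(B_r(x))}{r^{\,n-\beta p}}\right)^{\frac{1}{p-1}}
\frac{dr}{r}
% For p = 2 it reduces to a truncated Riesz potential:
W^\mu_{\beta,2}(x,R) = I^\mu_{2\beta}(x,R)
```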
The orthodoxy in non-linear potential theory is therefore that whenever you have a degenerate operator and p is different from 2, you go to the Wolff potential and try to adapt the results. The first very nice result is due to Kilpeläinen and Malý: they proved that for this operator, and actually for any operator satisfying the assumptions above, and even more general ones, you can estimate u pointwise by the Wolff potential W^μ_{1,p} plus an averaged term. The real point is this second object, the localization term, which must be there: when μ = 0 the potential vanishes and something has to survive. If you let R go to 0, the potential goes to 0, being an integral, and you simply recover u ≤ u; otherwise, if you want to work on the whole R^n, you let R go to infinity, this localization term disappears, and you are left with the Wolff potential alone. This is done by a clever variant of De Giorgi-type truncation methods in the setting of measure data problems. OK, this is essentially the analog of the first Riesz potential estimate.

After this, it remained an open problem to get the analog of the second estimate, the gradient estimate, which is actually much more delicate, for one reason that you can immediately see. How do you get gradient estimates for an equation like this? You differentiate the equation, and this means you have to handle quantities involving second derivatives. Now consider just the case p = 2, the Poisson equation with right-hand side μ: second derivatives do not exist in general, because of the classical failure of Calderón-Zygmund theory in the limiting case, already, for instance, when the datum is merely in L¹. Therefore the classical techniques cannot be applied: you do not even have a starting point, because you cannot differentiate the equation. This problem remained open for about 20 years, and the first result appeared in a paper of mine. Essentially it tells you that there is no difference between the linear case and the non-linear case. The real point here is the jump from the linear case, where you can use fundamental solutions, to the non-linear case, where fundamental solutions are not available; or at least, you can always define them formally by solving the equation when the right-hand side is a Dirac mass, but since you cannot represent general solutions via convolution, you cannot do anything else with that. I'll try to come back to this later on. Essentially, when you are on the whole R^n and the gradient has a suitable decay at infinity, you get the usual estimate, and this is a quite satisfying result, because it tells you there is no difference between the Laplacian and any other non-linear operator of this type.

What about p different from 2? In that case, following the standard orthodoxy, you look for a Wolff potential estimate. Wolff potentials are good because they can anyway be controlled, via a pointwise inequality, by iterations of Riesz potentials; this is actually a pointwise equivalence, at least for p > 2 − 1/n. The right-hand side object is called the Havin-Maz'ya potential, and the bound is proved in a classical paper of Havin and Maz'ya from '71. This tells you that whenever you control things by Wolff potentials, your job is done, because these are in turn controlled by Riesz potentials, whose behavior is well known. So let me go back to the case p different from 2: the first result was achieved by Frank Duzaar and myself, and it tells you exactly what you would expect following the standard orthodoxy. You can bound the gradient pointwise by the Wolff potential W^μ_{1/p,p}, and this is a quite satisfying estimate, because it allows you to recover by Wolff potentials all the classical estimates that you would get via fundamental solutions.
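As a quick numerical sanity check (mine, not from the talk), one can verify that the Wolff potential W^δ_{1,p} of a unit Dirac mass reproduces the decay |x|^{(p−n)/(p−1)} of the non-linear fundamental solution of the p-Laplacian; the function name and parameters below are of course my own illustration:

```python
import numpy as np

def wolff_dirac(d, R, n, p, num=4000):
    """Wolff potential W^delta_{1,p}(x, R) of a unit Dirac mass at the
    origin, evaluated at distance d = |x| from it.  Since delta(B_r(x)) = 1
    exactly when r >= d, the potential equals int_d^R r^{(p-n)/(p-1)} dr/r,
    computed here by trapezoidal quadrature on a logarithmic grid."""
    r = np.geomspace(d, R, num)
    f = r ** ((p - n) / (p - 1)) / r
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

n, p, R = 5, 3, 1e6
# Here (p - n)/(p - 1) = -1, so for d << R the potential behaves like 1/d,
# matching the fundamental-solution decay |x|^{(p-n)/(p-1)}.
w1, w2 = wolff_dirac(1.0, R, n, p), wolff_dirac(2.0, R, n, p)
print(w1, w2)  # w1 close to 1.0, w2 close to 0.5
```

Doubling the distance halves the potential, exactly as the fundamental solution predicts for these exponents.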
As you can see, for p = 2 this Wolff potential reduces to I_1, so you get back the previous linear estimate. But now let me make a twist, because when you look for an estimate you do not want to follow the standard orthodoxy, that is, what society is telling you; you want to follow the equation. It is the equation that tells you what you are talking about. What are you doing when you are doing Calderón-Zygmund theory? For the Laplacian, you know that when you have the equation −Δu = μ, and μ belongs, for instance, to L^q for q > 1, then the second derivatives are in L^q, as long as q is not equal to infinity. This means you are making a trade between the divergence operator and one derivative: D²u belongs to L^q exactly as μ does.

Now let's believe in the power of this reasoning and discuss what happens when you deal with the p-Laplacian equation. There you have div(|Du|^{p−2}Du) = μ, so let me treat the argument of the divergence, the flux, as if it were itself a derivative. The equation is not elliptic with respect to the flux, so you cannot get full estimates this way, but you can argue formally: apply I_1 to both sides. Since I_1 is an integration of order one, it should cancel the divergence, and this formally gives you that the flux, hence |Du|^{p−1}, should be bounded by the Riesz potential I_1 of the datum; you are just applying I_1 to both sides of the equation. So the equation is telling you: I am a non-linear equation in the gradient, but I am a linear equation in the flux. Therefore, also when p is different from 2, you should be able to bound the gradient pointwise via Riesz potentials rather than Wolff potentials. At first sight this appears to be a crazy heuristic argument, but it is actually true, because what the equation is telling you is exactly this, and it was proved by Tuomo Kuusi and myself a few years ago. The result actually holds for general operators, more general than the p-Laplacian, for instance for elliptic equations in divergence form with coefficients; this is non-trivial work of Baroni. This result allows you, in a certain sense, to linearize the whole theory of the p-Laplacian, because now whatever you want to prove about the gradient can be proved exactly as if you were dealing with the Laplacian, and this holds for general operators. In other words, this general result incorporates all the regularity results given by the previous theories: for instance, the theory of Boccardo and Gallouët for measure data problems is implied by it, the classical gradient estimates of DiBenedetto and Manfredi are implied by it, and even the classical estimates of Iwaniec can be recovered from it. At the moment this holds for p larger than 2 − 1/n, and there could be improvements there, but let me concentrate just on the case p ≥ 2. Essentially, when you are on the whole R^n you have the classical estimate, and it now holds for any general possible solution to this equation.

OK, now let me go to non-local non-linear potential theory, and let me present a few extensions of all this. Let me start, step by step, from the classical fractional Laplacian, defined in the distributional sense.
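The heuristic just described, written out schematically (constants and technicalities suppressed; the barred integral denotes an average):

```latex
% The p-Laplacian equation is linear in the flux:
-\mathrm{div}\,\big(|Du|^{p-2}Du\big) = \mu
% Formally applying I_1 to both sides, using that an integration of
% order one "cancels" the divergence, suggests the pointwise bound
|Du(x)|^{p-1} \lesssim I_1(|\mu|)(x)
% and this is indeed a theorem (Kuusi-Mingione, p \ge 2, localized form):
|Du(x)|^{p-1} \le c\, I^{|\mu|}_1(x,R)
  + c \left( \fint_{B_R(x)} |Du|\,dy \right)^{p-1}
```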
In the distributional sense, it is defined via the kernel 1/|x − y|^{n+2α}. So this is the analog of the Laplace equation, and it is the nice case that can be handled by the Fourier transform, because you have a kernel whose Fourier transform is perfectly known, and then you can do whatever you like with it. Then we switch to a general kernel: you still have something which is linear with respect to the solution, but you lose the perfect shape of the kernel and replace it by two bounds that replicate its growth. This is the analog of a linear equation with measurable coefficients, and essentially the first thing you would like to do here is, for instance, to extend the classical De Giorgi-Nash-Moser theory to such equations. Then we go to the quasi-linear case: the kernel is measurable as before, but the operator now incorporates a non-linearity in the u-variable, and this non-linearity is qualified by monotonicity, which is essentially the ellipticity of φ; these are the analogs of the quasi-linear local equations. And then we go even further, to the p-growth range: we prescribe ellipticity at the p-level and obtain the non-local analog of the classical p-Laplacian operator. So we have produced fractional versions of the Laplacian, of linear equations with measurable coefficients, of non-linear equations with measurable coefficients, and of degenerate equations with measurable coefficients; these are the three cases beyond the model one. The prototype is the case in which the non-linearity is fully explicit, and the operator on the left-hand side is known as the fractional p-Laplacian. The fractional p-Laplacian essentially emerges when you minimize the Gagliardo-type norm associated with this kernel, so it is a natural object: you are minimizing the p-norm.

Now, remember that the first estimate, the one of Kilpeläinen and Malý, tells you that you can locally bound u by W^μ_{1,p}(x,R) plus the unavoidable localization term. We want to reproduce an estimate of this type for the fractional p-Laplacian, and in particular we consider Dirichlet-type problems where, due to the non-locality of the operator and its long-range interactions, the data must be prescribed on the whole complement; indeed the boundary itself can have zero capacity in several cases, when the parameters are too small. The natural function space is the one in which the object called the tail of the function is finite. This is essentially the right space, because the tail is nothing fancy: it is the explicit quantity that emerges whenever you try to derive energy estimates for this equation; you always have this quantity popping up everywhere. So you prescribe that this quantity is finite, otherwise you cannot deal with anything, and this is the minimal condition under which you can handle your equation. In fact, there is a beautiful estimate by Di Castro, Kuusi and Palatucci telling you that a solution is locally bounded by the classical local term plus the tail, which takes into account the long-range interactions of the solution from all points. It is then obvious that, if there is a potential estimate, the potential estimate must incorporate the tail as well.

Let me make a brief sketch of the state of the regularity theory for this type of operator. For this non-linear operator there is a beautiful theory by Di Castro, Kuusi and Palatucci, which also extends previous work of Kassmann, where a rather complete theory is presented.
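For reference, the operator and the tail quantity under discussion have, up to normalizing constants, the following form (my shorthand):

```latex
% Fractional p-Laplacian (principal value), \alpha \in (0,1), p > 1:
(-\Delta)^\alpha_p\, u(x) = \mathrm{P.V.}\int_{\mathbb{R}^n}
  \frac{|u(x)-u(y)|^{p-2}\,(u(x)-u(y))}{|x-y|^{n+\alpha p}}\,dy
% It emerges when minimizing the Gagliardo-type seminorm
[u]^p := \int_{\mathbb{R}^n}\!\int_{\mathbb{R}^n}
  \frac{|u(x)-u(y)|^p}{|x-y|^{n+\alpha p}}\,dx\,dy
% Tail, accounting for long-range interactions outside B_R(x_0):
\mathrm{Tail}(u;x_0,R) := \left( R^{\alpha p}
  \int_{\mathbb{R}^n \setminus B_R(x_0)}
  \frac{|u(y)|^{p-1}}{|x_0-y|^{n+\alpha p}}\,dy \right)^{\frac{1}{p-1}}
```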
The sup estimate, the Hölder continuity, and even Harnack inequalities are proved for solutions to these problems. Then there is a more recent, beautiful paper by Cozzi, where he extends all this in a variational setting. The good thing about that paper is that everything is done for minimizers of possibly non-differentiable functionals, because the integrand can be, for instance, merely Hölder continuous. This means you want to get the regularity information not from the Euler-Lagrange equation, which at this stage might not exist, but directly from minimality. In other words, from minimality, as in the classical case, it is possible to derive a Caccioppoli, or energy-type, inequality, and from such inequalities you then get regularity. This is what Cozzi does, and it reproduces the classical Giaquinta-Giusti theory of regularity, where they prove, for instance, that any minimizer of a functional with p-growth, without assuming the existence of a Euler-Lagrange equation, is locally Hölder continuous, with Harnack inequalities as well. As far as the higher, gradient-level theory is concerned, this is very much open: differentiating something which is already fractional turns out to be difficult. There are a few results by Brasco and Lindgren, but the higher-order regularity is still an open issue.

You can almost completely forget this next slide, because I'm just giving you the definition of solutions to measure data problems. Essentially you have the fractional p-Laplacian-type operator on the left-hand side, you want to solve with a measure μ, and you do it by approximation; this is what you classically do in the local case. In the local case, when you have a measure data problem, the classical procedure is very simple. You can first define very weak, or distributional, solutions: in the classical case, a very weak solution of a measure data problem solves the usual identity in the weak form in Ω, and if you think of the p-Laplacian, to give sense to the distributional formulation you just require that |Du|^{p−1} is in L¹. So you are not in the natural energy space associated with the operator. These solutions are natural, but they are absolutely unmanageable: you cannot get energy estimates out of them, you cannot, for instance, test with anything proportional to u, and essentially no one is able to handle them. So what you do is to define special classes of solutions in the most natural way: you smooth out the measure, in one way or another, so that it becomes for instance L^∞, you consider the approximating solutions, and then you pass to the limit, provided you have good a priori estimates. And you do have good a priori estimates, because now you can test the approximating equations. The same thing is done in the non-local case as in the local case; you can forget the details and just think that we now have the good definition in our pocket.

OK, so the main result with Tuomo and Yannick is the following. You have exactly the potential estimate you are waiting for, incorporating exactly the terms you expect. You have the Wolff potential bound, where the exponent is now not n − p but n − αp, because it must respect the scaling of the equation. You see that if α is very small, then the operator has weaker regularization properties, and in fact the potential becomes worse: you get stronger singularities, and this reflects the weaker regularizing effect of the operator for small α.
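Schematically, the potential estimate being described takes the following form (constants suppressed; this is my shorthand for the statement, and the precise formulation in the paper may differ in details):

```latex
% Wolff potential adapted to the fractional scaling (note n - \alpha p):
W^\mu_{\alpha,p}(x,R) := \int_0^R
  \left(\frac{|\mu|(B_r(x))}{r^{\,n-\alpha p}}\right)^{\frac{1}{p-1}}
  \frac{dr}{r}
% Pointwise bound for approximation (SOLA-type) solutions of the
% fractional measure data problem:
|u(x)| \lesssim W^\mu_{\alpha,p}(x,R)
  + \fint_{B_R(x)} |u|\,dy + \mathrm{Tail}(u;x,R)
```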
When α gets larger, you get better and better. So you get the terms you would expect: the localization term plus the tail. When μ = 0 this gives you back the sup estimate from before. Moreover, this result also yields the classical criteria on the fine regularity of solutions: whenever your Wolff potential is finite at a point, you have a Lebesgue point there. This is classical in linear potential theory; indeed, one of the first historical goals of linear potential theory was to study the fine properties of solutions of Poisson-type equations via potentials. Once you know the potential is finite, you have a precise representative, and then you can estimate the capacity of the singular sets, the fine properties, and whatever. Essentially, this is the result, and it is the perfect analog of the local one; you see the Wolff potential W^μ_{α,p} is the natural one. As far as the gradient theory is concerned, this is still open. So this is a first adaptation to the non-local setting of a classical local non-linear result: it follows the lines of, and extends in the way you expect, something established in the non-linear local case.

By the way, you also get, for instance, the criterion that if μ belongs to a certain special Lorentz space, then the solution is continuous; this is the borderline optimal space you would expect, and so forth. This is implied by our results: for instance, if the Wolff potentials go to zero uniformly, then the solution is continuous, and this is essentially a by-product of the previous proof, which also tells you that you have a precise representative where the Wolff potential is finite. If the potential goes to zero uniformly, it means the measure is spread out enough to guarantee the absence of singularities, and finally continuity, and so forth; you can get estimates like this. The Lorentz spaces involved are interpolation spaces: they interpolate the Lebesgue spaces and are able to catch the final borderline results.

OK, this was a first type of extension: it replicates a local result. Now I would like to present a second fractional result that does not replicate a local result, where essentially a new phenomenon comes up: the non-local self-improving property. The non-local self-improving property has to do with the so-called Meyers estimates, which are the following. Already in the case p = 2 and with right-hand side equal to zero, what can you say about the integrability of the gradient for an equation like this? You start from an energy solution, under the assumptions from before with p = 2. Then the classical estimates of Elcrat and Meyers, and of Giaquinta and Modica, say the following: the L² integrability of the gradient bootstraps into something better than L², namely L^{2+δ} for some small δ depending only on the ellipticity constants of the equation. This is the so-called higher integrability, and by now it is classically proved via Gehring's lemma. Gehring's lemma tells you the following: assume you have a so-called reverse Hölder inequality; take the source term g to be zero for the moment. The classical Hölder inequality tells you that such an inequality is trivially true when q is larger than p; now assume instead that q is less than p, so that you reverse the principle of Hölder's inequality. Then the integrability of the function self-improves, giving you a higher exponent, and you get better and better as far as the source term on the right-hand side allows. The point is that for these equations you have what is classically called a Caccioppoli-type inequality, also called a reverse Poincaré inequality, or energy estimate.
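Gehring's lemma, in the form being described, can be stated schematically as follows (my shorthand; the barred integral denotes an average):

```latex
% Reverse Holder inequality with exponents q < p (g a source term),
% assumed on all balls B_r:
\left(\fint_{B_{r/2}} f^{\,p}\,dx\right)^{1/p}
  \le c \left(\fint_{B_r} f^{\,q}\,dx\right)^{1/q}
  + c \left(\fint_{B_r} g^{\,p}\,dx\right)^{1/p}
% Conclusion: the integrability of f self-improves,
f \in L^{p+\delta}_{\mathrm{loc}} \quad \text{for some } \delta > 0,
% as far as the integrability of g allows.
```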
Thanks to ellipticity, you can bound the gradient by the solution itself: you bound a higher-order object by a lower-order object. This is in accordance with the principle that, for instance, for harmonic functions, pointwise convergence implies uniform convergence of all derivatives, so you can control higher-order objects by lower-order ones. You can prove this under the present assumptions, and it implies the higher integrability of the gradient. Let's see how. The Caccioppoli inequality is a very simple inequality that you obtain by testing the equation with the simplest possible choice: you take a cutoff function, multiply by u, test, and you get it in two lines. So the gradient is estimated by the solution; now apply the Sobolev-Poincaré inequality to the right-hand side, and what do you get? A reverse Hölder inequality, because you have bounded the average of |Du|² by the average of |Du| raised to a lower power, with exponent 2n/(n + 2). Now you can conclude with the non-trivial part, Gehring's lemma, which gives you the self-improvement of the integrability of the gradient. So it is a very simple scheme: you have an elliptic equation, you get a Caccioppoli-type inequality, so you control Du by u; u is in turn controlled by a lower power of the gradient; and then you self-improve, concluding by Gehring's lemma.

This is the improvement in integrability. What can you say about the oscillations of the gradient? Actually nothing, and here is the counterexample. Take the simplest possible elliptic equation, in one dimension: (a u')' = 0 with a coefficient a which is merely measurable. A solution is obtained by taking u' = 1/a, so the gradient of the solution is 1/a, and a can be as bad as you like: there is no improvement whatsoever. End of story. So in the local case you get an improvement of the integrability, but no matter what you do, you cannot improve the oscillations, and this happens because the coefficients are bad.

OK, now we want to extend this self-improving result to the non-local case, and we go to the simplest setting: the linear one, but with measurable coefficients. You just have a lower and an upper bound on the kernel K, and this does not allow you to use the Fourier transform, because you do not know the specific shape of K. What is the definition of energy solution? You are in W^{α,2}; please allow me not to recall the definition of this space at this conference. The natural definition you want to start with, the analog of the W^{1,2} energy solution of the local case, is u ∈ W^{α,2}, and if you want the analog of what you get in the linear local case, you expect u to be in W^{α,2+δ} for some δ: this is the Meyers property in this setting, and let me recall that the relevant quantity is essentially the Gagliardo norm. This is indeed what you have: Bass and Ren proved that it is true, and the proof essentially relies on the classical scheme: you get a reverse Hölder inequality, you get that the relevant quantity is in L^{2+δ}, and this implies, by a characterization in the spirit of Stein and Strichartz, that u is in W^{α,2+δ}. But the surprise is that there is a new, unexpected phenomenon: what you can actually prove is that you have not only a self-improvement of integrability, but also a self-improvement of differentiability. This is one of the very few known instances of a purely non-local effect: it is simply not true in the local case, so this is not the replication of a local result; it is a new phenomenon. And the proof is very far from trivial: it involves essentially 30 pages of hard harmonic analysis. When I say hard harmonic analysis, I mean that you are not using the ready-made tools of harmonic analysis; you really have to do coverings and combinatorics and things like that.
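The one-dimensional counterexample above can even be checked numerically (my toy discretization, not from the talk): for (a u')' = 0 with a merely measurable coefficient a, a solution has flux a u' constant, while the gradient u' = 1/a is exactly as rough as the coefficient itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "merely measurable" uniformly elliptic coefficient: piecewise-constant
# random values in [1, 2] on a fine grid of [0, 1].
N = 1000
a = rng.uniform(1.0, 2.0, N)

# u(x) = int_0^x dt / a(t) solves (a u')' = 0 weakly, because the flux
# a * u' is constant.  On the grid, u' = 1/a cell by cell.
du = 1.0 / a
flux = a * du

# The flux is smooth (constant up to round-off)...
print(np.max(np.abs(flux - 1.0)) < 1e-12)  # True
# ...but the gradient u' = 1/a oscillates as wildly as a itself:
print(np.std(du))
```

This is the whole point of the counterexample: the equation regularizes the flux, not the gradient, so no improvement of the gradient's oscillations is possible under measurable coefficients.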
have to do things really by hand: you use ideas from harmonic analysis, but not its tools, and there is no analogue of this in the local case. Later on, Schikorra was able to extend this, showing that if you assume a bit less, then you can go a bit further; there is a difference between the results. You can also get his result by adapting some of our methods. Our methods start from an energy inequality, so these methods require solutions; in fact we do everything by hand. Schikorra, in these very nice papers, uses delicate properties of commutators in fractional spaces, and all of this extends to the p-case. OK, let me now present our approach: it is a fractional approach to Gehring's lemma. The fractional approach has one advantage: if, for instance, you want to consider minimizers of functionals for which you do not have an Euler-Lagrange equation, then you can use this approach, while you cannot use the approach via testing, because you do not have the equation; I think, for instance, that this could fall within the realm of this paper of Cozzi. OK, just two slides, because the proof is long. What is the starting point? As I told you, in the local case the starting point is the Caccioppoli-type inequality; from it you get the reverse Hölder inequality, and then you conclude via Gehring's lemma. Our first step is to prove an energy-type inequality: you see, on the left-hand side you have the α-derivative; on the right-hand side you have, dimensionally, the same thing as before, scaling proportionally; and then you have a tail-type term, which takes into account the fact that the problem lives on the whole of R^n and involves long-range interactions between points and between values of the function. Then we prove that if you have this inequality, you have the conclusion: just this fractional Caccioppoli-type inequality implies the higher differentiability. And it is actually sufficient to prove higher differentiability, because then, by embedding, you know that
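A schematic form of the fractional Caccioppoli-type inequality with tail that this step refers to; the precise normalization of the tail term is my guess, the structure (local energy plus long-range tail) is as described in the talk:

```latex
% Fractional Caccioppoli-type inequality with tail, B_R = B_R(x_0):
[u]_{W^{\alpha,2}(B_{R/2})}^2
 \;\le\; \frac{c}{R^{2\alpha}} \int_{B_R} |u-(u)_{B_R}|^2 \,dx
 \;+\; c\,R^{n} \int_{\mathbb{R}^n \setminus B_R}
        \frac{|u(y)-(u)_{B_R}|^2}{|y-x_0|^{\,n+2\alpha}}\,dy .
% The last term is the "tail": it records the long-range interactions,
% since the equation lives on all of R^n. The theorem asserted in the
% talk: this inequality ALONE already implies higher differentiability
% (and hence, by embedding, intermediate differentiability and higher
% integrability).
```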
from higher differentiability you also get intermediate differentiability and integrability. OK, so what does it mean that u is in W^{1,2}? Being in W^{1,2} means, trivially, that |Du|^2 is integrable with respect to a finite measure. So where is this additional differentiability coming from? What does it mean that u belongs to W^{α,2}? It means that this object, which dimensionally speaking is analogous to the gradient, though of course there is no local object here, is integrable against a non-finite measure. So this should tell you that this object is, in a sense, better than it looks: this is actually the source of the extra differentiability, because this object must be smaller than you would believe, since it must be integrable against something which is not finitely integrable. How far is this kernel from L^1? By a Marcinkiewicz-type factor, a log factor. So if this is finite against something that blows up like a log, it should itself be better by a log; and whenever you gain a log in higher integrability, you can improve and bootstrap it to a power. This was the original proof of Gehring's lemma: you first get a log-integrability, then you bootstrap to a power. So this is the first hint that something should be there. The idea is then the following: we make this measure finite, and making this measure finite means that we put n−ε in the exponent and compensate here. Now this function is finite against this measure, so we lift the problem to R^n × R^n and say that this function is integrable against this measure; the measure is more regular, and the function is worse, because you trade the blow-up of the measure into the denominator of the function, and trading it into the denominator means improving the differentiability. So we translate the Caccioppoli-type inequality into a reverse Hölder inequality for the lifted function U, and we prove a version of Gehring's lemma for dual pairs; for
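The trade just described can be made explicit; the ε/2-shift in the definition of U is my reconstruction of "we do n−ε and we do something here", so exact normalizations may differ from the paper:

```latex
% Lift to R^n x R^n via the dual pair (U, \mu):
U(x,y) = \frac{|u(x)-u(y)|}{|x-y|^{\,\alpha+\varepsilon/2}},
\qquad
d\mu(x,y) = \frac{dx\,dy}{|x-y|^{\,n-\varepsilon}}
\quad(\text{a locally finite measure}).
% By construction, U^2\,d\mu is exactly the Gagliardo integrand:
\iint U^2 \, d\mu \;=\; [u]_{W^{\alpha,2}}^2 .
% If a Gehring-type self-improvement gives U \in L^{2+\delta}(d\mu), then
\iint \frac{|u(x)-u(y)|^{2+\delta}}{|x-y|^{\,n+(2+\delta)s}}\,dx\,dy < \infty ,
\qquad
s = \alpha + \frac{\varepsilon\,\delta}{2(2+\delta)} \;>\; \alpha ,
% i.e. u \in W^{s,2+\delta}: higher integrability of the lifted function
% U translates back into higher differentiability of u.
```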
these new dual pairs, as we call them, the higher integrability of U then translates back into higher differentiability of u. Let's see why: assume U is in L^2 plus a log; what does this mean? It means this, and this, and this: this is the way to exploit the lifting. In fact, what we prove is the following: if a dual pair satisfies this reverse Hölder inequality, then the inequality itself self-improves. Now, at first sight this might appear to be one of the many generalizations of Gehring's lemma, but in fact it is not. Why? Because the balls B̃, which are products of balls, sit around the diagonal of R^n × R^n, while Gehring's lemma would require information on every possible ball; here you only have balls around the diagonal, with a complete loss of information outside. So there is a very delicate trade-off with the distance from the diagonal, between where you lose information and where you gain it, and you have to quantify that distance: if the distance is large enough, you are fine, because the kernel is not bad there; if you are close to the diagonal, you have information. This makes the combinatorics very, very delicate. OK, and now part 4, briefly, very briefly: I'll now go in the other direction of the talk, namely how fractional methods, and this is probably less known to the audience, can be used to get local results. First example: limiting Calderón-Zygmund theory. As I told you, suppose you have a measure data equation; this is now a local fact. For instance, take this model equation. The classical theory tells you that the gradient lies in this space, and this is sharp: it is already sharp in the case p = 2, and otherwise it can be seen to be sharp by looking at the classical non-linear Green's function, which is this one. OK, now there is a
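The classical model facts the speaker points at ("this space", "this one") are presumably the following; this is my reconstruction from the standard theory, not the slides:

```latex
% Model measure-data problem:
-\operatorname{div}\bigl(|Du|^{p-2}Du\bigr) = \mu \quad \text{in } \Omega,
\qquad \mu \ \text{a (signed) measure}, \quad 2 \le p \le n .
% Sharp gradient integrability, in the Marcinkiewicz (weak-L^q) scale:
Du \in \mathcal{M}^{\frac{n(p-1)}{n-1}}_{loc}(\Omega) .
% Sharpness via the nonlinear Green's function (p < n, up to constants):
G_p(x) \approx |x|^{\frac{p-n}{p-1}},
\qquad
|DG_p(x)| \approx |x|^{\frac{1-n}{p-1}} ,
% and |DG_p|^{n(p-1)/(n-1)} \approx |x|^{-n} just fails to be integrable,
% while |DG_p| does lie in the Marcinkiewicz space above.
```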
gap, because you cannot conclude that the gradient is in W^{1,1}: this is, once again, the failure of Calderón-Zygmund theory in the limiting case, so there is a lack of theory there. Why? Because a second-order equation is something that formally prescribes the values of second derivatives, but here you only have information on first derivatives, so you would like to lift it up. This was done in the case p = 2 by me several years ago, where I proved that although the gradient cannot be in W^{1,1}, it does belong to every fractional Sobolev space below W^{1,1}, so you almost lose nothing. This uses a special technique that provides a sort of local analogue of the Littlewood-Paley decomposition. So this is for the gradient. What happens when p is different from 2? Analyzing the behavior of the fundamental solution leads to the idea that the gradient must be in this space, and in fact this is also true. It is sharp, because this space embeds into a Sobolev space, and if you took ε equal to 0 it would embed into a Lebesgue space to which the fundamental solution does not belong. Essentially, that paper lifted the whole theory up to the differentiability level, and this is the first instance of limiting Calderón-Zygmund theory. But, as I explained before, the whole essence of Calderón-Zygmund theory is that when you have a regularizing operator you can make trades; in this case, in the more general setting, provided you assume regularity of the vector field considered, you can trade the divergence for the gradient. This formally tells you that the derivatives of the whole object should belong to a good Sobolev space, as in the case of the Laplacian, where you can trade the divergence for the derivatives. And this is what has been done in a recent paper by Avelin, Kuusi, and myself: we essentially trade the divergence for the
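The differentiability lifting just described, in symbols; the exponents are as I recall them from this literature and should be treated as my reconstruction rather than the slides' exact statements:

```latex
% p = 2, measure data: Du \notin W^{1,1} in general, but
Du \in W^{1-\varepsilon,\,1}_{loc}
\qquad \text{for every } \varepsilon \in (0,1).
% General p (Avelin--Kuusi--Mingione): trade the divergence for
% fractional derivatives of the whole vector field A(Du) = |Du|^{p-2}Du:
A(Du) \in W^{1-\varepsilon,\,1}_{loc}
\qquad \text{for every } \varepsilon \in (0,1),
% i.e. the object whose divergence is prescribed by the equation is
% the one carrying (almost) a full derivative, as for the Laplacian.
```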
derivatives, getting the full limiting case of Calderón-Zygmund theory in the non-linear setting. This is a delicate result that couples with the previous one: the potential estimate states that you can essentially control A(Du) with I_1 of μ plus the average of A(Du), so everything is intrinsic. In the previous case you apply I_1 to both sides and you get this estimate; in this case you get the Calderón-Zygmund analogue. So this is the potential-estimate analogue, while this is the Calderón-Zygmund analogue. OK, and this comes along with a suitable Caccioppoli-type inequality, which is explicit; observe that in the case μ = 0 you actually get the derivatives of A(Du), and you can estimate them. OK, and let me give the final example, which is probably surprising, because we actually know from the initial papers of Caffarelli-Vasseur and of Kassmann that a fractional De Giorgi technique has been used to prove regularity for solutions to fractional operators. But there is another paper where I introduced a similar fractional De Giorgi technique to prove regularity for solutions to local operators. What is the difference? The difference is the following. I'll give the first proof of the gradient potential estimate for p = 2, the proof of this estimate, and this shows the interplay I was talking about at the beginning of the talk: the interplay between fractional and local, not only between results but also between methods. OK, so this was the first proof of this result, and it goes as follows. How do you get a gradient estimate when μ = 0? How do you usually get the standard gradient estimate for a solution, which is the following: the sup of the gradient is bounded by its average? This is classical for harmonic functions. You take a solution, you differentiate the equation, as I was saying before,
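The classical route being recalled here can be sketched as follows; the model equation and the form of the linearization are my reconstruction of the standard De Giorgi scheme:

```latex
% Classical gradient bound when \mu = 0, model: div\,a(Du) = 0:
\sup_{B_{R/2}} |Du| \;\le\; c \fint_{B_R} |Du| \,dx .
% Differentiating the equation, each component v = D_s u solves a
% linearized equation with merely measurable coefficients,
\operatorname{div}\bigl( \partial a(Du)\, Dv \bigr) = 0 ,
% which is exactly the setting of De Giorgi's theory: a Caccioppoli
% inequality on level sets bounds second derivatives by first ones,
% Sobolev embedding gives a gain, and a geometric iteration closes
% the sup bound for the gradient.
```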
so the gradient becomes the solution to a linearized equation with measurable coefficients, and this is exactly the core of De Giorgi's theory; then you get that Du is bounded. How do you get the bound for Du in turn? By writing a Caccioppoli inequality on level sets for the second derivatives: since it is now the gradient that solves an equation, second derivatives can be bounded by first derivatives, and then you use the Sobolev embedding theorem to set up a non-linear iteration eventually leading to the bound for the gradient. So this is the classical approach. Now, in classical fractional problems, for instance if you follow the papers I mentioned before, you cannot get the higher differentiability; you cannot get such a Caccioppoli inequality with local derivatives. [Chair: two, three minutes.] It's almost done. OK, now the point is the following. When you deal with fractional problems you have no local derivatives, so you cannot even write them; here you do have derivatives, but once again you cannot write them, not because the problem is non-local, but because the problem involves a measure. If the problem involves a measure, then this program fails from the very beginning, because if μ is a measure, or even if μ is only in L^1, then the second derivatives are not in L^1. So how do you replace this? By recalling the Gagliardo norm. What you do is the following: you recover the theorem I was talking about before by proving a fractional Caccioppoli-type inequality. The idea is that, although the problem is local, there are no second derivatives, because the right-hand side is bad, but there are fractional derivatives. So you write a fractional Caccioppoli-type inequality where you bound not the second derivatives of the solution, but a fractional derivative of the solution, plus a remaining part which is due to the non-trivial right-hand side. This is essentially the same approach
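A dimensionally consistent guess at the shape of this fractional Caccioppoli-type inequality; the truncation at level k, the Gagliardo seminorm of order σ on the gradient, and the scaling of the measure term are my reconstruction, not the slides' exact formula:

```latex
% Fractional Caccioppoli-type inequality on level sets (schematic):
\bigl[\, (|Du|-k)_+ \,\bigr]_{W^{\sigma,1}(B_{R/2})}
 \;\le\; \frac{c}{R^{\sigma}} \int_{B_R} (|Du|-k)_+ \,dx
 \;+\; c\, R^{1-\sigma}\, |\mu|(B_R) ,
% where the seminorm is
% [v]_{W^{\sigma,1}(B)} = \iint_{B\times B} \frac{|v(x)-v(y)|}{|x-y|^{\,n+\sigma}}\,dx\,dy .
% Only a fractional derivative of order \sigma < 1 of the truncated
% gradient is bounded; the measure term replaces what, for nice data,
% would have been an estimate on second derivatives.
```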
that comes from the papers by Caffarelli-Vasseur and Kassmann that I mentioned before, and it was also done independently here. The hard part comes later, because you have the tail terms to control; and while there the fractional Caccioppoli-type inequality comes for free, in one line, just by testing, here the very delicate point is deriving a non-local, fractional Caccioppoli-type inequality, because the problem is local and forces things to be local, while you want to go below the natural order of differentiation; this is essentially where the Littlewood-Paley-decomposition-type method I mentioned before comes in. Anyway, you can prove this for some σ, and you do not care how large σ is, because the iterations in De Giorgi-Nash-Moser theory are geometric: whenever σ is larger than zero, you eventually converge. So this is the first result I proved in this paper, which was actually written in 2007. And then the second result: compare the classical Caccioppoli-type inequality with the fractional one. You get fractional derivatives instead of full derivatives, and L^1 instead of L^2, because solutions are not in L^2 when you deal with measure data problems. So the first step is that any solution satisfies this; the final step is that any function satisfying this fractional Caccioppoli-type inequality satisfies this bound, according to the orthodoxy that everything must come from standard energy estimates. If you combine these two steps, you are done. And I think this is a good point to stop; thanks a lot for your attention.
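The remark that "the iterations are geometric, so any σ > 0 suffices" refers to the standard fast-convergence lemma behind De Giorgi's method; a commonly used form (my choice of notation, not from the slides):

```latex
% Fast geometric convergence: suppose a_{j+1} \le C\,B^{j}\,a_j^{1+\beta}
% with C, \beta > 0 and B > 1. If the starting value is small enough,
a_0 \;\le\; C^{-1/\beta}\, B^{-1/\beta^2}
\qquad \Longrightarrow \qquad a_j \to 0 .
% (Proof: by induction, a_j \le a_0 B^{-j/\beta}.)
% In the scheme above the a_j are level-set quantities on shrinking
% balls at increasing truncation levels; the gain 1+\beta comes from
% Sobolev embedding, so ANY fractional differentiability \sigma > 0
% yields some \beta > 0 and the iteration closes.
```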