So, hi, welcome everyone to the analysis seminar. Today it's a pleasure for me to introduce my colleague and friend, Eugenia, from the University of Buenos Aires, who is going to talk about a priori estimates for elliptic equations. — Thank you, Andrea; thank you, Manuel. I'm very comfortable working here. I'm going to talk about something that I started to study when I began my PhD about ten years ago, and about a related problem concerning these estimates. So, I will tell you about the results from the PhD and the new results — results obtained with Ricardo, who was my PhD supervisor; we still work together — and the new results are a generalization of the earlier ones. Okay, so let's start. [Adjusting the slides.] First, an introduction to the problem and its motivation. I will start with the easiest case: the Dirichlet problem for the Laplacian. One can consider other types of domains, but we will consider a smooth domain; the regularity required of the domain depends on the order of the equation — for the Laplacian, for example, we can consider a domain with C^3 boundary. Okay. So we state this problem, and it is well known that it has a unique solution u that can be controlled by the norm of the data. In which norm? For example, if we take a function f in L^p for p between one and infinity, we can prove an a priori estimate that controls the norm of the solution in terms of the data f. This was shown in a classical work by Agmon, Douglis and Nirenberg.
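The problem and estimate on the slide are presumably the classical Agmon–Douglis–Nirenberg result; this is my reconstruction in standard notation:

```latex
\begin{cases} -\Delta u = f & \text{in } \Omega,\\ \phantom{-\Delta} u = 0 & \text{on } \partial\Omega, \end{cases}
\qquad
\|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p(\Omega)}, \quad 1 < p < \infty,
```

with a constant $C = C(\Omega, n, p)$.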
And in fact, they considered general uniformly elliptic operators; I just wanted to introduce the simplest case. So, this is an inequality with a constant in front — it is a constant, but I forgot to write it. Yes, depending on the domain, of course, and on the dimension. I'm sorry. Now, from this problem and this estimate, several questions arise that may or may not be solvable; we will see. I introduced the case of the Laplacian, but as I said before, one can consider other elliptic operators; I will give the form of those operators and the conditions they need to satisfy for the a priori estimate to hold. But first I want to tell you in general what the problems related to this program are, and then I will discuss them in more detail. Okay, so: first, this result can be extended to more general uniformly elliptic operators, of which the negative Laplacian is a particular case. This was also done by Agmon, Douglis and Nirenberg — in fact they considered this type of operators, I just wrote the case of the Laplacian, and I will show you later what they look like. When we say an elliptic operator, we mean a differential operator — involving derivatives of a function up to a certain order — multiplied by coefficients, functions with certain properties. In the case I will show you later, the coefficients are smooth. So we can ask whether the result extends to a bigger class of coefficients; I will show this in detail later — for now I just want to state the general ideas. So: we can extend the result to coefficients in a bigger class. Another problem that can be considered is what happens if we assume less regularity of the domain.
Okay, I said that for the case of the Laplacian — it depends on the order of the operator, whether we consider derivatives of order two or 2m. So, what happens if we want to assume less regularity? There are some results for polygonal and convex domains, for example for the Laplacian. I don't know much about this because I'm not working on that problem, but it is another possible question for this type of operators. Another possible problem — the one I studied during my PhD — is to consider other norms; I will give the definition of those norms in a couple of slides. That's the problem we can solve. (There are some typos here.) Another problem that arises is what happens if p is equal to one: I said that the data f was in L^p for p between one and infinity — what happens when p equals one? Which subspace of L^1 is suitable? Because it is well known that in L^1 the result does not hold: we don't have the solvability and the a priori estimate. We are currently working on that problem; I'll say something about it at the end of the talk. So I want to start giving more details about the problem I mentioned: other norms. Before, we considered f in L^p; now we want to consider the data f in weighted Lebesgue spaces. What do we mean by weighted Lebesgue spaces? We consider this norm. And what is w? w is a weight. What is a weight? A measurable, almost-everywhere positive function. So we will consider this type of norms. Now, the special case we will consider is that of Muckenhoupt weights, because a lot of the theory of harmonic analysis extends from Lebesgue spaces to weighted Lebesgue spaces for Muckenhoupt weights — many results of harmonic analysis can be carried over to the weighted setting. So we consider Muckenhoupt weights.
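The weighted norm being described is presumably the standard one:

```latex
\|f\|_{L^p_w(\Omega)} = \left( \int_\Omega |f(x)|^p\, w(x)\, dx \right)^{1/p},
```

where the weight $w$ is a measurable function with $w > 0$ almost everywhere.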
So, for those who don't know it, we recall the definition of a weight in A_p: the supremum must be finite, and the supremum is taken over all cubes in R^n. So we will consider this type of weights, and the range of the power depends on the number p. Why did we consider this problem? I didn't choose it myself — my supervisor proposed it when I decided to start my PhD — but since he also works in numerical analysis, he knew that this kind of estimate has applications to the finite element method. When we want to approximate the solution of some elliptic equation, we consider a triangulation of the domain and approximate the solution on each triangle — the approximations are, for example, polynomials on each triangle — and we want to know whether that approximation is good or not: what about its stability? The idea is that sometimes, when you take some data and solve an equation, the data is not in L^p but it is in L^p_w, and you can use that to prove that the approximate solution converges well to the original solution. So it can have applications. In fact, the a priori estimates that are used there are estimates for first order derivatives; the estimates for second order derivatives are not needed. So let's state the problem in detail. We consider a bounded domain Omega and this problem, where now L is a general operator — just for simplicity we will consider order two, but everything I tell you today can be done for any order 2m — and the operator is of this form: we consider derivatives of order two multiplied by certain coefficients, functions which sometimes have some regularity. For example, in the case we studied at the beginning, we consider coefficients in C^3.
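The A_p condition being recalled is presumably the usual Muckenhoupt condition; in standard notation:

```latex
[w]_{A_p} = \sup_{Q}\ \left( \frac{1}{|Q|}\int_Q w \right)\left( \frac{1}{|Q|}\int_Q w^{-\frac{1}{p-1}} \right)^{p-1} < \infty, \qquad 1 < p < \infty,
```

where the supremum runs over all cubes $Q \subset \mathbb{R}^n$; the exponent $-\tfrac{1}{p-1}$ is the power depending on $p$ mentioned above.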
And for the operator to be elliptic, we need some properties of the coefficients. The conditions for uniform ellipticity are these — in the sense of Agmon, Douglis and Nirenberg, this is standard — and of course the typical example is the negative Laplacian. So, as I said, the regularity of the coefficients needed to solve this problem and obtain the a priori estimate depends on the order of the operator; we will consider just the case of order two. This problem was solved by Agmon, Douglis and Nirenberg: they represented the solution in terms of something similar to a Calderón–Zygmund singular integral operator, and using that theory they proved this estimate. So, they represent the solution in terms of operators, and then, using this representation formula, they obtain a representation formula for the second order derivatives. I say second order because the operator is of order two; if the operator has order 2m, the most important thing to estimate is the derivative of order 2m. Okay, so what we did with Ricardo was to consider this problem in the weighted setting, for weights in A_p: the same problem, trying to extend the estimate to weighted norms. I will describe the technique used to obtain this extension. I won't state many results, because I want to show you the connection between PDEs and harmonic analysis — how we use the techniques of harmonic analysis to obtain results in PDEs — so I will sketch the ideas behind every result I state today. We obtained this estimate for the same problem, the same operator, the same regularity of the coefficients — the smoothness is the same — but now with a weight in the Muckenhoupt class A_p. Okay, so what's the idea to obtain this estimate?
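A plausible reconstruction of the operator, the uniform ellipticity condition, and the weighted estimate described here (the precise form is on the slides; the constants $\lambda, \Lambda$ are my notation):

```latex
Lu = \sum_{i,j=1}^{n} a_{ij}(x)\,\partial_{x_i x_j} u, \qquad
\lambda\,|\xi|^2 \le \sum_{i,j=1}^{n} a_{ij}(x)\,\xi_i\,\xi_j \le \Lambda\,|\xi|^2 \quad \forall\,\xi \in \mathbb{R}^n,\\
\|u\|_{W^{2,p}_w(\Omega)} \le C\,\|f\|_{L^p_w(\Omega)}, \qquad w \in A_p .
```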
So, in the smooth case, it is well known that the solution of the equation I introduced before can be written via this operator. What is G? G is the Green function associated to the equation — it depends on the domain, of course — and the function G and some of its derivatives satisfy pointwise estimates that depend on the operator and on the domain. We can consider a smooth function f here — it says f in C_0^infinity, meaning a function with infinitely many derivatives and compact support. Why can we consider this? Because these functions are dense both in L^p and in L^p_w, so we can use a density argument to extend everything to any function f in L^p_w. So, when necessary, we consider this kind of functions. Using this representation of u, we can obtain two inequalities that are very useful for proving the weighted a priori estimate. It's not just the representation: one uses the representation to obtain the second order derivatives of the solution. I will show you later the expression for the second derivative of u — it is in terms of the second order derivatives of the Green function — and using that representation one obtains two inequalities: a pointwise inequality and a weighted norm inequality, which together prove the weighted a priori estimate. The unweighted case is also needed in the proof. Maybe it won't be clear how we use the unweighted case, because it's very technical and involves many calculations, but I'll give you the general idea. So, the first inequality is a pointwise inequality — I will give the definitions now — involving the sharp maximal function of the operator. This is the sharp maximal function.
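The representation via the Green function is presumably the usual one:

```latex
u(x) = \int_\Omega G(x,y)\, f(y)\, dy, \qquad x \in \Omega,
```

where $G$ is the Green function of the operator on $\Omega$, and $f \in C_0^\infty(\Omega)$ (extended to all $f \in L^p_w$ by density).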
I will write the definition here. The sharp maximal function of Tf is controlled by the Hardy–Littlewood maximal function of the function raised to the power s, everything to the power one over s, for any s greater than zero. And what is T? T is any Calderón–Zygmund operator, represented in this form, where the kernel K satisfies the standard Calderón–Zygmund properties — the control of the size of the kernel. What is the sharp maximal function? It takes the supremum, over cubes, of the oscillations of the function f; the Hardy–Littlewood maximal operator takes the supremum of the mean values of the modulus of f. Okay, so how is this related to what I was saying about the weighted a priori estimate? This inequality is well known to be the tool for proving boundedness of this type of operators in weighted norms: using it, one can prove that these operators are bounded on L^p_w for w in A_p. And how are we going to relate this to the weighted a priori estimate? I will show you in the next slide — I think in one or two more slides — that here we can substitute, instead of Tf, the second order derivative of u. Then we will see why this estimate is important for the weighted a priori estimate: this inequality can be extended, replacing the data here by the second order derivative of u, the solution. So, this is the first key inequality, a pointwise inequality. The other is a weighted norm inequality — but first, let me show you how the pointwise inequality is written in the case where we put the second order derivative of u here.
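In standard notation, the two maximal functions and the pointwise inequality being described are presumably:

```latex
M^{\#}f(x) = \sup_{Q \ni x} \frac{1}{|Q|}\int_Q |f - f_Q|, \qquad
Mf(x) = \sup_{Q \ni x} \frac{1}{|Q|}\int_Q |f|,\\[2pt]
M^{\#}(Tf)(x) \le C_s \left( M(|f|^s)(x) \right)^{1/s},
```

where $f_Q$ denotes the average of $f$ over the cube $Q$, $T$ is a Calderón–Zygmund operator, and the inequality holds for the range of $s$ stated in the talk.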
And here, note that it says Omega: instead of all cubes in R^n, when we consider the oscillation we consider only cubes contained in Omega. Okay. I won't give the proof of this estimate, but it can be proved: the pointwise inequality for any Calderón–Zygmund operator can be extended to the second order derivatives of the solution, because the solution can be expressed in terms of operators that satisfy good pointwise inequalities, and thanks to those we can generalize the pointwise estimate. Next, the second key inequality is the Fefferman–Stein inequality, which is well known for the case of Omega equal to R^n: the norm of a function f over R^n is controlled by the norm of its sharp maximal function. Let me write it here for the case of R^n. This was proved by Fefferman and Stein — that is why we call it the Fefferman–Stein inequality — and there is also a weighted version of it, still on R^n. It can be extended to domains, considering weights on the domain Omega, and we can prove it in weighted norms. Of course, the proof of this is based on the R^n case, and there are a lot of technical details, but the idea is similar. So we obtain this inequality, where f_Omega means the average of f over Omega. So how do we use these two inequalities to prove the weighted a priori estimate? We need to bound the second order derivatives in L^p_w(Omega), but we can subtract the average of f: it's no problem if we bound the difference, because the average term can be controlled, so it suffices to bound this term. So, how can we apply the last two estimates?
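The Fefferman–Stein inequality and its local weighted version are presumably of the form (the local version is my reconstruction from the description):

```latex
\|f\|_{L^p(\mathbb{R}^n)} \le C\,\|M^{\#}f\|_{L^p(\mathbb{R}^n)},\\[2pt]
\|f - f_\Omega\|_{L^p_w(\Omega)} \le C\,\|M^{\#}_{\Omega}f\|_{L^p_w(\Omega)}, \qquad w \in A_p,
```

where $f_\Omega$ denotes the average of $f$ over $\Omega$ and $M^{\#}_{\Omega}$ is the sharp maximal function taken over cubes contained in $\Omega$.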
So we bound, via the Fefferman–Stein inequality, by the local sharp maximal function of the second order derivatives of u — we can write that here. Then let's go back to the pointwise estimate: we can apply it, and write here M of |f|^s to the power one over s, in the L^p_w norm. (In fact, the support doesn't matter: we can extend the function by zero outside Omega and there is no problem.) What happens here? The Hardy–Littlewood maximal operator M is bounded on L^p for any p greater than one — but here we don't have the L^p norm; because of the power, this is like the L^{p/s} norm. It is well known that the maximal operator M is bounded on a weighted L^q space when the weight is in A_q; but here we have the norm L^{p/s}_w, while our hypothesis — since we want to prove the weighted a priori estimate for weights in A_p — is that w is in A_p. How can we fix the exponent? By a property of Muckenhoupt weights: it is well known that if a weight is in A_p, it is in a smaller class — w is in A_{p/s} for some s greater than one. The pointwise sharp inequality holds for any s greater than zero, so we can choose the s for which w is in A_{p/s}, and the problem is solved. (The constant always depends on the weight — I forgot to write the weight here, I'm sorry.) So here we obtain the bound: the second order derivative of u, minus the average of the second order derivative, is controlled, in the weighted norm, by the weighted norm of the data f.
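Under the openness property of the $A_p$ classes (if $w\in A_p$ then $w\in A_{p/s}$ for some $s>1$), the chain of estimates just described can plausibly be written as:

```latex
\big\|\partial^2 u - (\partial^2 u)_\Omega\big\|_{L^p_w(\Omega)}
\;\lesssim\; \big\|M^{\#}_{\Omega}(\partial^2 u)\big\|_{L^p_w(\Omega)}
\;\lesssim\; \big\|M(|f|^s)^{1/s}\big\|_{L^p_w}
\;=\; \big\|M(|f|^s)\big\|_{L^{p/s}_w}^{1/s}
\;\lesssim\; \|f\|_{L^p_w(\Omega)},
```

using in the last step that $M$ is bounded on $L^{p/s}_w$ for $w \in A_{p/s}$.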
Okay, I won't do all the calculations — I'm just saying that one can subtract the average: we write the average, bound it using the definition, Hölder's inequality, and properties of the weight. So, in conclusion, we obtain the weighted a priori estimate using these two inequalities. Now, I said I wanted to show you the expression for the second order derivative of u that allows us to obtain the two inequalities, the pointwise one and the local Fefferman–Stein inequality. The second order derivative has this representation in terms of the second derivatives of the Green function associated to the problem; and here, this is a bounded function, so this part of the operator is no problem — it is enough to bound this other part. That is where the work is: it suffices to consider this operator and obtain the estimate for it. Since this kernel — the second order derivative of the Green function — is similar to a singular integral kernel, this is a good estimate that can be used to prove the pointwise inequality between the sharp maximal function and the Hardy–Littlewood maximal operator. Okay, so that's the idea. As I said before, the local weighted version of the Fefferman–Stein inequality can be extended, but it is not trivial — the idea is similar, but it is not trivial. And I also said that this can be extended to operators of order 2m; just for simplicity of notation I consider the case of order two. Now, what's the problem we consider next? That was the part we did during my PhD; then, a couple of years ago, we considered the case of another type of coefficients, because until then we had considered smooth coefficients.
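A plausible form of the representation of the second derivatives mentioned here (the exact decomposition is on the slide; this is the standard shape of such formulas):

```latex
\partial_{x_i x_j} u(x) \;=\; \mathrm{p.v.}\!\int_\Omega \partial_{x_i x_j} G(x,y)\, f(y)\, dy \;+\; c_{ij}(x)\, f(x),
```

where $\partial_{x_i x_j} G(x,y)$ satisfies Calderón–Zygmund-type kernel bounds and $c_{ij}$ is a bounded function, so the second term is harmless and the work goes into the principal-value integral.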
What happens if we want to consider a bigger class — one that of course contains the smooth coefficients, but whose coefficients are not necessarily smooth? Which class is suitable? To answer this question, we looked at a work by Chiarenza and coauthors from long ago that considers the class of vanishing mean oscillation coefficients — I will give the definition later. We examined whether this class is also suitable for our case of weighted norms; we will see that it is. The ideas needed to show this are different from the ones I presented so far: since we now consider not-necessarily-smooth coefficients, we lose the estimates for the Green function. We had the representation in terms of G, with estimates depending on the operator — the Green function is associated to the operator — so if the coefficients are not smooth, we don't have the same estimates, and we cannot, at least directly, use these ideas. Instead, we use some standard PDE methods to extend the result to vanishing mean oscillation coefficients. So let me define the space of vanishing mean oscillation functions. First we need to define bounded mean oscillation: a function has bounded mean oscillation when its sharp maximal function is in L^infinity — that is, the oscillations over all cubes in R^n are bounded. For a function of bounded mean oscillation, we then consider the oscillation over small cubes: we will require, for a function to be in the vanishing mean oscillation space, that the oscillations tend to zero as the radius of the cubes tends to zero.
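In standard notation, the two conditions just described are presumably:

```latex
f \in BMO \iff \sup_{Q \subset \mathbb{R}^n} \frac{1}{|Q|}\int_Q |f - f_Q| < \infty,\\[2pt]
f \in VMO \iff \lim_{r \to 0}\ \sup_{\ell(Q) \le r} \frac{1}{|Q|}\int_Q |f - f_Q| = 0,
```

where $f_Q$ is the average of $f$ over the cube $Q$ and $\ell(Q)$ its side length.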
So: bounded mean oscillation — the oscillations are bounded; vanishing mean oscillation — the oscillations vanish as the radius of the cubes tends to zero. We will assume vanishing mean oscillation for the coefficients of our operator, of our equation. So let us state the main result. We consider an operator of this type, where now the coefficients are in the class of vanishing mean oscillation; there is also a weight in A_p; and the domain is still smooth because, as I said, we won't consider the case of non-smooth domains — we just work with this class of domains. Then this problem, for data f in the weighted L^p space, has a solution u that satisfies the following weighted a priori estimate. So this is the solvability and the estimate. And there is another result we can state: the solution is unique — if we take f equal to zero, so Lu is equal to zero for a function u in this space, then the solution must be zero; we will define what "zero" means here. This is for a weight in A_p with p between one and infinity, and it is important that p is strictly between one and infinity, because this theory of singular integrals — or of operators similar to singular integrals — does not hold for p equal to one. The space here is a weighted Sobolev space; I won't give the definition, the derivatives are weak derivatives. And when we write a zero subscript here, it means the closure of the C-infinity functions with compact support, the closure taken in this space, with respect to its norm. Now, some details of the proof: I want to show how we use the techniques of harmonic analysis to obtain a result like this — from these topics, I like how the two areas mix. This is very standard in PDEs; what I will describe is not my own idea.
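The main theorem can plausibly be stated as follows (my reconstruction from the description; the exact hypotheses are on the slide):

```latex
\textbf{Theorem.}\ \text{Let } \Omega \text{ be smooth, } a_{ij} \in VMO,\ w \in A_p,\ 1 < p < \infty.
\text{ Then for every } f \in L^p_w(\Omega),\\
\text{the problem } Lu = f \text{ in } \Omega,\ u = 0 \text{ on } \partial\Omega,
\text{ has a unique solution } u \in W^{2,p}_w(\Omega)\cap W^{1,p}_{w,0}(\Omega), \text{ and}\\
\|u\|_{W^{2,p}_w(\Omega)} \le C\,\|f\|_{L^p_w(\Omega)} .
```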
So, first one proves the estimate in the case of a half ball. Why is it important to consider a half ball? Because when one considers a domain Omega, the idea is to transform a neighborhood of the boundary into a half ball, using a diffeomorphism. So the general idea is this: first consider the half ball, because we will obtain interior estimates — estimates inside the domain — and estimates close to the boundary of the domain. Near the boundary, we apply a diffeomorphism onto a half ball, use the result known for the half ball, and when we go back, the original norms are equivalent under this transformation. So one can make a covering of the domain: in the interior we use the interior estimate; close to the boundary we transform into half balls. So the idea is to prove the estimate on half balls. Don't be afraid of this integral — what matters is that the function u has a representation in terms of operators with good properties. What I want to point out is that this difference is important: since the coefficients are of vanishing mean oscillation, this term can be controlled, and these operators are bounded. So we have to prove the result for the half ball — I don't have time to do the proof, but that is the case one must handle. As I said, we prove interior estimates, and near the boundary we make a transformation of the domain into a half ball. This gives the estimate — the control of the second order derivatives in terms of the data — but with an extra term here that we have to see how to eliminate.
Okay, so one first proves the interior and near-boundary estimates, and using these two estimates, by a covering of the domain, one can prove this kind of estimate — but here we still have the lower-order term that we need to eliminate. Now, this part: I said before that we transform a neighborhood of the boundary into a half ball. Of course, transforming the domain changes the equation, and the transformed equation has this form: the coefficients b_{ij} are of the same type as the a_{ij} of the original equation — we had one equation, and after transforming, a new equation appears; the important point is that it is of the same type. It can be proved that the new coefficients are also of vanishing mean oscillation. And if the coefficients are in vanishing mean oscillation, we can consider the case of balls, because for the interior estimate, instead of a half ball we can put a ball here. So, when we apply the diffeomorphism, the coefficients are still in vanishing mean oscillation, and the transformed data is also in L^p — but with respect to the transformed weight. Let me show you: we transform the equation; the new coefficients are still in vanishing mean oscillation; the A_p class is invariant under diffeomorphisms, so the transformed weight is in A_p; and the weighted norms of the solution and of its transform are equivalent. The result on the slide is stated for the half ball, but the case of the half ball is similar to the case of the ball. So one makes a covering of the domain to obtain the estimate I showed you before, with the extra term — let me go back: we obtain this estimate by making the covering. This is just what I said before, I'm not following the order of the slides: we combine the near-boundary and interior estimates through a covering.
Okay, and it is important that we obtain equivalent norms, using that the transformed weight is also in A_p. Now, to prove uniqueness, it is necessary to use the Aleksandrov–Pucci maximum principle, which is well known in PDEs: we have this inequality for a solution of this equation, with u in this Sobolev space, and of course the analogous bound holds with the infimum. So we can use this maximum principle to prove the uniqueness for our problem. How? I won't follow the whole calculation, but the idea is to reduce the weighted case to the unweighted case. It can be proved — let me show you, here it says the following — that if we know that u is in some Sobolev space for some exponent q and f is in L^p_w, then for any p greater than or equal to q we have this estimate. We apply that in order to choose a q here greater than p: here p plays the role of q in the inequality I showed you before, so this exponent is greater than p, and according to that result, u is in the Sobolev space with exponent q. Then, by Hölder's inequality and using properties of A_p weights, we can show that the solution is in the right space and we can apply the Aleksandrov–Pucci maximum principle. We need to show that this integral is finite; we do it by Hölder's inequality, using that the dual weight is in A_{p'}, so this integral is finite, and we conclude that the solution must be zero. The idea, in short, is that we reduce the weighted case to the unweighted case. The last step: we need to prove the solvability and the estimate, so we need to eliminate the lower-order term I showed you before. The idea is standard: to prove solvability, one approximates our coefficients, which are in the vanishing mean oscillation class, by C-infinity coefficients.
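The maximum principle being invoked is presumably the Aleksandrov(–Bakelman)–Pucci estimate; in one standard form (my reconstruction):

```latex
\sup_{\Omega} u \;\le\; \sup_{\partial\Omega} u^{+} \;+\; C\big(n,\lambda,\Lambda,\operatorname{diam}\Omega\big)\, \|f\|_{L^n(\Omega)},
```

for a solution of $Lu = f$ with $u \in W^{2,n}_{\mathrm{loc}}(\Omega)\cap C(\overline\Omega)$, together with the analogous bound from below with the infimum.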
Okay, this is well known: for the solvability it is enough to use the solvability for smooth coefficients and approximate the solution we want to find by solutions of the smooth problems. To eliminate the lower-order term and obtain the clean estimate, it's necessary to use properties such as the weak compactness of the ball in the weighted Sobolev space and the compact embedding of these spaces. Of course, it's very technical, but the idea is to suppose that this inequality does not hold — so we have the opposite inequality — which means there is a sequence of functions in the Sobolev space, which we may normalize to have norm one, for which this tends to zero. This is equivalent to the estimate failing. One can then extract a subsequence converging weakly, by the weak compactness of the ball, and, by the compact embedding, converging strongly: u_m converges to u strongly in W^{1,p}_w, because the embedding is compact. The idea is then to conclude that Lu must be zero, and by uniqueness u must be zero — which should give a contradiction, because the norms were equal to one, so u cannot be zero. But it's not an immediate contradiction, because here there is a sum of norms; one has to use the estimate with the lower-order term we proved before, together with the uniqueness of the problem, to arrive at the contradiction. Maybe it's very technical — I explain it this way because I have already studied the topic and I know it's not easy to follow.
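The compactness argument can be sketched as follows (my reconstruction of the standard contradiction scheme):

```latex
\text{Assume } \|u_m\|_{W^{2,p}_w} = 1 \text{ and } \|L u_m\|_{L^p_w} \to 0.
\text{ Then, up to a subsequence,}\\
u_m \rightharpoonup u \ \text{weakly in } W^{2,p}_w, \qquad u_m \to u \ \text{strongly in } W^{1,p}_w \ \text{(compact embedding)},\\
\text{so } Lu = 0 \ \Rightarrow\ u = 0 \ \text{(uniqueness)}. \text{ But the estimate with the lower-order term,}\\
\|u_m\|_{W^{2,p}_w} \le C\big( \|L u_m\|_{L^p_w} + \|u_m\|_{L^p_w} \big),
\text{ forces } 1 \le C\big(o(1) + o(1)\big), \text{ a contradiction.}
```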
So this is a standard method to eliminate the lower-order term in the estimate, using a contradiction. I also want to mention some future work: as I said, since the operators appearing in this representation — singular integrals or commutators of singular integrals — are not bounded on L^1_w, we cannot solve the problem there; instead we consider a suitable subspace of L^1_w, namely the weighted Hardy spaces. I don't have time to tell you about that, but we are still working on that problem. So that's all — any questions or comments? — You mentioned on your last slide that you're looking at the original problem in a subspace of weighted L^1, right? — Yes. — The problem in a subspace of L^1 with the usual Hardy space has already been considered, I assume? What's the status of that — what can one say about L^1_w? — Yes, it was proved, at least for second order derivatives, for a function f in the local Hardy space: the Hardy space has the good property that if you multiply by a nice function, the product is still in H^1. And when you consider operators of these types, you need the local version of H^1 for these problems, because the product is not necessarily in H^1. It was shown for functions — for p equal to one we work with functions, not general distributions — that for this equation one can prove the uniqueness, the solvability, and the a priori estimate. We're investigating some of these problems, and some more general operators on weighted spaces. — Another question or comment? — I have a small question, maybe — I don't know if it makes sense — but at the beginning you mentioned that you work with elliptic operators and that the matrix of coefficients is symmetric: the a_{ij} are symmetric. Could you also consider the non-symmetric case?
— That is a standard condition for elliptic equations: when you say that something is elliptic, you assume some standard conditions on the equation. — Okay, no, I just asked because some other problems, dealing with rather rough domains, sometimes look non-symmetric, so I thought maybe you were assuming it for some special reason. — Okay, any other question or comment? If not, let's thank Eugenia. Thank you so much for coming.