Okay, I want to thank you again for coming back to this last lecture on elliptic homogenization. Today we are still looking at the same problem: the large-scale regularity of solutions to an elliptic equation with oscillating periodic coefficients. Last time, yesterday, you saw that we were able to use the compactness method to establish the interior and also the boundary Lipschitz regularity for the solution. However, toward the end of the lecture, you saw that, compared to the interior case, the boundary regularity is quite involved: in order to apply the compactness method, we have to prove a priori that the boundary corrector (the Dirichlet corrector for the Dirichlet problem, the Neumann corrector for the Neumann problem) is uniformly Lipschitz in order to carry out the compactness scheme. That is not a trivial task; even for periodic coefficients it takes several steps. So today we are going to present a different approach to the problem, and this method is somehow related to the talk you had in the morning: in order to prove the Lipschitz estimate, we are going to look at the excess decay of the solution. You look at the difference of the solution with a linear function, and you minimize among all linear functions, as you will see. Okay, so the setup is the same: we are working with a family of second-order elliptic operators in divergence form with a parameter epsilon, and we assume the coefficient matrix A is real, bounded, measurable, uniformly elliptic, and 1-periodic, all the same as in the previous lectures. Precisely, we assume the matrix is bounded and positive definite, and periodic with respect to the integer lattice. Notation-wise we will work with the scalar equation, but all the theorems and proofs extend directly to second-order systems in divergence form.
There is no real difficulty; nothing special about the scalar case is used, okay? So we proved this theorem yesterday using the compactness method. This is an interior large-scale Lipschitz estimate. I mentioned that once you have this estimate, you can combine it with a small-scale estimate, which follows from classical results by a blow-up argument, to obtain a full-scale Lipschitz estimate. But here we do not need any smoothness conditions on the coefficients, just bounded, measurable, elliptic, and periodic. So you have a solution; for simplicity I am also assuming the right-hand side is zero. You have a solution in the unit ball, and you look at the L2 average of the gradient of the solution on a ball of radius r, where r is between epsilon and one. The most important case is actually r equal to epsilon; once you have that, the remaining range follows. It is bounded by the L2 average of the gradient of the solution on the ball of radius one, okay? You saw the detailed proof yesterday. There are two steps. The first step is the compactness: you gain a one-step improvement. Then in the second step, you use an induction to iterate the gain you obtained in Lemma 1, all right? So that is the idea. Okay, so today, instead of the compactness method, which is a proof by contradiction (you argue by contradiction), we are going to look at a direct approach. This is due to Scott Armstrong and Charles Smart, who are both here. This direct method has advantages compared to the compactness method: no compactness theorem is needed, and we do not need to involve correctors, at least not in a direct way, and therefore it is applicable in non-periodic settings. Even in the periodic setting, we can use it to derive boundary regularity, because we do not need to prove a priori that the boundary corrector is Lipschitz. So it works in the almost-periodic setting, and also in the random case.
You will hear more about that next week, okay? The disadvantage is that this approach requires a convergence rate, even though only a very weak rate is needed. In practice, this convergence rate can be established by using so-called approximate correctors; we will talk a little bit toward the end about what an approximate corrector is in the almost-periodic setting. Okay, so what is the idea? You have already seen this in the other lecture: we are going to measure the excess decay rate relative to a linear function. So u_epsilon is a solution, okay? We are going to subtract an arbitrary linear function; M here is a vector, so it is a dot product. M is a vector in the scalar case; if you have a system, then you will have to deal with a matrix here. Then you average the difference in L2 on the ball of radius r, and you normalize the quantity by dividing by r, because this should be on the scale of the derivative, okay? Then you take the infimum among all vectors M and all real numbers q. So you try to approximate your solution, in an average sense, by linear functions, okay? So suppose you have a solution, say, in B_1, and epsilon is small, less than one half; I already indicated that if it is greater than one half, all the results follow from the classical theory. What we are going to do is show that this quantity H(r), which is already scaled, is bounded uniformly by the L2 average of the solution on the ball of radius one, and this is true for any r between epsilon and one half, okay? Between one half and one it would be a trivial statement. So this holds for any r greater than the small scale epsilon, but less than some fixed constant, okay? That is the goal. Once you have this, the large-scale Lipschitz estimate follows, okay? So the key observation in this scheme is the following approximation idea. What is the idea?
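In symbols, the excess quantity being described is the following (a reconstruction from the discussion; the barred integral denotes the average over the ball):

```latex
H(r) \;=\; \frac{1}{r}\,\inf_{M\in\mathbb{R}^d,\; q\in\mathbb{R}}
\left( \fint_{B_r} \bigl| u_\varepsilon(x) - M\cdot x - q \bigr|^2 \,dx \right)^{1/2},
```

and the goal is the uniform bound

```latex
H(r) \;\le\; C \left( \fint_{B_1} |u_\varepsilon|^2 \right)^{1/2}
\qquad \text{for all } \varepsilon \le r \le \tfrac12 .
```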
We try to prove that this quantity is uniformly bounded, so we are going to compare this function at two different scales, r and theta times r, where theta is some small but fixed number. Let us just look at what H(theta r) is, okay? By definition, it is just the scaled average, the scaled infimum, of u_epsilon minus a linear function. The first step is to use the triangle inequality. Very simple: you replace your solution by a function w, to be chosen later. Of course, by the triangle inequality, there is the error term you make: the difference u_epsilon minus w, and you still average this on the ball of radius theta r and divide by theta r, because that is in the definition of H(theta r). The first step is nothing more than the triangle inequality, okay? In the next step, this w is supposed to be a very nice function, say in C^{1,alpha}, so it has a scaling property: if you go from the scale r to the scale theta r, its excess decays, and a factor of one half comes out, okay? This is a pure property of the function w, to be chosen later; you can choose whatever function you like. So that is the second inequality; the first term is just copied, all right? In the next step, we go back from w to the solution u_epsilon again, okay? You simply replace w by u_epsilon, and then you see that this goes back to the definition of H(r), but carrying a factor of one half, okay? When you do that, the triangle inequality introduces another error term, which, combined with the earlier one, is bounded by a constant depending on theta, divided by r, times the L2 average of the difference of u_epsilon and w, okay? The choice of w is yours.
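Written out, the three steps just described chain together as follows (schematically, with w to be chosen and C(theta) a constant depending on theta):

```latex
\begin{aligned}
H(\theta r)
&\le \frac{1}{\theta r}\left(\fint_{B_{\theta r}} |u_\varepsilon - w|^2\right)^{1/2}
  + \frac{1}{\theta r}\inf_{M,q}\left(\fint_{B_{\theta r}} |w - M\cdot x - q|^2\right)^{1/2}
  && \text{(triangle inequality)}\\
&\le \frac{C(\theta)}{r}\left(\fint_{B_{r}} |u_\varepsilon - w|^2\right)^{1/2}
  + \frac{1}{2}\cdot\frac{1}{r}\inf_{M,q}\left(\fint_{B_{r}} |w - M\cdot x - q|^2\right)^{1/2}
  && (C^{1,\alpha}\ \text{decay of } w)\\
&\le \frac12\,H(r) + \frac{C(\theta)}{r}\left(\fint_{B_{r}} |u_\varepsilon - w|^2\right)^{1/2}
  && \text{(triangle inequality again).}
\end{aligned}
```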
So the choice we are going to make later on is that w is a solution of a constant-coefficient second-order equation in some ball of radius r, so we have all the regularity we want; in particular the function is in C^{1,alpha}, and so that term gives you one half if theta is sufficiently small, okay? That is the whole idea of the scheme; that is the key observation. Roughly speaking, what this means is that if you have a function which, at every scale greater than the parameter epsilon, can be well approximated by some function with a C^{1,alpha} estimate, a scaled version of the C^{1,alpha} estimate, then your function is Lipschitz, okay? It does not have to be a solution; this is a general kind of statement. All right, so let us see how to carry out this scheme in the interior case; later on I will just mention how to set it up for the boundary estimate. The idea is the same; technically it gets a little more complicated, but the main idea is already shown in the interior case. All right, so again we go back to this function H(r), which measures the excess decay of your solution relative to a linear function, okay? You take the infimum among all vectors M and real numbers q, okay? Now suppose this infimum is attained by some vector, which I am going to call M_r tilde, all right? The infimum over q is still taken; that is the definition: M_r tilde is just some vector which achieves the infimum here, okay? Then I take the magnitude of this vector M_r tilde, which certainly depends on r, and of course also depends on the solution u_epsilon, and call this quantity little h of r, okay? Now let us see how we estimate the gradient. In this setup everything is expressed in terms of the solution itself; you do not want to involve the gradient of the solution, because with merely bounded measurable coefficients we do not know much about the gradient of a solution.
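So the two quantities in play are, in symbols (again a reconstruction from the discussion):

```latex
H(r) = \frac{1}{r}\inf_{q\in\mathbb{R}}
\left(\fint_{B_r} \bigl|u_\varepsilon - \widetilde{M}_r\cdot x - q\bigr|^2\right)^{1/2},
\qquad
h(r) = \bigl|\widetilde{M}_r\bigr|,
```

where M_r tilde is a vector achieving the infimum in the definition of H(r).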
You want to set this up in terms of the solution itself. But as you know, you can bound the average of the gradient by the average of the solution by Caccioppoli's inequality, okay? The price you pay is that you go from B_r to B_{2r}, and you can subtract any constant, because if u_epsilon is a solution, u_epsilon minus a constant is also a solution; so you take the infimum in q here, okay? This quantity can be bounded by capital H(2r) plus little h(2r), because, again by the triangle inequality, you insert M_{2r} tilde dot x and then it is just the triangle inequality, okay? So remember that we want to bound this L2 average of the gradient on the ball of radius r uniformly. Now it suffices to bound these two terms, H(2r) and little h(2r), for any r between epsilon and one half; we cannot go below epsilon, because we are working with bounded measurable coefficients and there is no smoothness assumption on the coefficients. All right, so now what I want to convince you of is this: the problem is how to control these two quantities, capital H(2r) and little h(2r). These are the definitions, okay? Do you have any questions so far? Okay, if not, I want to state a general theorem. A version of this theorem can be found in a paper by Armstrong and Smart, but this version here is from one of my papers, where we tried to formalize the procedure. Here you have two functions; it has nothing to do with PDEs, it is a pure calculus problem. You can find a proof of this theorem in the lecture notes; there is a detailed proof, and it does not involve much, just Fubini's theorem, that is all, okay? So you have two functions, capital H(r) and little h(r), and I am assuming these are two nonnegative functions defined on the interval between zero and one, okay? You have a parameter epsilon here.
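The Caccioppoli step just described reads, schematically:

```latex
\left(\fint_{B_r} |\nabla u_\varepsilon|^2\right)^{1/2}
\;\le\; \frac{C}{r}\,\inf_{q\in\mathbb{R}}
\left(\fint_{B_{2r}} |u_\varepsilon - q|^2\right)^{1/2}
\;\le\; C\,\bigl( H(2r) + h(2r) \bigr),
```

where the last step inserts the linear function M_{2r} tilde dot x and uses the triangle inequality, since that linear part contributes at most a constant times 2r h(2r) on B_{2r}.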
I do not want to assume these functions are increasing, but I do want to assume that you can compare the values of the function capital H as you go from r to 2r: for t between r and 2r, the values H(t) are all bounded by H(2r), up to a constant, okay? These properties are all modeled on the application we have in mind, okay? That is one condition. The second condition is on little h: for any t and s between r and 2r, the difference of h(t) and h(s) can be controlled by H(2r), okay? That is the second condition. The third condition is that the value of H at theta r is bounded by one half of H(r) plus an error term: omega of epsilon over r, multiplied by H(2r) plus little h(2r). And this last factor is exactly the quantity we are trying to control, okay? We assume this is true for any r between epsilon and one half, okay? There is a fourth condition: omega is, say, an increasing continuous function with omega(0) = 0, but I also want it to satisfy a Dini integral condition. In particular, if omega(t) is a power of t, for some positive exponent, then this condition is satisfied, okay? This is the Dini condition. And the conclusion is that H(r) plus little h(r) is bounded by their value at one, okay? Here r is any number between epsilon and one, okay? This estimate is the one we want to control, to bound. H(1) and little h(1) can be bounded by the L2 average of the solution at scale one. So we are able to go from scale epsilon to scale one. That is the idea, okay? Do you have any questions? Okay, so going back to the theorem: this is the quantity we had earlier that we want to control, H(theta r), and you have this error term. So now the problem is: how do you control this error term?
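Collecting the hypotheses, the lemma has the following shape (my reconstruction; compare the version in the lecture notes). Suppose, for all r between epsilon and one half:

```latex
\begin{cases}
\displaystyle \max_{r\le t\le 2r} H(t) \;\le\; C\,H(2r),\\[6pt]
\displaystyle \max_{r\le t,\,s\le 2r} |h(t)-h(s)| \;\le\; C\,H(2r),\\[6pt]
\displaystyle H(\theta r) \;\le\; \tfrac12\,H(r)
  + C\,\omega\!\left(\tfrac{\varepsilon}{r}\right)\bigl\{H(2r)+h(2r)\bigr\},\\[6pt]
\displaystyle \int_0^1 \frac{\omega(t)}{t}\,dt < \infty
\qquad (\text{the Dini condition}).
\end{cases}
```

Then the conclusion is:

```latex
\max_{\varepsilon \le r \le 1} \bigl\{ H(r) + h(r) \bigr\}
\;\le\; C\,\bigl\{ H(1) + h(1) \bigr\}.
```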
So in other words, if you have a solution, can you approximate it in the L2 norm by some function with a C^{1,alpha} estimate? And you also want the rate of approximation to satisfy the Dini condition. This will be done by a convergence rate in homogenization. As far as the scheme is concerned, I have not used anything related to the operator, or anything related to homogenization. So now we go back to the homogenization problem. The problem is this: suppose you have a solution on the ball of radius 2r, with r greater than the parameter epsilon, and we want to find some w satisfying an equation with constant coefficients, so that the L2 average of u_epsilon minus w is bounded by a small quantity, omega of epsilon over r, times the L2 average of the solution on the ball of radius 2r. And further we want this quantity omega to satisfy the Dini condition, okay? That is the problem. Also, by rescaling, it suffices to prove this fact at scale one: the operator in this form scales well, so if you can do this for r equal to one, then you can do it for any r, okay? Once you have this estimate, it produces the inequality we wanted, and that will allow us to use the general scheme on the previous page, okay? So now it becomes a convergence-rate problem. Okay, so I want to show you a theorem on convergence rates. This is not necessarily sharp, because we do not need a sharp convergence rate; we only need a rate that is Dini. So we have a bounded Lipschitz domain, and you have a Dirichlet problem, okay? Let us just look at the case where the right-hand side is zero and the boundary data is f, okay? And u_0 is the solution of the homogenized problem.
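In symbols, the approximation problem is the following (schematically; here L_0 denotes the constant-coefficient homogenized operator):

```latex
\mathcal{L}_\varepsilon u_\varepsilon = 0 \ \text{in } B_{2r},\ \ r\ge \varepsilon
\quad\Longrightarrow\quad
\exists\, w:\ \mathcal{L}_0 w = 0 \ \text{in } B_r
\ \text{and}\
\left(\fint_{B_r} |u_\varepsilon - w|^2\right)^{1/2}
\le \omega\!\left(\tfrac{\varepsilon}{r}\right)
\left(\fint_{B_{2r}} |u_\varepsilon|^2\right)^{1/2},
```

with omega satisfying the Dini condition.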
So in other words, u_0 satisfies the equation for the operator L_0, the homogenized operator, in Omega, with the same boundary data as before. The estimate is that the L2 norm of the difference is bounded by epsilon to some power alpha, for some alpha positive. That is all we need in order to carry out the scheme; the Lipschitz estimate requires very little as far as the convergence rate is concerned. If you do have symmetry, you can actually take alpha to be one half, but we will not need that, okay? All we need is some positive alpha, because that gives a Dini convergence rate. All right, so how do you prove this? We go back to the material from lecture two on establishing convergence rates. We are going to look at u_epsilon minus u_0 minus epsilon times the corrector, multiplied by a cutoff near the boundary and applied to a smoothing of the gradient of u_0. Remember, we do not assume any smoothness conditions on the coefficients, so we have to smooth out the solution here; we do not know the corrector is bounded, we only know it is bounded in L2 on each periodicity cell. All right, so the method we presented in the second lecture gives us this estimate; it actually gives you the convergence rate in H1. It is bounded by epsilon times the L2 norm of the second-order derivatives of u_0 in the interior, away from the boundary by distance epsilon, okay? That is on Omega minus Omega_epsilon, where Omega_epsilon is a boundary layer of thickness epsilon, so that integral takes place away from the boundary, but it does involve second-order derivatives. The second term is no problem, because you do have an epsilon in front. In the third term you integrate over the boundary layer, but there is no epsilon in front of the norm, okay?
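Schematically, the function and the H1 estimate from lecture two have the following form (my reconstruction; chi is the corrector, eta_epsilon a cutoff vanishing near the boundary, S_epsilon a smoothing operator at scale epsilon, and Omega_epsilon the boundary layer of thickness epsilon; the exact form of the middle term is an assumption on my part):

```latex
w_\varepsilon \;=\; u_\varepsilon - u_0
  - \varepsilon\,\chi\!\left(\tfrac{x}{\varepsilon}\right)
    \eta_\varepsilon\, S_\varepsilon(\nabla u_0),
```

```latex
\|\nabla w_\varepsilon\|_{L^2(\Omega)}
\;\le\; C\Bigl\{
\varepsilon\,\|\nabla^2 u_0\|_{L^2(\Omega\setminus\Omega_\varepsilon)}
+ \varepsilon\,\|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla u_0\|_{L^2(\Omega_\varepsilon)}
\Bigr\}.
```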
Then by the Poincaré inequality this gives you the L2 norm of w_epsilon, and you simply move that term to the right-hand side; you get the L2 norm of the difference of u_epsilon and u_0, okay, and you still have the same right-hand side. Okay, so now the goal is to generate some power of epsilon from each of these terms. The second term we do not need to worry about; epsilon is already there, and we can control the L2 norm of the gradient. We have to generate some power of epsilon from the first and the last terms, okay? That is the idea. So let us talk about how to generate a power of epsilon from the first and last terms. For the first term, we can use the interior estimate: remember, u_0 is a solution of an equation with constant coefficients, so you do have an interior estimate, and if you use the second-order interior estimate together with Hölder's inequality, it generates a power of epsilon to the negative one half minus one over p, times the L^p norm of the gradient, where p is any number greater than two, okay? All right, so remember, you still have an epsilon in front; when you put this back, it kills a power of one, so you are left with one half minus one over p, and since p is greater than two, this power is positive, okay? The price you pay is that p has to be greater than two, and now we can use a Meyers-type estimate to bound the L^p norm of the gradient, for some p greater than two which depends on the ellipticity, by the H1 norm of the boundary data. So altogether you get a power sigma, with sigma equal to one half minus one over p for some p greater than two, so this is positive, okay? This argument does not require any regularity on the coefficients, and it works for Lipschitz domains; nothing more than a Lipschitz domain Omega is needed. Okay, so any questions?
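The power counting just described is, schematically:

```latex
\varepsilon\,\|\nabla^2 u_0\|_{L^2(\Omega\setminus\Omega_\varepsilon)}
\;\le\; C\,\varepsilon \cdot \varepsilon^{-\frac12-\frac1p}\,
\|\nabla u_0\|_{L^p(\Omega)}
\;=\; C\,\varepsilon^{\sigma}\,\|\nabla u_0\|_{L^p(\Omega)},
\qquad \sigma = \tfrac12 - \tfrac1p > 0,
```

where the exponent p > 2 comes from the Meyers-type estimate bounding the L^p norm of the gradient of u_0 by the H1 norm of the boundary data f, with p depending only on the ellipticity (and the Lipschitz character of Omega).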
All right, so that is the convergence rate. However, that is not enough; we have one more step to go to generate the approximation estimate we need. So again, here is a theorem: we have a solution, say in a ball of radius two, and then I claim that there is a solution w of the homogenized equation in a ball of radius one, the ball being B_1, so that the L2 average of u_epsilon minus w is bounded by epsilon to the power alpha times the L2 average of the solution, for some positive alpha, okay? The difference here is that we do not need this function w to be a solution in B_2; we work with a smaller ball, and that gives us freedom to choose the w. So the w here is not necessarily the homogenized solution, because we do not even specify the boundary data for u_epsilon in B_2.
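In symbols, the approximation theorem being claimed is:

```latex
\mathcal{L}_\varepsilon u_\varepsilon = 0 \ \text{in } B_2
\quad\Longrightarrow\quad
\exists\, w:\ \mathcal{L}_0 w = 0 \ \text{in } B_1
\ \text{and}\
\left(\fint_{B_1} |u_\varepsilon - w|^2\right)^{1/2}
\;\le\; C\,\varepsilon^{\alpha}
\left(\fint_{B_2} |u_\varepsilon|^2\right)^{1/2},
```

for some alpha > 0; note that w need only solve the homogenized equation in the smaller ball B_1.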