I mean, the L∞ norm of the second derivative is bounded by the L² norm of the solution itself. And then, since the L² average of the solution on B₁ is less than or equal to 1, the second derivative on B_{1/2} is bounded by some constant C₀. So now, for the contradiction argument, you want to choose a θ so that C₀θ⁴ is less than θ^{2+2σ}. Sigma is less than 1, so 2+2σ is less than 4, and such a θ exists. Remember, the C₀ here depends only on the dimension and the ellipticity constant μ. Therefore the choice of θ only depends on the dimension and the ellipticity constant μ. And that's the end of the proof of the first lemma. Do you have any questions?

[Question from the audience.] Yes — it's in the compactness argument. OK, I had forgotten that, but we have it. First of all, Â is a constant matrix, and A⁰ is a limit of the Â's — so it's a limit of constant matrices. If you only take one, you would just have A⁰ equal to Â; but if you have a sequence, the limit will still be a constant matrix. So yes, that's right: Â is the effective matrix, which is constant. For each k it's already a constant, so when you take the limit it's still a constant matrix — the limit here is just a limit in the Euclidean space of d-by-d matrices. Thank you. So that's the end of the proof.

So, compactness. What the compactness theorem tells you is that if the limit is good, your sequence is not too bad — that's the idea. The limit here has C² regularity, and here we prove that the sequence inherits an estimate uniformly in ε. That's the idea.

All right, but this is not enough. In the second step, we're going to iterate this. The first step is the so-called one-step improvement: you only gain a little bit, and that's not going to be enough. In the second step we simply iterate — this is an induction argument.

So σ, again, is a number greater than 0 and less than 1, and ε₀ and θ are the numbers produced by the first lemma. Now you have a solution in B₁, and now ε is much smaller: not just less than ε₀, but less than ε₀ times θ^{k−1}. Since θ < 1, this ε could be much smaller than ε₀, depending on how large k is. Then there are two constants — one a scalar, the other a vector — such that when you average on a ball of radius θ^k (the same k as here), u_ε minus the scalar E_k, minus the constant vector h_k dotted with x + εχ(x/ε) — that's a linear function plus the rescaled correctors, and this combination is a solution — is less than θ^{(2+2σ)k} times the L² average of u_ε squared on B₁. And further, I have control on this h_k: it's bounded by a sum over l from 1 to k of θ^{σl}, times the L² average of u_ε. Certainly θ is less than 1, so this is a geometric series, which converges — but I want to stop the sum at k; it doesn't need to go to infinity. (See the display below for one way to write this out.)

So this is the second step, the iteration, whose proof we're going to see now. And then we'll see how to use this estimate to prove the large-scale interior Lipschitz estimate.

All right, so the proof is done by induction on k. The case k = 1 is already given by the first lemma — it's precisely the statement of the first lemma. Now we do the induction argument: suppose it's true for k; we want to prove it for k+1. So we have a solution in B₁, where ε is smaller than ε₀ times θ^{(k+1)−1} — so the power here is k now.
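Since the statement is hard to follow in spoken form, here is one way to write the iteration lemma down — a hedged reconstruction from the transcript; in particular, which of E_k (scalar) and h_k (vector) carries which role, and the exact form of the geometric series, follow my reading rather than a verbatim source. Here \fint denotes the L² average over the indicated ball:

\[
\fint_{B_{\theta^k}} \Big| u_\varepsilon - E_k - h_k \cdot \big( x + \varepsilon \chi(x/\varepsilon) \big) \Big|^2 \, dx
\;\le\; \theta^{(2+2\sigma)k} \fint_{B_1} |u_\varepsilon|^2,
\qquad
|h_k| \;\le\; C \sum_{l=1}^{k} \theta^{\sigma l} \left( \fint_{B_1} |u_\varepsilon|^2 \right)^{1/2},
\]

valid whenever \( \mathcal{L}_\varepsilon u_\varepsilon = 0 \) in \( B_1 \) and \( \varepsilon \le \varepsilon_0 \, \theta^{k-1} \).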
All right, so how do you apply the lemma? You do it by rescaling. You introduce a function v by changing the variable from x to θ^k x, and then, of course, you change everything else in the formula accordingly. The constants stay the same: E_k and h_k were the constants for step k. You have to change the variable in this formula, so x becomes θ^k x, and the corrector changes in the same way.

The observation is that when you scale the solution, the parameter of the equation scales accordingly. In other words, when you scale by θ^k, the ε becomes ε/θ^k. That's just the rescaling property of the operator — I mentioned it in my lecture notes — and it's a very important property which allows us to do rescaling arguments all the time. So v is going to be a solution, but for a different parameter: the new parameter is ε/θ^k. Because of the assumption ε ≤ ε₀θ^k, it satisfies the condition in the first lemma. In other words, we can now apply the first lemma to the function v with parameter ε/θ^k, because it satisfies all the conditions there.

If you write this out, you end up with this inequality — this is just by the first lemma, everything in terms of v, but with the parameter changed to ε/θ^k, and the same θ. Then the induction assumption allows us to bound the average of v² by the average of u_ε², and you pick up this factor here: the first lemma gives you a factor θ^{2+2σ}, and the induction gives you the power (2+2σ)k; put together, the power becomes (2+2σ)(k+1). Now you change everything back from v to u_ε and adjust your constants. You'll see that h_{k+1} is h_k plus a new, extra term — but this extra term is under control; again, that inequality is by the first lemma. Actually, I have it here. So that is the proof of the second lemma.

[Question.] Yeah — so here, for the average of v², you look at what v is and integrate over B₁. Let's see — this step is simply the induction argument: this one is not by the first lemma, but by the induction assumption that the lemma is true for the integer k. It's not by the first lemma.

All right, so that's the proof of the second lemma. Now let's see how to use this second lemma to prove the interior estimate. You take a solution in the unit ball, and you take R — as I said, you can assume R is small, but greater than ε. And you choose the right k, so that R sits between θ^{k+1} and θ^k. Then you look at the average of |∇u_ε|² on B_R. First of all, you use the Caccioppoli inequality, because Caccioppoli holds for bounded measurable coefficients — so you don't work with the gradient, you work with the function. In the Caccioppoli inequality you want to subtract a constant: subtracting a constant still gives a solution, and the gradient doesn't change. And then the second step is to subtract and add a term — it's nothing more than the triangle inequality. So now you have two terms, and the first term will be estimated by the second lemma; see the display below.
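Schematically, the two steps just described look like this — a sketch, with the radius bookkeeping (the precise balls on each side of the Caccioppoli inequality) kept loose, and with E_k, h_k the constants produced by the second lemma at the scale θ^k ≈ R:

\[
\fint_{B_R} |\nabla u_\varepsilon|^2
\;\le\; \frac{C}{R^2} \fint_{B_{2R}} |u_\varepsilon - E_k|^2
\;\le\; \frac{C}{R^2} \fint_{B_{2R}} \Big| u_\varepsilon - E_k - h_k \cdot \big( x + \varepsilon\chi(x/\varepsilon) \big) \Big|^2
\;+\; \frac{C\,|h_k|^2}{R^2} \fint_{B_{2R}} \big| x + \varepsilon\chi(x/\varepsilon) \big|^2 .
\]

The first term on the right is handled by the second lemma; the second uses |x| ≤ 2R and ε ≤ R, so the rescaled corrector term is of size at most Cε ≤ CR (taking the corrector bounded, as it is for Hölder continuous coefficients). That is exactly how the next part of the argument proceeds.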
That generates this power here — remember, R is roughly θ^k. And for the second term, you have control on h_k. For now, you just pull out |h_k|²; the remaining integral is bounded by a constant, and let's see why that is the case. First of all, you have a linear term: the ball is centered at the origin, so |x| is less than R, or θ^k; and θ^k is comparable to R, so that cancels out the R². And then we know that ε is less than R, so ε/R is less than 1, and the corrector term is fine too. So this integral is bounded by a constant. And h_k is bounded because of that geometric-series estimate in Lemma 2, so both of these terms are controlled. This one is on B_R, and this one is on B₁. That's the end of the proof of the theorem. Any questions?

Well, you can translate the ball and dilate the ball; it doesn't have to be centered at the origin with radius 1. So now, suppose you have a solution in an arbitrary ball, assuming the coefficient matrix is elliptic, periodic, and Hölder continuous. Then the L∞ norm of the gradient of the solution, on the ball of half the radius, is bounded uniformly by the rescaled L² average: you have to divide by R, because you have a gradient here, and then the F term gets multiplied by R, because of the scaling of the L^p norm of F. So that is the interior estimate.

Before I move on to the boundary estimate, I want to come back to this Lemma 2. You must have seen something similar happen in other courses. You can recast this estimate in the following way. You take an infimum over E, a vector in ℝ^d, and H in ℝ, and you look at the rescaled average on B_r of |u_ε − H − E·(x + εχ(x/ε))|² — E enters through a dot product, squared — and this is bounded by the average of u_ε² on B₁. That is actually what we proved in the second lemma. I said in the beginning there's no C^{1,α} estimate; I take that back. There is a C^{1,α} estimate, but it's just taken in a different form: in other words, you subtract a constant and a linear term, but you have to add a corrector in its place. So you can call this a large-scale C^{1,α} estimate. The r here has to be greater than ε and less than or equal to 1, and the solution is in the unit ball. So this is how the C^{1,α} estimate looks in homogenization.

You can actually use this estimate to prove a Liouville theorem. I'll mention that — it's not in the notes, but I want to put it here as an application. Here is the Liouville property. Suppose that A is elliptic and periodic; I do not need smoothness — this works for bounded measurable coefficients. And suppose you have an entire solution, locally in H¹: Lu = 0 in the whole space. (It doesn't really matter whether you put an ε or not, because you can just rescale to ε = 1.) And you assume that the solution does not grow faster than a power of the radius: there is a constant, which may depend on the solution, such that the L² average of u over the ball of radius R is less than or equal to C_u R^{1+σ}, for all R ≥ 1. So in particular you can allow linear growth, but not quadratic. Then there exist constants C₀, C₁, up to C_d, such that the solution u must be a constant plus the sum over j of C_j(x_j + χ_j(x)). And we know this is a solution, of course: a constant is a solution, and for each j, x_j + χ_j(x) is a solution.
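Written out — a reconstruction of the two statements just described, following the transcript, with \fint the average — the large-scale C^{1,α} estimate reads:

\[
\inf_{E \in \mathbb{R}^d,\; H \in \mathbb{R}} \left( \frac{1}{r^2} \fint_{B_r} \big| u_\varepsilon - H - E \cdot \big( x + \varepsilon\chi(x/\varepsilon) \big) \big|^2 \right)^{1/2}
\;\le\; C \, r^{\sigma} \left( \fint_{B_1} |u_\varepsilon|^2 \right)^{1/2},
\qquad \varepsilon \le r \le 1 ,
\]

and the Liouville property says: if \( \mathcal{L} u = 0 \) in \( \mathbb{R}^d \) with \( \big( \fint_{B_R} |u|^2 \big)^{1/2} \le C_u R^{1+\sigma} \) for all \( R \ge 1 \) and some \( \sigma \in (0,1) \), then

\[
u(x) \;=\; C_0 + \sum_{j=1}^{d} C_j \big( x_j + \chi_j(x) \big).
\]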
You multiply it by a constant — it's still a solution. And this theorem tells you that if you have an entire solution which grows more slowly than quadratically, then it must be a solution of this form. So that's the Liouville theorem. This is also true for systems, as I said.

For the proof, you simply rescale that estimate. If I rescale it, taking ε = 1 and r between 1 and R, the estimate is equivalent to: the infimum over H and E of ((1/r²) times the average over B_r of |u − H − E·(x + χ(x))|²)^{1/2} is less than or equal to C(r/R)^α times ((1/R²) times the average over B_R of u²)^{1/2}. Here α can be any number between 0 and 1. So if you have an entire solution which satisfies this growth condition, you fix r — and choose α with σ < α < 1 — and you let the capital R go to infinity; then the right-hand side goes to 0. And this tells you that on each ball of radius r, u is exactly this function H + E·(x + χ(x)) for some H and some E. And then you can certainly expand the ball and show that u is of this form in the whole space. So that's what the second lemma gives you. I think this is actually an advantage of the compactness method compared to the method we're going to talk about tomorrow, via convergence rates: although the applications might be limited, it does give you extra information compared to the convergence-rate method. We'll talk about that tomorrow. Any questions?

All right, so let's move on to the boundary estimate. We're going to look at a Dirichlet problem. Oh — my time is almost over, so I'll be brief. Now you have a solution in a bounded domain; I'm assuming ∂Ω is C^{1,α}, and we want to show this estimate. So the idea is: how do we carry out this compactness argument in the boundary setting? The key observation is that the boundary data introduces a boundary layer, and something has to be done in order to deal with that boundary layer. The reason is that there is no problem with the first step; the problem is with the second step, the iteration: as you keep iterating, the boundary error adds up if you use the interior corrector — by interior corrector I mean the corrector χ.

So the idea is to introduce something called a boundary corrector; in the Dirichlet case, it's called a Dirichlet corrector. What's the definition? The corrector will now depend on the domain Ω: you solve a boundary value problem in Ω where the boundary data is a linear function. Certainly this solution exists, because a linear function is in any class you can think of. And the idea is that if you look at this function Φ_ε minus the linear function, at least at the level of the operator it behaves like a rescaled corrector; however, the boundary corrector minus the linear function vanishes on the boundary, as opposed to εχ(x/ε), which does not vanish there — it's small, but it can add up. So the Dirichlet corrector minus the linear function just vanishes on the boundary; see the sketch below.

This program was carried out by Avellaneda and Lin, in the paper from '87 I mentioned. They show that, in order to carry out this compactness argument, you first of all have to prove that the Dirichlet corrector is uniformly Lipschitz in ε.
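As I read the transcript, the Dirichlet corrector Φ_ε = (Φ_{ε,1}, …, Φ_{ε,d}) is defined componentwise by solving the Dirichlet problem with linear boundary data — a hedged sketch of the definition and of the key uniform bound, in Avellaneda and Lin's setting of Hölder continuous coefficients and C^{1,α} domains:

\[
\mathcal{L}_\varepsilon \Phi_{\varepsilon,j} \;=\; -\,\mathrm{div}\big( A(x/\varepsilon) \nabla \Phi_{\varepsilon,j} \big) \;=\; 0 \ \text{ in } \Omega,
\qquad
\Phi_{\varepsilon,j} \;=\; x_j \ \text{ on } \partial\Omega ,
\]

so that \( \Phi_{\varepsilon,j}(x) - x_j \) plays the role of \( \varepsilon \chi_j(x/\varepsilon) \) but vanishes on \( \partial\Omega \). The key step is the uniform Lipschitz estimate

\[
\| \nabla \Phi_{\varepsilon,j} \|_{L^\infty(\Omega)} \;\le\; C, \qquad \text{uniformly in } \varepsilon .
\]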
So this is actually the key step. I'll just go over it very quickly — how do you do it? First of all, you use compactness to prove the boundary Hölder estimate. For the boundary Hölder estimate you do not need a corrector; you just carry the argument over, subtracting only a constant. And then, once you have a Hölder estimate, you can use it to prove estimates for Green's functions. That shows that the Green's function decays at the boundary to any order α, α < 1 — the Green's function is Hölder continuous up to the boundary. Then you use the estimate on the Green's function and a representation formula to show that, for a Dirichlet problem with boundary data g, you can prove a preliminary estimate: you average the solution on a ball of radius r greater than ε, and on the right-hand side you pick up some extra terms — the L∞ norm of the gradient of g, plus g with a factor of ε to the power negative 1. Here g is given on the boundary, but we can certainly extend it to the interior. So this is one of the key steps, and once you have it, you use the compactness argument in this setting.

Here D₁ is just the ball intersected with the domain Ω, and Δ₁ is the surface ball. You localize to a neighborhood of a fixed point on the boundary: say the origin is on the boundary, D_r is B_r ∩ Ω, and Δ_r is B_r ∩ ∂Ω. So this is how you set it up.

A similar program is carried out for the Neumann problem. The key step there is to introduce the Neumann corrector and prove that the Neumann corrector is Lipschitz up to the boundary. OK — I will just stop here. Thank you.