H² norm of u_0 times a factor like this. There are two terms. The first term has a factor of ε, and there the gradient of ψ is integrated over the whole of Ω. The second term has a factor of √ε, which for small ε is the worse of the two, but there you integrate only in a neighborhood of the boundary. So that is the lemma. We'll see that the second theorem follows easily from this lemma, once you have the lemma down there. All right, so how do you do it? To see the second theorem, you simply take your test function to be w_ε, which is a function in H¹_0(Ω). So you take ψ to be w_ε, and use the ellipticity to bound the left side from below by the square of the L² norm of the gradient of w_ε. And then you simply bound both terms on the right by √ε times the L² norm of the gradient of ψ over all of Ω, and that gives you the estimate. You can do better than this, and that is done in the lecture notes; this is a very rough use of the lemma, and there are other things you can do which produce a better estimate, but I just want to get the idea across in a simple case.

So now let's see how we prove this lemma. There are some calculations I would have to show you on the board; actually, I don't think I have time, but it's pretty straightforward. You take w_ε and you take the gradient. There are three terms in w_ε, and you take the gradient of each term: the first term, the second term, and the third term is a product, so the product rule generates two terms. In the last of those, the derivative falls on the second factor, so the factor ε is still there. But you lose the ε in the other one, because when you take the derivative of χ(x/ε) you get an ε in the denominator which cancels it out. So the last term is a good term: the ε is already in place. For the rest, we have to deal with the remaining terms. So we simply put the term Â∇u_0 up there, and then we replace ∇u_0 by the cutoff function multiplied by the smoothing of ∇u_0; so you generate a difference, you generate a difference here. And when you combine all the remaining terms, you end up with a B, B(x/ε). B is the matrix-valued function we introduced earlier, which has these two properties: its mean is 0, and the divergence of each column is 0. So now simply multiply both sides by the gradient of a test function and integrate over Ω, and use the fact that u_ε and u_0 are solutions of the two Dirichlet problems; these two integrals are the same, because the right-hand side f is the same for both Dirichlet problems, so those terms cancel. You end up with the remaining terms here, written out in the sketch below.
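For reference, here is one way to write out the identity being described. This is only a sketch: the grouping into four terms, and the notation w_ε = u_ε − u_0 − ε χ(x/ε) η_ε S_ε(∇u_0), Â for the effective matrix, S_ε for the smoothing operator, η_ε for the interior cutoff, and Ω_{4ε} for the boundary layer, are reconstructed from the discussion rather than copied from the slides (matrix notation is used loosely). For every test function ψ ∈ H¹_0(Ω), after the terms ∫ A(x/ε)∇u_ε·∇ψ and ∫ Â∇u_0·∇ψ cancel (both equal ∫ fψ), one is left with

\[
\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right)\nabla w_\varepsilon\cdot\nabla\psi
= -\int_\Omega \Big(A\!\left(\tfrac{x}{\varepsilon}\right)-\widehat{A}\Big)(1-\eta_\varepsilon)\nabla u_0\cdot\nabla\psi
  \;-\;\int_\Omega \Big(A\!\left(\tfrac{x}{\varepsilon}\right)-\widehat{A}\Big)\,\eta_\varepsilon\big(\nabla u_0 - S_\varepsilon(\nabla u_0)\big)\cdot\nabla\psi
\]
\[
\qquad\qquad -\int_\Omega B\!\left(\tfrac{x}{\varepsilon}\right)\,\eta_\varepsilon S_\varepsilon(\nabla u_0)\cdot\nabla\psi
  \;-\;\varepsilon\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right)\chi\!\left(\tfrac{x}{\varepsilon}\right)\nabla\big(\eta_\varepsilon S_\varepsilon(\nabla u_0)\big)\cdot\nabla\psi ,
\]

where B(y) = A(y) + A(y)∇χ(y) − Â has mean zero and divergence-free columns.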
Now I claim that the first two of these terms carry a factor of ε, or at least √ε. How do we know that? Look at the first term. Remember, η_ε is supported in Ω and vanishes near the boundary; in the interior, away from the boundary by, say, 4ε, we have 1 − η_ε = 0. So that integral only takes place in a boundary layer of Ω. And in a boundary layer, if you use a boundary-layer estimate, you can raise the derivative, and that is where the small factor comes from; the price you pay is that you have to go to the second derivative of u_0, and that's fine. For the next term, here you have the smoothing and there you have the function itself, and by the second lemma the difference between a function and its smoothing can be estimated by ε; again the price you pay is that you go to a derivative of ∇u_0, that is, a second derivative, which we can afford. The last term already has an ε in place. So the problem is how to deal with the third term, where you do not see any ε. How do we know an ε, or a √ε, is going to come out? It comes out of the properties of the flux corrector; this is the place where we use the skew-symmetry of the flux corrector. The calculation goes like this. You have b_ij(x/ε) S_ε(∂u_0/∂x_j) ∂ψ/∂x_i times η_ε. We write this as ε times the derivative ∂/∂x_k of the flux corrector κ_kij(x/ε), times S_ε(∂u_0/∂x_j) ∂ψ/∂x_i times the cutoff; that is one of the properties of the flux corrector, summed over k. When you take the derivative of κ_kij(x/ε), there is an ε in the denominator, so we need the ε out front to cancel it. The next thing you do is use the product rule: you want to throw the rest into that derivative, ε times ∂/∂x_k of κ_kij(x/ε) ∂ψ/∂x_i η_ε S_ε(∂u_0/∂x_j). Of course this is not quite right, because the product rule generates another term, of the form ε κ_kij(x/ε) ∂²ψ/∂x_k∂x_i S_ε(∂u_0/∂x_j) η_ε. And because of the skew-symmetry, if I interchange k and i it is the same thing, except that κ picks up a minus sign, so that last term is actually 0. That is the calculation shown on the screen; it is also written out in the sketch below. OK, so you see how the factor ε comes up.

Yes, a question? [The question is about the last term: the gradient can also fall on η_ε; wouldn't that give a factor 1/ε?] Yes, very good point. If you take the derivative of η_ε, you get an ε in the denominator. But the gradient of η_ε is supported near the boundary, so the boundary-layer estimate gives you back a √ε. Very good point, thank you.

So that is all you need to finish the proof. You see how the flux corrector is used in the process; we use the definition of the effective matrix and of the corrector. Of course, you know you have to use it somewhere; otherwise you could replace the effective matrix by some other matrix, and the result could not be true. All right, so that is the proof of Theorem 2, the convergence rate in H¹.
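For reference, the skew-symmetry step just described can be written out as follows. This is a sketch: the standard index convention b_ij = ∂κ_kij/∂y_k with κ_kij = −κ_ikj is assumed, repeated indices are summed, and ψ is taken smooth so that the second derivatives make sense. Since ε ∂/∂x_k[κ_kij(x/ε)] = b_ij(x/ε),

\[
\int_\Omega b_{ij}\!\left(\tfrac{x}{\varepsilon}\right)\eta_\varepsilon\, S_\varepsilon\!\Big(\tfrac{\partial u_0}{\partial x_j}\Big)\tfrac{\partial \psi}{\partial x_i}\,dx
= \varepsilon\int_\Omega \tfrac{\partial}{\partial x_k}\Big[\kappa_{kij}\!\left(\tfrac{x}{\varepsilon}\right)\Big]\eta_\varepsilon\, S_\varepsilon\!\Big(\tfrac{\partial u_0}{\partial x_j}\Big)\tfrac{\partial \psi}{\partial x_i}\,dx
\]
\[
= -\,\varepsilon\int_\Omega \kappa_{kij}\!\left(\tfrac{x}{\varepsilon}\right)\tfrac{\partial}{\partial x_k}\Big[\eta_\varepsilon\, S_\varepsilon\!\Big(\tfrac{\partial u_0}{\partial x_j}\Big)\Big]\tfrac{\partial \psi}{\partial x_i}\,dx
\;-\;\varepsilon\int_\Omega \kappa_{kij}\!\left(\tfrac{x}{\varepsilon}\right)\eta_\varepsilon\, S_\varepsilon\!\Big(\tfrac{\partial u_0}{\partial x_j}\Big)\tfrac{\partial^2 \psi}{\partial x_k\,\partial x_i}\,dx ,
\]

with no boundary term in the integration by parts because η_ε vanishes near ∂Ω. The last integral is zero, since κ_kij is skew-symmetric in (k, i) while ∂²ψ/∂x_k∂x_i is symmetric. The remaining term carries the explicit factor ε; the derivative ∂η_ε/∂x_k is of size 1/ε but is supported in the boundary layer, which is exactly the point raised in the question.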
Now let's come back to the first theorem, where we want to show convergence in L² with a full power of ε. The way to do this is through a duality argument. We solve a Dirichlet problem with zero boundary data, where on the right-hand side we choose an arbitrary function g in C_0^∞(Ω). Then we introduce the analog of w_ε for this problem: v_ε minus v_0, the solution of the homogenized problem, minus ε times the corrector term, with the same kind of cutoff and smoothing. But we do this for the operator L_ε^*, the adjoint of L_ε; so χ^* is the corrector for A^*, the adjoint of A. Now let's test w_ε itself, without the derivative, against this function g, which is arbitrary. We replace g by the divergence form of the adjoint equation, so by definition you have this pairing. You write out v_ε, three terms come up, and we have to estimate each of them, 1, 2, and 3. Again, the goal is to generate a full power of ε; we do not want a square root, we want to do better in L². We already have a square root in H¹.

The first term is of this form: an integral of A times the gradient of w_ε times the gradient of w_ε^*. Both of these factors give a √ε, so putting them together you have a full power of ε, and the first term is fine. Here I do need some smoothness of Ω, because in this last step I need the H² estimate for L_0^*, and for that, say, C^{1,1} will suffice.

For the second term, you have the integral of A times the gradient of w_ε times the gradient of v_0. Here we simply use the main lemma and treat v_0 as the test function; v_0 is in H¹_0, so we can take it as a test function. You get two terms. The first gives you the power ε. The second has a √ε but is integrated over the boundary layer; if you raise the derivative to 2, the boundary-layer estimate gives you another √ε, so you get back to power 1 again.

Finally, for the third term we again use the main lemma, treating the whole remaining expression as the test function. You have to estimate the L² norm of its gradient over Ω and over the boundary layer, and it works out just fine. Putting this together, we have the estimate for any function g in C_0^∞(Ω), and by duality the L² norm of w_ε is bounded by ε times the H² norm of u_0. That takes care of w_ε; in u_ε − u_0 there is a third term, the corrector term, but that term is already of order ε, so you just throw it onto the right-hand side. The pairing is written out in the sketch below. So that is the proof of the first theorem.

In the lecture notes there are more elaborate estimates, involving the non-tangential maximal function on Lipschitz domains under the symmetry condition. I don't want to show you those, it takes too much time, and you can read them; but you have already seen the main idea in this simple setting.

Actually, I have one more theorem here. Suppose you do have the symmetry condition; again I do not need smoothness, bounded measurable will be fine. Then you can actually do better: you get an L^p–L^q estimate, where p and q are related by this equation. This is interesting because the relation is exactly the right scaling: you can rescale both sides, and the constant remains invariant. And also for the H¹ result you can get a more precise right-hand side: instead of the H² norm of u_0, you can get the H¹ norm of the boundary data plus some L^q norm of the right-hand side.

So I want to spend the last ten minutes, maybe five, talking about one application of this second estimate, in the context of boundary value problems and a Rellich estimate, which some of you know really well.
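Here is a sketch of the duality pairing used above; the notation (v_ε, v_0, w_ε^*, and the adjoint corrector χ^*) is reconstructed from the discussion rather than copied from the slides. For g ∈ C_0^∞(Ω), let v_ε solve L_ε^* v_ε = g in Ω with v_ε = 0 on ∂Ω, let v_0 solve the homogenized adjoint problem with the same data, and set w_ε^* = v_ε − v_0 − ε χ^*(x/ε) η_ε S_ε(∇v_0). Since w_ε ∈ H¹_0(Ω),

\[
\int_\Omega w_\varepsilon\, g
= \int_\Omega A^*\!\left(\tfrac{x}{\varepsilon}\right)\nabla v_\varepsilon\cdot\nabla w_\varepsilon
= \int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right)\nabla w_\varepsilon\cdot\nabla w_\varepsilon^*
+ \int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right)\nabla w_\varepsilon\cdot\nabla v_0
+ \varepsilon\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right)\nabla w_\varepsilon\cdot\nabla\big(\chi^*\!\left(\tfrac{x}{\varepsilon}\right)\eta_\varepsilon S_\varepsilon(\nabla v_0)\big).
\]

The first term is O(ε) because each factor is O(√ε) in L²; the other two are handled by the main lemma with the test function taken to be v_0 and ε χ^*(x/ε) η_ε S_ε(∇v_0), respectively. Together this gives |∫ w_ε g| ≤ C ε ‖u_0‖_{H²(Ω)} ‖g‖_{L²(Ω)}, hence ‖w_ε‖_{L²(Ω)} ≤ C ε ‖u_0‖_{H²(Ω)} by duality.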
Before I get to that, let me briefly mention that the same argument carries over to the Neumann problem, and that is one of the good things about using the cutoff: the boundary data does not really enter into play, because we cut off near the boundary. So you can run the same proof for Dirichlet and for Neumann boundary data, and you get the same estimate.

Going back to this H¹ estimate, I want to mention how it relates to a kind of Rellich estimate; I call it a large-scale Rellich estimate in homogenization. Take the first of these estimates. What we are claiming is this: suppose you have a solution of the Dirichlet problem with right-hand side 0 and boundary data f. Then the first line says that if you integrate the square of the gradient of u_ε over a boundary layer of thickness of order ε, this is bounded by a constant times ε times the square of the L² norm of the tangential derivative of the boundary data f. That is the first inequality. The second inequality is the corresponding one for the Neumann problem. Why do I call this a large-scale Rellich estimate? By the way, this is true for Lipschitz domains, with A elliptic, periodic, and symmetric; we do not need any smoothness condition on the coefficients, just bounded measurable will give you this.

So why is this related to a Rellich estimate? Let me bring up the small-scale Rellich estimate. Say you have a domain Ω and you pick a point P on the boundary; this is a local Rellich estimate. The claim is that if you integrate |∇u_ε|² dσ over the piece of boundary of radius ε around P, this is bounded by a constant times the surface integral of the square of the tangential gradient of f over a larger surface ball, plus a constant over ε times the volume integral of |∇u_ε|² over the ball centered at P of radius 2ε intersected with Ω. Here I am assuming L_ε u_ε = 0. This is true if I assume, in addition, that A is Hölder continuous and elliptic, and I do not need periodicity. It is a small-scale estimate because you can rescale: it suffices to prove it for ε equal to 1 and then scale back down to scale ε, and it is true. Then, if you cover the boundary by balls of radius ε with finite overlap, you can deduce that the integral of |∇u_ε|² dσ over the boundary is bounded by a constant times the surface integral of the square of the tangential gradient of f, plus a constant over ε times the volume integral of |∇u_ε|² over the boundary layer where the distance of x to the boundary is at most a constant times ε. The last one is a volume integral; the others are surface integrals.

So this is, again, a small-scale estimate; we do not need periodicity. And as you can see, if I combine the two estimates, small-scale and large-scale, the large-scale estimate allows you to control that volume term by the first term. So we have completely separated the two scales. In the small-scale case we only need the operator to be elliptic with Hölder-continuous coefficients. In the large-scale case we only need the operator to be elliptic and periodic. But if you have all three, you can combine them to get a full-scale Rellich estimate. This was actually proved by Kenig and myself back in 2011 using integration by parts; here we are presenting a different proof. So you get the full-scale estimate: on the boundary, the full gradient is controlled by the tangential derivative, and also by the conormal derivative, with a constant independent of ε. The coefficients are assumed to be elliptic, periodic, symmetric, and Hölder continuous, and Ω is a Lipschitz domain. The combination is sketched below.
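A sketch of how the two scales combine, with the constants, the radii, and the thickness of the boundary layer written only schematically (they are assumptions of this sketch, not values taken from the slides). Summing the local small-scale estimate over a finite-overlap cover of ∂Ω by balls of radius ε, and then applying the large-scale estimate to the volume term, gives, for L_ε u_ε = 0 in Ω with u_ε = f on ∂Ω,

\[
\int_{\partial\Omega}|\nabla u_\varepsilon|^2\,d\sigma
\;\le\; C\int_{\partial\Omega}|\nabla_{\tan} f|^2\,d\sigma
\;+\;\frac{C}{\varepsilon}\int_{\{x\in\Omega:\ \operatorname{dist}(x,\partial\Omega)<C\varepsilon\}}|\nabla u_\varepsilon|^2\,dx
\;\le\; C\int_{\partial\Omega}|\nabla_{\tan} f|^2\,d\sigma ,
\]

with C independent of ε, since the large-scale estimate bounds the boundary-layer integral by Cε times the boundary term.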
So with this you can solve the L² boundary value problem using the method of layer potentials, and this is also done in the same paper: once you have the Rellich estimates, you can use layer potentials to prove the estimate for the non-tangential maximal function. Going back, how is the large-scale estimate obtained? Well, it is a simple consequence of the estimate we have in H¹, this line here: √ε times the H¹ norm of the data on the boundary. If I just look at what happens near the boundary, in the boundary layer the third term drops out because of the cutoff, so you simply have the gradient of u_ε minus the gradient of u_0; and so the gradient of u_ε in the boundary layer is controlled by this term plus the gradient of u_0 in the boundary layer. Then, because u_0 solves a constant-coefficient equation, we do have the non-tangential maximal function estimate, so the volume integral of |∇u_0|² in the boundary layer can be controlled by the boundary data. Because the boundary layer has thickness ε, this produces the √ε, which matches the other term. And that is all I want to say. Thank you. Any questions?

[A question from the audience.] Yes, one of my former students, Qiang Xu, has written several papers on the subject. His most recent paper actually deals with layer potentials for operators with lower-order terms. You can probably find it on the arXiv; his last name is Xu, and the first name is spelled Q-I-A-N-G. So he can deal with operators with first-order and zero-order terms. All right, thank you.