Once you have this part down for the PDE, we want to move on to the quantitative theory. There are two basic questions in the quantitative theory of homogenization. One deals with convergence rates; that is what we are going to talk about today. I will present two very simple cases which lead to convergence rates in H^1 and also in L^2. In the third and fourth lectures we will discuss the problem of uniform regularity in homogenization; that will be done on Thursday and Friday here. So first of all, let me review what we proved last time. Again, this is the setup: we are dealing with a family of second-order elliptic operators in divergence form, with a small parameter ε appearing in the argument of the coefficients, A(x/ε). Epsilon here is presumed to be very small, say somewhere between zero and one. And there are two basic assumptions I want to make. First of all, we are going to deal with real, bounded measurable, and uniformly elliptic coefficients: A is real, bounded measurable, and uniformly elliptic. For the most part, we do not need any regularity today, so bounded measurable will suffice. We are also assuming that A is 1-periodic, that is, the matrix is periodic with respect to the integer lattice, okay? So here are the precise assumptions. Ellipticity means that the matrix is positive definite, bounded below by μ|ξ|^2 and bounded above by μ^{-1}: μ|ξ|^2 ≤ a_{ij}(y)ξ_i ξ_j ≤ μ^{-1}|ξ|^2 for some μ > 0. Periodicity with respect to the integer lattice means that if you shift by a point z with integer coordinates, you have the same matrix: A(y + z) = A(y) for all z in Z^d. And I probably forgot to mention that all the results we will talk about in this lecture hold for elliptic systems. So although you do not see superscripts, that is just for simplicity; you can see that none of the techniques has anything to do with the scalar case, okay? The same proofs carry over directly to elliptic systems, okay? So here is what we proved in the first lecture, looking at the Dirichlet problem.
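In symbols, the setup is the following (a sketch in standard homogenization notation, restating what is described in words):

```latex
\mathcal{L}_\varepsilon u_\varepsilon
  = -\,\mathrm{div}\big(A(x/\varepsilon)\nabla u_\varepsilon\big) = F
  \quad \text{in } \Omega,
\qquad
u_\varepsilon = g \quad \text{on } \partial\Omega,
```

with $F \in H^{-1}(\Omega)$, $g \in H^{1/2}(\partial\Omega)$, and $0 < \varepsilon < 1$.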
So you have, again: A is elliptic and periodic, Ω is a bounded Lipschitz domain. Let's say you have a weak solution of the Dirichlet problem with right-hand side F in H^{-1} and boundary data in H^{1/2}. Then by Lax–Milgram, you have a unique weak solution in H^1. The theorem states that as ε goes to zero, the solution converges weakly in H^1, and therefore strongly in L^2. Also the flux of the solution, which is just the matrix A(x/ε) applied to the gradient, converges to Â times the gradient of the limit u_0, where Â is the effective matrix we introduced yesterday; this is weak convergence in L^2. And furthermore, the limit function is a weak solution of the Dirichlet problem with the same data, but for the operator L_0, the effective operator with coefficient matrix Â. We proved this in lecture one using the div-curl lemma. I hope I got the main idea across there; otherwise you can read the proof in detail in the lecture notes. So today we are going to look at the problem of convergence rates. We know the solution converges strongly in L^2, so it makes sense to ask: what convergence rate do you have for the difference u_ε − u_0, measured in the L^2 norm? Okay. Also, in general, unless the corrector is zero, the solution does not converge strongly in H^1. However, suppose we are allowed to subtract another term, which we are going to call the first-order corrector; you will see the exact form of this function later. Then we can talk about strong convergence in H^1. So what can you say about convergence in H^1 with a first-order corrector subtracted? These are the questions we are going to address today, in a very simple setting; there are more elaborate arguments in the lecture notes, but to get the idea across, I am only going to look at a very simple case. Okay. So here is the first of the theorems. Again, the matrix is elliptic and periodic.
I need some smoothness assumption on the domain Ω, let's say C^{1,1}; this condition can probably be weakened. Then the L^2 norm of u_ε − u_0 is bounded by a constant, depending only on the dimension d, the ellipticity constant μ, and the domain Ω, times ε times the H^2 norm of u_0. So this will be one of the theorems we prove today, and it gives the sharp convergence rate, with the power one of ε, in the L^2 space. We will come back to this theorem later. Next I want to present the theorem in H^1; actually we are going to prove the convergence rate in H^1 first and then come back to L^2. Okay, so we ask: if we are going to subtract something, what should we subtract? We come back to the two-scale expansion, briefly mentioned at the very beginning of the first lecture: if you do a formal two-scale asymptotic expansion, trying to figure out the right formulas for the effective coefficients and for the corrector, what will it be? It turns out we end up with u_0 as the first term, which depends only on x, not on ε. The second term is of the form ε χ_k(x/ε) ∂u_0/∂x_k, and there are higher-order terms. This χ_k, the so-called corrector, is a solution of the following equation, rescaled back to the unit scale:

  −(∂/∂y_i)(a_{ij}(y) ∂χ_k/∂y_j) = (∂/∂y_i) a_{ik}(y).

Again the summation convention is used, all repeated indices are summed, and k is fixed, between 1 and the dimension d. So this is the equation, and of course we also want χ_k to be 1-periodic, for otherwise you could simply take a linear function to get a solution, but that is not the solution we want here. So that is the formula for the correctors.
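As a quick sanity check on the cell problem, here is a minimal numerical sketch in one dimension, with a hypothetical coefficient a(y) = 2 + sin(2πy) chosen for illustration: in 1-D the cell problem integrates in closed form, the effective coefficient is the harmonic mean of a, and the corrector derivative is χ'(y) = â/a(y) − 1 (these 1-D formulas are classical and are not spelled out in the lecture).

```python
import numpy as np

# Hypothetical smooth 1-D periodic coefficient (elliptic: 1 <= a <= 3).
a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)

# Uniform grid over one period; averaging a smooth periodic function on
# such a grid is spectrally accurate.
y = np.linspace(0.0, 1.0, 200_000, endpoint=False)

# In 1-D, a(y) * (chi'(y) + 1) must be constant; requiring chi to be
# 1-periodic (mean-zero derivative) forces that constant to be the
# harmonic mean of a -- this is the effective coefficient a_hat.
a_hat = 1.0 / np.mean(1.0 / a(y))

# Corrector derivative, from the closed-form solution of the cell problem.
chi_prime = a_hat / a(y) - 1.0

print(a_hat)               # harmonic mean: sqrt(3) ~ 1.732 for this a
print(np.mean(chi_prime))  # ~0, so chi really is 1-periodic
print(np.mean(a(y)))       # arithmetic mean 2.0 > a_hat
```

Note that ⟨χ'⟩ = 0 is exactly the solvability condition that lets χ be periodic; the same mean-zero condition reappears later when the flux corrector is constructed.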
And I mentioned that if you have a matrix whose columns are divergence-free, then the right-hand side is zero, and in that case your corrector is identically zero. But in general you do have a corrector, a 1-periodic function, and that is the function appearing here. So this somehow suggests that if you want to subtract a corrector, it should be of this form. This is actually what we are going to do: we introduce u_ε minus u_0 minus ε times the corrector term, and then there is some other stuff. I want to explain why we do this. One of the reasons we need to smooth out this term is that the corrector in general may not be bounded. We know χ_k is an H^1 function on the torus, and in the single-equation case, the scalar case, it follows from the De Giorgi theory that the corrector χ_k is actually Hölder continuous. But if you deal with elliptic systems in higher dimensions, the weak solution may not be Hölder continuous; it may not even be bounded. So one of the techniques to handle that problem is to smooth out the derivative ∂u_0/∂x_k. I am also going to put in a cutoff function η_ε; it is a cutoff near the boundary: a function between zero and one which vanishes in the 3ε-neighborhood of the boundary, and at distance more than 4ε from the boundary the function is one. So it is one on most of Ω but dies down near the boundary. You will see why we need this. So that is the function w_ε; that will be the first-order corrector we subtract. With this, we have the following theorem. Again, we do not need any smoothness assumptions on the coefficients; bounded measurable will work. And A is elliptic and periodic, and Ω is a bounded Lipschitz domain.
And then this w_ε, if you take its norm in H^1, is actually in H^1_0, because u_ε and u_0 have the same boundary data (I am dealing with the Dirichlet problem here), and because of the cutoff the third term equals zero on the boundary — in fact it vanishes near the boundary. So this function w_ε is in H^1_0, and its H^1_0 norm is bounded by a constant times the square root of ε times the H^2 norm of u_0, for any ε between zero and one. The constant depends only on the dimension, the ellipticity, and the domain Ω. And furthermore, if the corrector happens to be bounded, as in the scalar case, you actually do not need the cutoff, and you also do not need the smoothing S_ε; you can just take the corrector exactly as it appears in the two-scale expansion. Now, you may ask: why do you get a square root of ε instead of ε? You can somehow see this from the left-hand side: on the boundary, u_ε and u_0 have the same boundary values, so their difference is zero there, but the third term equals ε χ(x/ε) times the derivative of u_0 on the boundary, and its H^1 norm is more or less controlled by the H^{1/2} norm on the boundary. If you take the H^{1/2} norm of this term on the boundary, you generate a square root of ε in the denominator; cancelling against the factor ε, you only come up with a square root of ε. So in other words, this estimate is more or less sharp. These are the two theorems we will try to prove today. Okay, so let me explain the smoothing operator S_ε. It is a very simple operator: you simply convolve; it is just convolution with an approximation of the identity.
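In symbols (a sketch, with ρ the kernel just mentioned):

```latex
S_\varepsilon f(x) = (f * \rho_\varepsilon)(x)
  = \int_{\mathbb{R}^d} f(x-y)\,\rho_\varepsilon(y)\,dy,
\qquad
\rho_\varepsilon(y) = \varepsilon^{-d}\,\rho(y/\varepsilon).
```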
Here ρ is supported in B(0, 1/2) and has integral one, so it is a very simple kernel. Okay, so what you should see, going back, is that homogenization deals with large-scale properties: whatever you do below a small scale, the homogenization simply cannot see. So if you introduce some smoothing at the ε scale, the homogenization process simply does not care. That is one of the reasons the smoothing operator works so well in this situation. So just remember that S_ε is a simple convolution with this kernel. All right, next there are some properties we are going to use in the proofs of these two theorems. One of them: you look at the smoothing operator, multiply by a function g(x/ε), and you want to estimate the result in the L^p norm. Of course, if g is bounded, you can simply take g out, replaced by the L^∞ norm of g. But as I said, the corrector may not be bounded, and that is exactly the role played by the smoothing operator. This estimate tells you that you do not need to worry: as long as the function g is locally in L^p, uniformly, you are okay — you have an L^p-to-L^p estimate. Okay, and again, the reason is that because this is an average at the ε scale, anything below ε the operator cannot see, okay? So you have this estimate, where O is an open set, and on the right-hand side you have to expand the set by ε; that is the definition of O_ε here, okay? The proof is quite simple. First of all, by Hölder's inequality, you can bound the p-th power of the smoothing operator by the integral of |f|^p against ρ_ε(x − y) dy, using the fact that the integral of ρ_ε is one. Okay, and then you simply use the Fubini theorem.
So you multiply both sides by |g(x/ε)|^p, integrate over the whole space, change the order of integration, and you get what you need. That's all — a very simple proof; it is given in the lecture notes, okay? All right, the other thing we need concerns the approximation: you are concerned with the difference between the smoothing S_ε f and the function f itself, measured in the L^p norm. This is also a very simple lemma, and you can prove it, I think, by the Minkowski inequality: you write the difference in terms of the gradient, use Minkowski again to replace that term in the integral, and that's it, okay? Because ρ is supported in a ball of radius one half, you have a y there, and after rescaling a factor of ε comes out. Actually, you can take the constant to be one. Okay, again the proof can be found in the lecture notes in detail. All right, so the next thing I want to talk about is the flux corrector — I will explain why it is called the flux corrector. By definition, it is built from a matrix B: B is the matrix A(y), plus A(y) multiplied by the gradient of the corrector, minus Â. The corrector is a vector-valued function; you take the gradient, it becomes a matrix, so the second term is a product of two matrices, and the third term is Â, the effective matrix. We simply call the sum of these three terms B. If I write out the components, you see that

  b_{ij}(y) = a_{ij}(y) + a_{ik}(y) ∂χ_j/∂y_k − â_{ij},

with k summed, okay? Now here are the two key properties of this matrix B; you will see where they come from. For the first one: this matrix, by the way, is 1-periodic, because A is 1-periodic, the corrector is 1-periodic, so its derivative is 1-periodic, and Â is a constant.
So the first property is ⟨B⟩ = 0 — why is this zero? If you just look at it, this is simply equivalent to

  â_{ij} = ∫ (a_{ij}(y) + a_{ik}(y) ∂χ_j/∂y_k) dy,

integrating in y over the unit cell. In other words, that is simply the definition of the effective coefficients, okay? So the first equation is simply the definition of â_{ij}; that's the first one. The second property: if you take the divergence in the first index, ∂b_{ij}/∂y_i = 0. This is equivalent to — of course the derivative of the last term is zero, since â is constant — 

  (∂/∂y_i)(a_{ij} + a_{ik} ∂χ_j/∂y_k) = 0,

and that is just the equation defining the corrector. So these two equations capture what we did yesterday: one gives the definition of the effective coefficients, the other gives the definition of the correctors χ, okay? This will play a central role in the following computation. Okay, so with these properties we can introduce the flux corrector, which we call φ_{kij}; there are three indices here, okay? So here is the lemma: there are 1-periodic functions φ_{kij} in H^1 such that if you take the divergence in the index k — again, the index k is summed — you get b_{ij}: ∂φ_{kij}/∂y_k = b_{ij}. That is one property. The other property, which will also be important for us, is that among the three indices, φ is skew-symmetric with respect to the first two: if you interchange the index k and the index i, you get a minus sign, φ_{kij} = −φ_{ikj}, okay? And this will be important for us, okay? And if you know the corrector is Hölder continuous — that is the case of a scalar equation, by the De Giorgi–Nash estimate — then one can prove that this flux corrector is actually bounded; but in general, it will just be a function in H^1, all right? Okay, so how do you construct such a function φ?
Okay, we do that by solving a Laplace equation on the torus. You take b_{ij} and look for a function f_{ij} such that Δf_{ij} = b_{ij}. We solve this equation for each fixed pair i, j between 1 and d, okay? Here we are looking for a solution f_{ij} in H^2. This equation is solvable on the torus if and only if the right-hand side has mean value zero, and we happen to have exactly that — it is the first property, ⟨b_{ij}⟩ = 0. So we need this property in order to find the function f_{ij}; otherwise you do not have a solution and you would have to subtract the average, okay? All right, then we define

  φ_{kij} = ∂f_{ij}/∂y_k − ∂f_{kj}/∂y_i.

So you see that I interchange the indices i and k. By definition, it has the skew-symmetry automatically: if I interchange k and i, I get a minus sign, okay? So the question now is: how do we know that taking the divergence in k gives b_{ij}, okay? Let's just do the calculation here, all right? I have φ_{kij} = ∂f_{ij}/∂y_k − ∂f_{kj}/∂y_i. Taking the divergence in k, with k summed, the first term is just the Laplacian, and in the second term I interchange the order of differentiation, taking the derivative in k first:

  ∂φ_{kij}/∂y_k = Δf_{ij} − (∂/∂y_i)(∂f_{kj}/∂y_k).

So by definition, the first term is just equal to b_{ij}, but we have this extra term, and I claim that ∂f_{kj}/∂y_k is actually constant, so when you take its derivative, you get zero. Subtract zero, and you get b_{ij}, okay? So how do we know it is constant? Well, we have to use the second property: you start with Δf_{ij} = b_{ij} and take the divergence in i, so that Δ(∂f_{ij}/∂y_i) = ∂b_{ij}/∂y_i, which is zero, because of the second equation.
So this function, summed in i — the same function that appears in the extra term, up to renaming the summation index — is a harmonic function, but it is also a periodic function, and the only functions that are both harmonic and periodic are constants, by the Liouville theorem. So the last term drops out: it is constant, its derivative is zero, and you are left with b_{ij}. So you see that both of the two properties of B are needed in order to construct the flux corrector. All right, so let me just say a few words about why we call this the flux corrector. It will come up in the following calculation. We look at u_ε minus u_0 minus — let's not worry about the boundary modifications of the corrector — just subtract the corrector term from the two-scale expansion, and you calculate the flux of u_ε, which is the gradient multiplied by the matrix A(x/ε), minus the flux of u_0, the gradient of u_0 multiplied by Â, and subtract the term with B which I defined earlier. In the calculation, you will see that you end up with this term plus a term that carries a factor of ε, so that part goes to zero. If you take the L^2 norm, you can just estimate the L^2 norms of these two terms, and you compute the gradient of w_ε: the first three terms are here, and the last term is pretty much the same term as the one here. So you will see that the flux corrector φ appears in the formula for B, in the same way that the corrector χ appears here. The function φ plays the same role as the corrector χ, except that one is for the gradient and the other is for the flux. So it is quite reasonable to call it the flux corrector. All right, so this is the main lemma we need to prove the convergence rate in H^1. Here w_ε is the same as defined before.
You have u_ε minus u_0, minus a term which is modeled after the third term of the two-scale expansion — so I need to do some smoothing, and I also need to do some cutoff near the boundary — but otherwise it is the same. Here Ω_δ is just a layer of width δ near the boundary of Ω. So the estimate is that if I integrate the matrix A(x/ε) times the gradient of w_ε against the gradient of a test function ψ, I can estimate this by the H1.