In the second lecture on Tuesday, we established a convergence rate in H1 with a first-order corrector, and also a first-order convergence rate in L2. In the last two lectures, today and tomorrow, we're going to look at another basic problem in quantitative homogenization: the uniform regularity of solutions. Here, uniform means that we're looking for estimates with constants independent of the small parameter epsilon. Today, we're going to present the method of compactness, and we'll use this compactness method to prove the large-scale Lipschitz estimate. Tomorrow, we're going to look at another approach to the regularity problem, not by compactness, but by the method of convergence rates, related to what we talked about in the second lecture. So let me just review what we have done. We're working with a family of second-order elliptic operators in divergence form, where the parameter epsilon appears in the denominator of the variable x in the coefficient. The basic assumptions are that the coefficient matrix A is real, bounded, measurable, and uniformly elliptic, and also that A is periodic. More precisely, ellipticity means that you have a positive constant mu: the matrix is positive definite, bounded below by mu, and we put an upper bound of mu to the power negative 1. And 1-periodic means that it's periodic with respect to the integer lattice. For notational simplicity, we're going to work with the scalar case, but everything we do today holds for second-order systems in divergence form. OK, so the question we're looking at is: what is the sharp regularity of solutions we can prove? We look at this equation with the operator L epsilon defined on the first page. Omega and F are fixed, and we ask: what is the sharp regularity of the solution, uniformly with respect to epsilon? Here we look at something we did in lecture 1.
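For reference, the operator and standing assumptions described above can be written out as follows (my reconstruction of the lecture's notation):

```latex
\[
\mathcal{L}_\varepsilon = -\,\mathrm{div}\bigl(A(x/\varepsilon)\nabla\,\cdot\bigr),
\qquad \varepsilon > 0,
\]
with $A = A(y)$ real, bounded, measurable, satisfying the uniform ellipticity condition
\[
\mu\,|\xi|^2 \;\le\; A(y)\xi\cdot\xi \;\le\; \mu^{-1}|\xi|^2
\qquad \text{for all } \xi \in \mathbb{R}^d \text{ and a.e.\ } y \in \mathbb{R}^d,
\]
for some $\mu > 0$, and the 1-periodicity condition
\[
A(y+z) = A(y) \qquad \text{for all } z \in \mathbb{Z}^d.
\]
```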
We showed that if you take the linear function x_k and add a scaled corrector to it, you actually end up with an entire solution in the whole space. So take this solution and take the derivative. The first term is a linear function, so its derivative is just a constant. For the second term, an epsilon to the power negative 1 comes out, which cancels the factor epsilon. So you don't need to worry about the first term; it's just a constant. What is the best regularity you can have, uniformly with respect to epsilon, for the second term? It's clear that unless the corrector is 0, you cannot expect this derivative to be more than just bounded; it's not even uniformly continuous. So the best regularity we can hope for is Lipschitz regularity. There's no C1,alpha here, and so there's no C1,alpha estimate here. The classical theorem I want to talk about, for which we're going to present a proof today using the compactness method, is due to Marco Avellaneda and Fang-Hua Lin back in 1987. So let's look at the assumptions. The superscripts here just indicate that the results also hold for second-order systems in divergence form. Assume the matrix is elliptic, periodic, and Hölder continuous, and the domain is C1,alpha. You solve a Dirichlet boundary value problem: the right-hand side is F, the boundary data is G. The theorem states that the gradient of the solution is uniformly bounded up to the boundary by a constant C, and the most important thing here is that C is independent of epsilon, times the Lp norm of the right-hand side plus the C1,sigma norm of the boundary data. This theorem is sharp on several fronts. As I mentioned, there is no C1,alpha; this is the best elliptic estimate up to the boundary you can hope for. But it's also sharp in terms of the assumptions, omega being C1,alpha.
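The Dirichlet estimate just described can be stated as follows (my reconstruction of the slide; here $p > d$ and $\sigma > 0$):

```latex
\[
\begin{cases}
-\,\mathrm{div}\bigl(A(x/\varepsilon)\nabla u_\varepsilon\bigr) = F & \text{in } \Omega,\\
u_\varepsilon = G & \text{on } \partial\Omega,
\end{cases}
\]
\[
\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}
\;\le\; C\Bigl(\|F\|_{L^p(\Omega)} + \|G\|_{C^{1,\sigma}(\partial\Omega)}\Bigr),
\qquad p > d,\ \sigma > 0,
\]
where $C$ is independent of $\varepsilon$ (Avellaneda--Lin, 1987).
```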
In general, even for harmonic functions, there's no Lipschitz estimate on C1 domains. The assumption p greater than the dimension is also sharp; you can see that from the representation with the fundamental solution, even for the Laplacian. And the C1,sigma norm is also sharp: you cannot have C1 here; that's not true even for harmonic functions. So that is the Lipschitz estimate for the Dirichlet problem. There is also a theorem for the Neumann problem. The assumptions are the same: the coefficient is elliptic, periodic, and Hölder continuous, and the domain is C1,alpha. But instead of the Dirichlet problem, we look at the Neumann problem. The equation is the same, and you have the conormal derivative specified on the boundary. Let me remind you: the conormal derivative associated with this operator is n dotted with the coefficient matrix multiplied by the gradient. This is just the flux in the normal direction; that's the conormal derivative of the function. And here you have the L infinity norm of the gradient, the Lipschitz norm, uniformly bounded by the Lp norm of f, p greater than the dimension, plus the C sigma norm of the Neumann data. Because the Neumann data is already posed on the derivative, you certainly don't put C1,sigma data as in the Dirichlet case; you have C sigma, where sigma is any number greater than 0. This theorem was proved originally by Kenig, Lin, and myself back in 2013, with the additional assumption that the coefficient is also symmetric. Later on, Scott Armstrong and myself removed that condition and also proved this theorem and the previous theorem in the almost periodic case, using the method we're going to talk about tomorrow. All right? So I will spend most of the time showing you the compactness method in a very simple setting: the interior estimate. This is due to Avellaneda and Lin.
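In formulas, the Neumann-problem analogue reads (again my reconstruction, with the conormal derivative as defined in the lecture):

```latex
\[
\begin{cases}
-\,\mathrm{div}\bigl(A(x/\varepsilon)\nabla u_\varepsilon\bigr) = f & \text{in } \Omega,\\[4pt]
\dfrac{\partial u_\varepsilon}{\partial \nu_\varepsilon}
:= n\cdot A(x/\varepsilon)\nabla u_\varepsilon = g & \text{on } \partial\Omega,
\end{cases}
\]
\[
\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}
\;\le\; C\Bigl(\|f\|_{L^p(\Omega)} + \|g\|_{C^{\sigma}(\partial\Omega)}\Bigr),
\qquad p > d,\ \sigma > 0,
\]
with $C$ independent of $\varepsilon$ (Kenig--Lin--Shen, 2013).
```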
In this theorem, unlike the previous two theorems where you assume the coefficients are Hölder continuous, the smoothness is only used at the small scale. So this theorem, you can think of as a large-scale estimate: there is no smoothness assumption on the coefficient. The theorem is true for bounded measurable coefficients. Everything can be scaled into the unit ball B1. You have a solution; for simplicity, let's say the right-hand side is 0 (you can have a similar estimate with a forcing term there; I just want to simplify the setting here). The estimate is the following: if you average the gradient of the solution squared on the ball of radius r, where r is any number between the parameter epsilon and 1 (we cannot go below epsilon; that would require regularity of the coefficients), this is bounded by the average on the ball of radius 1. And the constant C depends only on the dimension d and the ellipticity constant mu; in particular, it is independent of epsilon. Again, let me emphasize that no smoothness assumption is required for this theorem; this is a large-scale theorem. Are there any questions? So let's see why this is called a Lipschitz estimate, a large-scale Lipschitz estimate. It turns out that once you have this large-scale estimate, you combine it with the classical local estimate, and you get the Lipschitz estimate at full scale. This is a very simple blow-up argument. Say you put in the additional assumption that the coefficient is Hölder continuous of order lambda, where lambda is anything between 0 and 1. Then we're going to show that whenever you have a solution in the unit ball, the gradient at the center of the ball is bounded by the L2 average of the gradient on the ball of radius 1, where the constant now depends on the dimension, the ellipticity constant mu, and the lambda and M in the Hölder continuity assumption. So how do you prove this? We're going to do it in two steps.
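The large-scale interior estimate just stated, in display form (my reconstruction):

```latex
\[
\frac{1}{|B_r|}\int_{B_r}|\nabla u_\varepsilon|^2
\;\le\; \frac{C}{|B_1|}\int_{B_1}|\nabla u_\varepsilon|^2,
\qquad \varepsilon \le r \le 1,
\]
whenever $\mathcal{L}_\varepsilon u_\varepsilon = 0$ in $B_1$, with $C = C(d,\mu)$
and no smoothness assumption on $A$ beyond boundedness and measurability.
```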
First of all, we know that if the parameter epsilon is greater than or equal to 1/2, or any number bounded away from 0, then the estimate is classical. That is because if epsilon stays away from 0, the coefficient A(x/epsilon) is Hölder continuous uniformly in epsilon, so the classical Schauder estimate gives you this directly. That's the first observation. So now we can assume epsilon is between 0 and 1/2. First, we do a blow-up argument and rescale. You have a solution: L epsilon of u epsilon equals 0 in B1. Then you rescale: I'm going to call the rescaled function V, where you change the variable from x to epsilon x. So you blow up the solution, and you also multiply by epsilon to the power negative 1. You see that if you take the gradient of V, the epsilon that comes out cancels the epsilon to the negative 1, so you simply have the gradient of u epsilon evaluated at epsilon x. The point is that this function satisfies the equation with parameter 1; that's how the operator rescales. So V will be a solution for the operator L1, which is a classical operator, and we simply apply the classical C1,alpha estimate, or Lipschitz estimate, to V. You end up with this inequality: the derivative at 0 is bounded by the L2 average on the ball of radius 1. Once you have this, you just change the variable back, and this inequality is the same as that one: the left-hand sides are the same, and by a change of variables, the right-hand sides are also the same. So that gives this inequality here. This step used the Hölder continuity, but it does not use the periodicity; this is a small-scale estimate. The large-scale estimate comes in the second step: the average on the ball of radius epsilon is bounded by the average on the ball of radius 1, and you can put a gradient here or not.
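The blow-up step can be summarized by the following identities (my reconstruction of the rescaling):

```latex
\[
v(x) := \varepsilon^{-1}\,u_\varepsilon(\varepsilon x)
\quad\Longrightarrow\quad
\nabla v(x) = \nabla u_\varepsilon(\varepsilon x),
\qquad
\mathcal{L}_1 v = -\,\mathrm{div}\bigl(A(x)\nabla v\bigr) = 0 \ \text{in } B_1,
\]
so the classical Lipschitz estimate for the operator $\mathcal{L}_1$ gives
\[
|\nabla v(0)| \;\le\; C\Bigl(\frac{1}{|B_1|}\int_{B_1}|\nabla v|^2\Bigr)^{1/2},
\]
which, after changing variables back, is exactly
\[
|\nabla u_\varepsilon(0)| \;\le\; C\Bigl(\frac{1}{|B_\varepsilon|}\int_{B_\varepsilon}|\nabla u_\varepsilon|^2\Bigr)^{1/2}.
\]
```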
It doesn't really matter, because they're more or less equivalent by Caccioppoli's inequality on the ball of radius 1. That's the second step. The red inequality is the precise one; well, not this one, it should be this one: you take r to be epsilon. So that is the proof of the full-scale Lipschitz estimate, once you have the large-scale estimate. Do you have any questions? [In response to a question:] Because of Caccioppoli's inequality, they are more or less equivalent: I can go to B 1/2 with the gradient, and then get rid of the gradient by Caccioppoli's inequality from B 1/2 to B1. Thank you. Yes, whenever you have a local estimate, then it's OK. So this completely separates the small scale and the large scale; the large scale does not use any smoothness of the coefficient. All right, so now let's try to prove this. How does the proof go? This is the compactness approach, so we're going to have to work with a class of matrices, not just one matrix. I'm going to call M(mu) the class of all d-by-d matrices that are 1-periodic, so they have the same period, and also satisfy the same ellipticity condition with constants mu and mu to the minus 1. In all these calculations, mu is fixed, but the matrices are changing. So in the compactness argument, we work with all the operators of this form, where epsilon is positive and A is any matrix in the class M(mu). We're not just working with one matrix, but with a family of matrices, and A is also allowed to change, as long as it stays in the class M(mu). This is the reason that in the first lecture I insisted that we work with a sequence of matrices instead of one, although if you only want the qualitative theorem, you only need one matrix. So what is the theorem? We proved this in the first lecture. Say you have a sequence of solutions in a domain omega, epsilon k goes to 0, and A super k is in the class M(mu). And we're going to assume that this sequence is bounded in H1.
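In symbols, the class of coefficients and the family of operators used in the compactness argument would read (my reconstruction):

```latex
\[
\mathcal{M}(\mu) \;=\; \Bigl\{\, A = A(y) \in \mathbb{R}^{d\times d} \;:\;
A \text{ is 1-periodic, }\;
\mu|\xi|^2 \le A(y)\xi\cdot\xi \le \mu^{-1}|\xi|^2 \ \forall \xi \,\Bigr\},
\]
\[
\mathcal{L}_{\varepsilon} = -\,\mathrm{div}\bigl(A(x/\varepsilon)\nabla\,\cdot\bigr),
\qquad \varepsilon > 0,\quad A \in \mathcal{M}(\mu),
\]
where $\mu$ is fixed but both $\varepsilon$ and $A$ are allowed to vary.
```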
So the H1 norm is uniformly bounded in k. Then we claim that up to a subsequence, still denoted by u sub k, u sub k converges to u0 weakly in H1. That's not surprising, because any bounded sequence in a Hilbert space has a weakly convergent subsequence; not a problem. The interesting part, the key part, is that the limit actually satisfies an equation with constant coefficients: u0 is a solution of a second-order divergence-form operator with constant coefficients. That was proved on Monday, in the first lecture, and we're going to use this theorem today. Questions? OK, so let me briefly review how we proved it. First of all, we can find a subsequence which converges weakly in H1. We can pass to a further subsequence so that the effective matrices A k hat, a sequence of constant matrices, converge to another constant matrix; that's just compactness in Euclidean space. The limit satisfies the same ellipticity lower bound; the upper bound may be different, but it depends only on the dimension and mu. Then we use the div-curl lemma, or Tartar's method of oscillating test functions, to show that the flux also converges weakly in L2. Once you have this, you know the limit u0 will be a solution, because you can just multiply by a test function and integrate over omega, and that's what you need. That's what we did in the first lecture. So now let's see how we use the compactness to prove the interior regularity, using a compactness and iteration argument. There are two steps. Let's see how we set this up. Everything happens in a unit ball; we can always scale into a unit ball centered at the origin. We fix a number sigma, which doesn't have to be small, any number between 0 and 1. The claim is that there are two other small numbers, one called epsilon 0 and another called theta, with epsilon 0 between 0 and 1/2 and theta between 0 and 1/4, small.
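As a concrete illustration of the effective (homogenized) coefficient appearing in this compactness theorem: in one dimension the cell problem can be solved explicitly, and the effective coefficient is the harmonic mean of a(y). A small numerical sketch, where the coefficient a(y) = 2 + cos(2πy) is my own illustrative choice, not from the lecture:

```python
import numpy as np

# 1D periodic coefficient a(y) on [0, 1); an illustrative choice, not from the lecture.
def a(y):
    return 2.0 + np.cos(2.0 * np.pi * y)

# In 1D the cell (corrector) problem  (a(y)(1 + chi'(y)))' = 0  forces
# a(y)(1 + chi'(y)) = const = a_hat, so the effective coefficient a_hat
# is the harmonic mean of a, and chi'(y) = a_hat / a(y) - 1.
n = 20000
edges = np.linspace(0.0, 1.0, n + 1)
ym = 0.5 * (edges[:-1] + edges[1:])   # midpoints for the quadrature
dy = np.diff(edges)

a_hat = 1.0 / np.sum(dy / a(ym))      # harmonic mean of a over one period
chi_prime = a_hat / a(ym) - 1.0       # derivative of the corrector chi

# For a(y) = 2 + cos(2*pi*y), the harmonic mean is sqrt(2**2 - 1) = sqrt(3).
print(a_hat)                          # ~ 1.7320508
print(np.sum(chi_prime * dy))         # ~ 0, so chi really is 1-periodic
```

Note that a_hat is strictly smaller than the plain average of a (which is 2 here): the effective matrix is in general not the average of the coefficient, which is why the div-curl argument is needed in higher dimensions.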
These depend only on d, mu, and sigma; they do not depend on the particular matrix, only on the ellipticity constant mu. The claim is that whenever epsilon is less than epsilon 0, and you have a solution in the unit ball with some matrix A belonging to this class, then you have this estimate. What is this estimate? This is the average, on the ball of radius theta, of the solution minus its average on the ball of radius theta, and then you subtract this term, very much like a Taylor expansion. It is a Taylor expansion, except that we introduce the correctors into the formula. If you don't put in a corrector, the second term is just a linear function multiplied by the derivative; but instead of the derivative evaluated at some point, we put the average of the derivative. There's a good reason for that: we like averages, because pointwise the derivative does not converge strongly. So you measure this difference, and at scale theta it decays like theta to the power 2 plus 2 sigma, with the same sigma as here, times the average of u epsilon squared. So this is the setup; this is the first step, called a one-step improvement. Once we get this done, we're going to iterate this estimate from scale 1 all the way down to scale epsilon; that's the iteration. Let me tell you now why we need to have the corrector in this place. Certainly, without the corrector, this is still true, by the same proof. However, when you go to the second step and want to iterate, you realize that the constant is always a solution, but a linear function alone is not going to be a solution. So one important reason, though there's a much deeper reason than that, is that this expression is a solution to the operator L epsilon, and therefore you can iterate this inequality repeatedly, down to the scale epsilon. You'll see how this is done in the second step, in the second lemma.
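A reconstruction of the one-step improvement estimate in the lecture's notation: there exist $\varepsilon_0 \in (0,\tfrac12)$ and $\theta \in (0,\tfrac14)$, depending only on $d$, $\mu$, $\sigma$, such that if $\varepsilon < \varepsilon_0$ and $\mathcal{L}_\varepsilon u_\varepsilon = 0$ in $B_1$ for some $A \in \mathcal{M}(\mu)$, then

```latex
\[
\frac{1}{|B_\theta|}\int_{B_\theta}
\Bigl|\, u_\varepsilon - \langle u_\varepsilon\rangle_{B_\theta}
- \bigl(x + \varepsilon\chi(x/\varepsilon)\bigr)\cdot
\langle \nabla u_\varepsilon\rangle_{B_\theta} \,\Bigr|^2
\;\le\; \theta^{2+2\sigma}\,
\frac{1}{|B_1|}\int_{B_1} |u_\varepsilon|^2,
\]
where $\langle f\rangle_{B_r} := |B_r|^{-1}\int_{B_r} f$ and $\chi$ is the (vector of) correctors for $A$.
```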
For now, let's just look at how this lemma is proved. Any questions? [In response to a question:] No, we're working with an interior estimate, so there's no omega here. Later on, toward the end of the lecture, we're going to talk about the boundary Lipschitz estimate; this lemma will give you the interior Lipschitz estimate. We're working with a ball: you have a solution in the ball of radius 1 and an estimate on the ball of radius 1/2, so it's an interior estimate. OK, so how do you prove this? You prove it by contradiction; with compactness, you just argue by contradiction. All right? So I'm going to choose a theta; how I choose theta, I'll tell you later, there's a reason. We're going to assume that there's no such epsilon 0 for this chosen theta, and we're going to find a contradiction. If there's no epsilon 0, then the estimate fails for arbitrarily small epsilon. What that means is that there must be a sequence epsilon k going to 0, a sequence of matrices belonging to this class, and solutions in the ball of radius 1 for which the inequality is reversed. You can normalize so that the L2 norm is 1. Going back: the claim was that this quantity is less than or equal to theta to the power 2 plus 2 sigma times this integral, and here we're assuming it's strictly greater than theta to the power 2 plus 2 sigma, with the integral there already normalized to 1. Here I want to mention that there's a corrector, chi k, but this is the corrector for the matrix A super k; since we have a sequence of matrices, we have a sequence of correctors. But they're all uniformly bounded in L2, because mu is fixed. Then we're going to simply take the limit, and that will give the contradiction. How do we do it? Well, first of all, by Caccioppoli's inequality, you can go to a smaller ball: because the L2 norm of u sub k is 1, on a smaller ball
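The contradiction hypothesis, written out in the notation above (my reconstruction):

```latex
\[
\varepsilon_k \to 0, \qquad A^k \in \mathcal{M}(\mu), \qquad
\mathcal{L}_{\varepsilon_k} u_k = 0 \ \text{in } B_1, \qquad
\frac{1}{|B_1|}\int_{B_1} |u_k|^2 = 1,
\]
\[
\frac{1}{|B_\theta|}\int_{B_\theta}
\Bigl|\, u_k - \langle u_k\rangle_{B_\theta}
- \bigl(x + \varepsilon_k \chi^k(x/\varepsilon_k)\bigr)\cdot
\langle \nabla u_k\rangle_{B_\theta} \,\Bigr|^2
\;>\; \theta^{2+2\sigma},
\]
where $\chi^k$ is the corrector for $A^k$ and $\langle f\rangle_{B_r} := |B_r|^{-1}\int_{B_r} f$.
```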
it will be uniformly bounded in H1, and therefore you can find a subsequence that converges weakly in H1. We can also pass to a subsequence so that u sub k converges weakly in L2(B1), because the L2 norm of u k is 1. Then we go to another subsequence, a subsequence of the subsequence, so that the effective matrices of the A k also converge; they are bounded, so they have a convergent subsequence. This allows us to apply the compactness theorem I stated earlier: the subsequence converges weakly in H1. So let's look at what this subsequence satisfies. The average of u sub k squared over B1 is 1, and the average over B theta of u sub k minus its average, minus x plus epsilon k chi k of x over epsilon k (chi k here is vector-valued) dotted with the average of the gradient of u sub k, all squared, is greater than theta to the power 2 plus 2 sigma, OK? Now take the limit as k goes to infinity. For the first condition, by the weak lower semicontinuity of the norm on B1, I cannot get 1 in the limit, but I know it's less than or equal to 1. For the second condition: u sub k converges strongly in L2(B theta), so in the limit you get v minus its average. The corrector sequence is uniformly bounded in L2, so multiplied by epsilon k it goes away as epsilon k goes to 0, and you're left with just x. And the average of the gradient also converges, because of the weak convergence. This is why, as I mentioned earlier, you do not want to evaluate the derivative pointwise; you want to average it. So on B theta you get: the average of v minus its average minus x dotted with the average of the gradient, squared, is greater than or equal to theta to the power 2 plus 2 sigma. That's what I have on the bottom line there, OK? And that will be a contradiction. Why? The reason this cannot be true is that the limit v is a solution of a constant-coefficient equation, and with constant coefficients, if the right-hand side is 0, solutions are C infinity.
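Passing to the limit, the two conditions become (my reconstruction of the bottom line on the slide):

```latex
\[
\frac{1}{|B_1|}\int_{B_1}|v|^2 \;\le\; 1,
\qquad
\frac{1}{|B_\theta|}\int_{B_\theta}
\Bigl|\, v - \langle v\rangle_{B_\theta}
- x\cdot \langle \nabla v\rangle_{B_\theta} \,\Bigr|^2
\;\ge\; \theta^{2+2\sigma},
\]
where $v$ solves a constant-coefficient equation
$\mathrm{div}\bigl(\hat A\,\nabla v\bigr) = 0$ in an interior ball,
with $\hat A$ the limit of the effective matrices $\widehat{A^k}$.
```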
They are analytic, but all we need is C2, OK? So let's check that. You have this, copied from the last line, and then we use a Taylor expansion. This step is true for any function v, it doesn't have to be a solution: you bound this by the L infinity norm of the second-order derivatives on B theta, and since theta is less than one-quarter, I'll just go up to radius one-quarter here, OK? This red inequality is the C2 regularity estimate for elliptic systems with constant coefficients: the L infinity norm of the second derivatives is controlled by the L2 norm of the solution.
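The concluding computation, as I reconstruct it: for the constant-coefficient solution $v$, Taylor expansion around $0$ together with the interior $C^2$ estimate gives

```latex
\[
\frac{1}{|B_\theta|}\int_{B_\theta}
\Bigl|\, v - \langle v\rangle_{B_\theta}
- x\cdot \langle \nabla v\rangle_{B_\theta} \,\Bigr|^2
\;\le\; C\,\theta^4\,\|\nabla^2 v\|_{L^\infty(B_{1/4})}^2
\;\le\; C_0\,\theta^4,
\]
using $\|\nabla^2 v\|_{L^\infty(B_{1/4})} \le C\,\|v\|_{L^2(B_{1/2})} \le C$.
Since $2 + 2\sigma < 4$ (this is where $\sigma < 1$ is used), one may choose $\theta$
small enough that $C_0\,\theta^4 < \theta^{2+2\sigma}$, contradicting the lower
bound $\theta^{2+2\sigma}$ obtained in the limit. This is exactly the reason
$\theta$ had to be chosen first, before assuming no $\varepsilon_0$ exists.
```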