Okay, so how do you do this? Here we're going to use the co-area formula to generate a generic boundary condition. So first of all, you have the Caccioppoli inequality: the integral of |∇u_ε|² over B_{3/2} is bounded by the integral of |u_ε|² over B_2. That's just Caccioppoli, okay? And next, I claim that there is always a t between 1 and 3/2 so that the corresponding surface integral (the Caccioppoli inequality gives a volume integral, but now I want something on a surface) over the boundary of the ball of radius t is controlled by the right-hand side, possibly with a different constant. The way to argue is by contradiction: suppose no such t exists; then you simply integrate in t, and by the co-area formula you arrive at an inequality which contradicts the Caccioppoli inequality. So such a t always exists, okay? And then we're going to use this generic boundary data to generate the approximating function w. In other words, we solve L_0 w = 0, where L_0 is the homogenized operator, with boundary data u_ε on the boundary of B_t, okay? So w is actually the homogenized solution for u_ε, but on the ball B_t. And then we apply the convergence-rate theorem on the ball B_t, which contains B_1, so you can certainly pass from B_1 to B_t, and remember that the right-hand side involves the H¹ norm of the boundary data on the boundary of B_t; by the choice of t, that is bounded by the L² norm of u_ε on the ball B_2, okay? So that's one of the reasons that, when we were looking for a convergence theorem, we wanted the H¹ norm on the boundary, and nothing stronger than that, okay? Any questions about this? So this gives us the proof of the theorem. All right, so, yeah, we already did this. Boundary regularity.
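The contradiction step above can be written out in a couple of displays. This is a sketch with generic constants C_0, C_1, taking the surface quantity to be |∇u_ε|² + |u_ε|², which is what the H¹ boundary norm in the convergence theorem asks for:

```latex
% Caccioppoli on concentric balls:
\int_{B_{3/2}} |\nabla u_\varepsilon|^2 \,dx \;\le\; C_0 \int_{B_2} |u_\varepsilon|^2 \,dx .
% Suppose, for contradiction, that for every t \in (1, 3/2):
\int_{\partial B_t} \big( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \big)\, d\sigma
  \;>\; C_1 \int_{B_2} |u_\varepsilon|^2 \,dx .
% Integrating in t and using the co-area formula:
\frac{C_1}{2} \int_{B_2} |u_\varepsilon|^2
  \;<\; \int_1^{3/2}\! \int_{\partial B_t} \big( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \big)\, d\sigma \, dt
  \;=\; \int_{B_{3/2} \setminus B_1} \big( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \big)\, dx
  \;\le\; (C_0 + 1) \int_{B_2} |u_\varepsilon|^2 ,
% which fails once C_1 > 2(C_0 + 1); hence some t \in (1, 3/2) satisfies the surface bound.
```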
So how do we extend this idea to prove boundary regularity, for both Dirichlet and Neumann data? First of all, let's localize to the boundary. You need to be just a little careful here, because even though our domain is C^{1,α}, you do not want to rotate coordinates so that the tangent plane becomes your coordinate plane, since a rotation would destroy the periodic structure of the coefficients. But of course you can always choose coordinates so that the boundary is represented as a graph, okay? I can arrange ψ(0) = 0, but I cannot insist that ∇ψ(0) = 0 as well, okay? So you fix the boundary, and locally you look at D_r, the part of the ball B_r inside Ω, and the boundary part, which we call Δ_r; this is the portion of the boundary of Ω, okay? All right, so let's say we have a local solution near the boundary, with right-hand side F in D_2 and boundary data f on Δ_2; Δ_2 is part of the boundary of Ω. So for the Dirichlet problem, the first term is the same: instead of a ball, you look at this D_t, which roughly is the intersection of a ball with Ω, okay? That's just the same. You still take the infimum over linear functions; as you pointed out, there is no corrector involved here, just a linear function, okay? If you have a right-hand side, you put in the L^p average of F multiplied by t, that's the right scaling, and then you have to add terms which involve the boundary data. So you want to subtract a linear function from the boundary data, take the tangential derivative, and measure this in the L^∞ norm, and then you also want to measure the C^σ seminorm of the tangential derivative, properly rescaled. So that is the setup, okay? And then we go through the same scheme, and of course you have to prove a convergence rate and an approximation result in this context, which can be done.
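Schematically, putting together the terms just listed, the quantity driving the Dirichlet scheme looks like the following; this is just the scaling described above, written so that every term scales like a gradient (the precise normalizations are in the notes):

```latex
H(t) \;=\; \inf_{\ell(x) = E \cdot x + \beta}
\Bigg\{
  \frac{1}{t} \Big( \frac{1}{|D_t|} \int_{D_t} |u_\varepsilon - \ell|^2 \Big)^{1/2}
  \;+\; t \Big( \frac{1}{|D_t|} \int_{D_t} |F|^p \Big)^{1/p}
  \;+\; \big\| \nabla_{\tan} (f - \ell) \big\|_{L^\infty(\Delta_t)}
  \;+\; t^{\sigma} \big[ \nabla_{\tan} (f - \ell) \big]_{C^{0,\sigma}(\Delta_t)}
\Bigg\} .
```

Each term has the dimension of a gradient, which is what lets the one-step improvement iterate across scales.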
So in the notes I did this for the Neumann problem, and you can figure out the Dirichlet case yourself. All right, so let's go to Neumann, okay? For the Neumann problem, the setup is again the same for the first term: you look at the difference between your solution and a linear function, take the L² average, and take the infimum over all linear functions. The second term is the same; the last two terms are a little tricky. So here g is the Neumann data for u_ε, and somehow you need to subtract the conormal derivative of the linear function with respect to the homogenized operator. So here it is not ∂/∂ν_ε but ∂/∂ν_0; ∂/∂ν_0 is the conormal derivative for L_0. Otherwise it's the same: you have an L^∞ norm and a C^σ seminorm here. That is the setup. And in the actual notes I have all the details for the Neumann problem, with the full proof presented there. Do you have any questions? All right, so I want to mention some other results in periodic homogenization. In the last section of the lecture notes, I covered the Calderón–Zygmund theory. In particular, we can prove the following theorem, something called a W^{1,p} estimate. So assume A is elliptic and periodic; here we are dealing with all scales, so we have to assume some smoothness, but for the W^{1,p} estimate the so-called VMO condition is enough, that is, the closure of the continuous functions in the class BMO. And a C¹ domain suffices. You solve the boundary value problem with boundary value zero (you can also put boundary data g here in a suitable Besov space), and then you have this estimate. This is called a W^{1,p} estimate, but it's the same kind of estimate as the Calderón–Zygmund estimate for the Laplacian; so this is the Calderón–Zygmund estimate for the operator L_ε. The key is that the constant C is independent of ε.
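As a schematic statement of the theorem just described (with the hypotheses above: A elliptic, periodic, VMO; Ω of class C¹):

```latex
\operatorname{div}\big( A(x/\varepsilon) \nabla u_\varepsilon \big) = \operatorname{div} f
  \ \ \text{in } \Omega,
\qquad u_\varepsilon = 0 \ \ \text{on } \partial\Omega
\;\;\Longrightarrow\;\;
\| \nabla u_\varepsilon \|_{L^p(\Omega)} \;\le\; C_p \, \| f \|_{L^p(\Omega)},
\qquad 1 < p < \infty ,
```

with C_p independent of ε: a Calderón–Zygmund estimate uniform in the scale of oscillation.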
The constant depends on the exponent p, the domain Ω, the ellipticity constant, and the VMO modulus of the coefficients. There's a kind of refined real-variable argument, an improved version of the classical theory, presented in Section 5 of the lecture notes. Okay, so the other thing I want to mention is that a lot of this has been extended to the almost periodic setting. I'll just spend the last ten minutes, if I have them, telling you a little bit about almost periodic homogenization. So up to now, the assumptions we have on the coefficients are that they are elliptic and periodic. Okay, so can we go beyond that? Of course you can. So here is one case, the almost periodic case. First let's just define almost periodic functions. There are different classes of almost periodic functions; they all arise by taking the closure of the class of trigonometric polynomials with respect to a certain norm or seminorm. Okay, so you start with trigonometric polynomials; a trigonometric polynomial is just a finite sum of exponential functions. The coefficients a_l can be complex, but the exponents λ_l need to be in the Euclidean space ℝ^d, okay. So here I give you a function which is almost periodic but not periodic. You have sin(2πx), a periodic function with period one, and sin(√2 πx), a function with period √2. And because √2 is irrational, the sum of these two periodic functions is not a periodic function; there is no common period. But it is a trigonometric polynomial, so in particular it is an almost periodic function. Okay, so the smallest class is called uniformly almost periodic; it's also called almost periodic in the sense of Bohr. These are the functions obtained by taking the closure of the class of trigonometric polynomials with respect to the L^∞ norm, the uniform norm.
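Here is a small numerical check of that example. The integer shift z = 338 below is my own choice of "almost period": it comes from the continued-fraction convergent 239/169 of √2, so that 338·√2 = 478.0042… is very nearly an even integer; the shift leaves sin(2πx) unchanged exactly and moves the phase of sin(√2 πx) by only about 0.013 radians.

```python
import numpy as np

# f(x) = sin(2*pi*x) + sin(sqrt(2)*pi*x): a trigonometric polynomial, hence
# almost periodic, but with no exact period (periods 1 and sqrt(2) are
# incommensurable).  Almost periodicity means arbitrarily good "almost
# periods" exist; z = 338 is one of them.

def f(x):
    return np.sin(2 * np.pi * x) + np.sin(np.sqrt(2) * np.pi * x)

def sup_shift_mismatch(z, n=200_001, length=50.0):
    """Approximate sup_x |f(x + z) - f(x)| by sampling a dense grid."""
    x = np.linspace(0.0, length, n)
    return np.max(np.abs(f(x + z) - f(x)))
```

For the shift z = 338 the mismatch is about 0.013, while a generic integer shift such as z = 1 gives a mismatch of order one: the graph nearly matches itself, but only under carefully chosen translations.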
So in other words, we say a function is uniformly almost periodic if it is the uniform limit of a sequence of trigonometric polynomials, okay. That's the definition here. There are larger classes of almost periodic functions. All right, so here I show you the graph of this almost periodic function. It oscillates just like a periodic function, but the graph never repeats itself. Okay, so the meaning of the word "almost periodic" is that you can translate the graph so that it matches itself to within an arbitrarily small error. Okay, so the qualitative theory of almost periodic homogenization was done a long time ago, in the late 70s and early 80s. You still have convergence for the boundary value problem, okay? So A is elliptic and almost periodic, Ω is a bounded domain, and you solve the Dirichlet problem. As ε goes to zero, the solutions converge weakly in H¹ to a limit, and moreover the limit solves the boundary value problem for an operator with constant coefficients. Okay, so the proof can be done using the div-curl lemma, but it requires a different approach from the periodic case. One of the biggest problems when you go beyond periodicity is that you do not have a corrector. Remember, the corrector in the periodic setting was obtained by solving a periodic boundary value problem on the periodic cell using Lax–Milgram. Now, if your coefficients are not periodic, you do not have a periodic cell to work with. So for almost periodic coefficients there's no periodic cell; how do you find the corrector? Actually, the corrector may not even exist in the almost periodic setting. Okay, so nevertheless the qualitative theory can be done in an abstract setting involving a Weyl decomposition. All right, but to do the quantitative theory, people introduced something called the approximate corrector. The approximate corrector is defined through the following equation, okay?
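In the notation I'll use, the equation referred to here is the corrector equation with a zero-order regularization (λ = T⁻², the form discussed next):

```latex
-\,\operatorname{div}\big( A(y) \,( \nabla \chi_T + e_j ) \big) \;+\; T^{-2}\, \chi_T \;=\; 0
\qquad \text{in } \mathbb{R}^d .
```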
It's the same equation as the corrector equation, however with an additional zero-order term; T here is positive and large, okay? So the idea is that without this term, you are not able to solve this equation, certainly not with Lax–Milgram, because the bilinear form is not coercive in H¹ of the whole space. Okay, so the idea is to regularize the equation, and in this case you add a zero-order term to force coercivity. And then you can produce a solution, in fact a solution which is in H¹ locally, uniformly, satisfying these conditions. And then the idea, in the almost periodic setting and also in the random case, is to study the behavior of this approximate corrector as T goes to infinity, okay? So furthermore, in the almost periodic setting, there's an important quantity I want to mention here: how do you quantify the almost periodicity of a function? One way you can do that is to look at translations of your function. If you have a periodic function and you translate by a period, the graphs match perfectly. Okay, here what you do is take a translation by y and try to match it with a translation by z, where z is not allowed to be too big, and you look at the L^∞ norm of the difference. You call this function ρ(R), for R large, okay? So in particular, if A is periodic, this function is zero once R exceeds the period. But in general, you can actually prove that A is uniformly almost periodic if and only if this function goes to zero as R goes to infinity. So we're going to use this function ρ(R) to quantify, through its decay, how far your almost periodic function is from being a periodic function, okay? This function is used in a theorem regarding the Lipschitz estimate in the almost periodic setting, a theorem due to Scott Armstrong and myself.
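To make the approximate corrector concrete, here is a minimal one-dimensional finite-difference sketch; this is my own toy illustration, not from the notes. We solve the regularized equation with a periodic coefficient; in 1D the homogenized coefficient is the harmonic mean of a (here exactly 1/2), and for large T the flux a(χ_T′ + 1) should be nearly constant, equal to it.

```python
import numpy as np

# Solve  -(a(x) (chi' + 1))' + T^{-2} chi = 0  on (0, L),  chi(0) = chi(L) = 0,
# with the periodic coefficient a(x) = 1/(2 + sin(2*pi*x)).

def approximate_corrector_1d(L=10.0, N=1000, T=100.0):
    h = L / N
    x = np.linspace(0.0, L, N + 1)                 # grid nodes
    xm = 0.5 * (x[:-1] + x[1:])                    # cell midpoints
    a = 1.0 / (2.0 + np.sin(2 * np.pi * xm))       # a at midpoints a_{i+1/2}
    lam = T ** (-2)                                # zero-order regularization

    # Expanding the flux, the equation reads -(a chi')' + lam chi = a'.
    # Tridiagonal system for the interior values chi_1 .. chi_{N-1}:
    main = (a[:-1] + a[1:]) / h**2 + lam
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = (a[1:] - a[:-1]) / h                     # discrete a'
    chi = np.zeros(N + 1)
    chi[1:-1] = np.linalg.solve(A, rhs)

    flux = a * (np.diff(chi) / h + 1.0)            # a (chi' + 1) at midpoints
    return chi, flux
```

With these parameters the computed flux hovers around the harmonic mean 0.5 and χ_T stays bounded, which is the point: the zero-order term buys solvability at the price of a small, controllable perturbation.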
And so, under the condition that ρ(R) decays faster than some negative power of log R, we have the Lipschitz estimate for the Dirichlet problem and also the Lipschitz estimate for the Neumann problem, okay? This condition on A is not going to be optimal, and it's an interesting open problem whether you need any condition at all in the almost periodic case, okay? So I want to mention that smoothness is not the issue here. You can assume your coefficients are C^∞, uniformly almost periodic, and elliptic; the question is, do you then have even the interior Lipschitz estimate? That'll be a problem to think about, all right? All right, I want to thank you very much for taking this course, and good luck.