Thank you to all of you who came this morning. I'm sure you are looking forward to the end of this torture, mine and the school's. So today I want to move into the subject of homogenization for moving interfaces. But before that, I would like to finish with two things left over from the evolution of fronts, the motion of fronts. One is to recall this definition I gave last time, because I did it very fast: the geometric definition for a moving front, where I'm not going to write anything in math. We want to say what it means for this set here to move with a certain velocity. So we go inside, and we claim that the set moves with this velocity if it satisfies this inclusion principle. Namely, if you take a smooth set in the interior and start the same motion a little bit faster, you stay inside. One can develop this in an axiomatic way, figuring out for what kind of velocities you can have this inclusion principle. This should look to you a little bit like a maximum principle. It is the same proof as a maximum principle, because you let this thing go a little bit faster. If you remember the proof of uniqueness for the heat equation, for example, via the maximum principle, that's what we do: we look at that function and we try to create a maximum. Here we do the same thing, but on the level of the set: we let it move a little bit faster, and the first time the two sets intersect there will be a problem. But you only need to run it for a very short time, for which you have a smooth solution. So it's a very flexible definition, because, if nothing else, for the asymptotic problems we discussed it reduces everything to justifying expansions for smooth interfaces. The second comment I wanted to make is about the role of the traveling wave. All the proofs I did for problems of that form started with the traveling wave.
So in all the problems I had, eventually we were writing u epsilon of (x, t) as q of, let's simplify, the distance from the front over epsilon. And q, since we always assume that we have wells of equal depth, satisfies the standing-wave equation. That's what I mean by the traveling wave. Everything used that. When you do it in the pure cubic case, you see that the speed doesn't really depend on the traveling wave here: the speed of the traveling wave is zero. So in the end you just get mean curvature and you don't see anything else; the answer to that problem was mean curvature. That's what we got here. Then we noticed that when we put an x there, we got mean curvature — I didn't do the complete calculation — plus something that would have been a(x) dot n. I didn't do the computation, but if you remember, in the formal computation there was a term that looked like a derivative in y, and this was going to be paired with q dot in an integral, so it gives you something in the normal direction. And finally, when we added everything at the end — again I showed you how it works, although I did not do the calculation — we got anisotropic mean curvature flow, meaning the trace of some matrix that depends on the direction and the gradient, applied to the principal curvatures. And this matrix depends strongly on the underlying pulsating wave. So it's a natural question to ask: do we really need the traveling wave to do something? I wrote down the beginning of the proof; I never computed this theta of n. So do we need the traveling wave, or can we do the proof without it? All the proofs I know use the traveling wave. Technically, as I said the other day, you may think of it as playing the role of making the transition from being near minus 1 to being near 1; that's where all the action takes place. But you may argue that you can do something without using the traveling wave.
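Since everything hinges on that standing wave, here is a quick numerical sanity check — my own sketch, not from the lecture — that for the cubic nonlinearity f(u) = u − u³, with wells of equal depth at ±1, the profile q(z) = tanh(z/√2) solves the standing-wave equation q″ + q − q³ = 0 and makes the transition between the two wells:

```python
import numpy as np

# Sanity check (my sketch): for the cubic nonlinearity f(u) = u - u^3 with
# wells of equal depth at u = -1 and u = 1, the standing wave is
# q(z) = tanh(z / sqrt(2)), solving q'' + q - q^3 = 0, q(-inf) = -1, q(inf) = 1.
z = np.linspace(-10.0, 10.0, 20001)
dz = z[1] - z[0]
q = np.tanh(z / np.sqrt(2.0))

# Central second difference on the interior points.
qpp = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dz**2
residual = qpp + q[1:-1] - q[1:-1] ** 3

print(np.max(np.abs(residual)))  # tiny: q solves the standing-wave equation
print(q[0], q[-1])               # approximately -1 and 1: the two wells
```

The wave has zero speed, which is exactly why, in the pure cubic case, only mean curvature survives in the limit.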
For example, let's take this problem: I put here some smooth c, oscillatory, multiplying 1 minus u epsilon squared. And I put that there because the traveling wave that corresponds to the cubic is nothing else but the hyperbolic tangent. So that's a nice explicit function that makes the transition from 1 to minus 1. And you say, OK, I use that, it plays the role of the traveling wave, and from there on I do the proof. But this will not work in this case. When I say it won't work, I mean you are not able to control the error terms. And you would expect that to be the case, because you have this oscillatory term and there is no way to cancel it. For those of you who know a little bit about homogenization, you should think of the traveling wave as being a kind of corrector. If you have it, as in the pulsating periodic case, it's fine; if you don't have it, there is a problem. Which brings up a big open question: the same equation, with x over epsilon, but now instead of periodic I make it random, stationary ergodic. There is a huge literature on homogenization in random environments — I've done some work there, Felix and many others have, qualitative, quantitative, the theory has advanced — and this is an open problem. Why is it open? Because there are results by Zlatos that say that in dimensions bigger than 3 there are no, and I put it in quotes, traveling waves. There are some kind of traveling fronts, but no traveling waves. And I put it in quotes because what he proves is that there is not even the obvious generalization of the traveling wave, which would look like an approximate corrector, for those who know what I mean. And the example is not crazy. It's actually strange, because typically in all these problems that involve randomness, as you increase the dimension things become simpler, since the random thing has more room to move around.
So that is a little bit counterintuitive. The question then is: in that environment, what happens? In dimension 3 and higher, or even in 2D, how do you homogenize? I don't know how to do that; I think it's a very difficult open problem, and you may be prepared for some negative answers. For example, there is a work by Dirr and Luckhaus, I believe — I don't remember exactly, and there was someone else whom I'm not dismissing, I just don't remember — who looked at this problem. Let's use f here for the forcing, which is constant, and call the solution v, where g has some random structure. Loosely speaking, the result says: there exist environments — when I say there is an environment, I mean there is a class of g's — for which there is a critical f star, and on one side of this f star there exist stationary solutions of the problem. Stationary not in the sense of probability: time-independent solutions. As I said, I'm stating the result very loosely; they work with a particular class. When you have a result like that, it implies there is no homogenization. Since homogenization is long-time, long-space behavior, it means you're blocked. You may ask, what does this have to do with what I was doing there? So think of the original problem. And there is an x here, I think, as we will see in a minute; let me put an x there just in case. Now the variables are getting stupid: w, w, w. So think of the original reaction–diffusion equation we had. It's one or the other, I don't remember — it's the other way around, yeah, you're right: if f is very large, the thing should really move.
So let's go back to our original problem. If you look for a graph-like solution, the problem becomes — and if the graph is very small — no, I was talking about mean curvature motion as well; anyway, I want to do something else. So keep in mind that there may be obstructions to homogenizing in random environments, and in terms of the reaction–diffusion equations I discussed, this is an open problem. I suggest to the students: if you want to work on this problem, perhaps the best thing to do is to forget how the rest of the stuff was done, because it doesn't go through. You have to do something new. Several people have tried to take the proofs I showed you and extend them, and they are dead on arrival. OK, so that's about the fronts. The other remark I want to make is about the two-phase MBO scheme that Felix was talking about; I want to show you the viscosity proof, or at least the key step in the proof. The scheme was: you solve the heat equation up to time h starting from the characteristic function of some set, and then you threshold. I do my threshold at zero instead of a half. In Felix's lecture the threshold was a half; I will do it at zero. The half was there because of how planes move — planes don't move under mean curvature, so if you start the heat equation with a plane, the solution stays a plane — and it depends on whether you normalize things to be zero-one or minus-one-one. For me, chi will be the characteristic function of some set minus the characteristic function of its complement. So you solve the heat equation for time h and then you make the result plus-minus one again by looking at the sign, and this is the algorithm: chi at n plus one is the sign of S(h) applied to chi at n, where S(h) of chi is the solution of u_t minus Laplacian u equals zero with u at t equals zero equal to chi.
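The algorithm just stated can be sketched in a few lines — a minimal toy implementation, with the grid size, time step h, and the Fourier form of S(h) being my own choices, not from the lecture. A circle should shrink, consistent with motion by mean curvature:

```python
import numpy as np

# Minimal threshold-dynamics (MBO) sketch with threshold zero, as in the text:
# chi is +1 on a set and -1 outside; diffuse by the heat semigroup S(h), then
# take the sign.  Grid, h, and the Fourier form of S(h) are my own choices.
N, h, steps = 256, 5e-4, 40
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
heat = np.exp(-h * (KX**2 + KY**2))  # Fourier multiplier of S(h) on the torus

chi = np.where((X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.3**2, 1.0, -1.0)

def radius(chi):
    # effective radius recovered from the area of the +1 phase
    return np.sqrt(np.mean(chi > 0) / np.pi)

r0 = radius(chi)
for _ in range(steps):
    u = np.fft.ifft2(heat * np.fft.fft2(chi)).real  # u = S(h) chi
    chi = np.where(u >= 0.0, 1.0, -1.0)             # threshold at zero

print(r0, radius(chi))  # the circle shrinks, as motion by curvature predicts
```

For motion by curvature a circle satisfies r(t)² = r(0)² − 2t, so after total time steps·h = 0.02 the radius should drop from 0.3 to roughly √0.05 ≈ 0.22.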
Now this connects a little bit with the way I described all these motions at the beginning, by saying that it's enough to look at functions which are plus-minus one. I don't want to go through the actual proof — the actual proof looks at the lim sup and the lim inf of these chi's and tests them with smooth functions to use the viscosity definition — but since I never did that, I want to show you the basic computation. The basic computation is the following. Say you are at a point (x, t) where the limit is one, and say chi at n plus one at (x, t) is one. We are also going to assume that chi at n has a second-order expansion at the point (x0, t0). All these are assumptions, but it is not difficult to come down to them, because we will never work with the chi n themselves: we will always work with something that touches the interface from above and below. So I will have an inequality instead of an equality, but I will have smooth functions. So, assuming a second-order expansion at the point (x0, t0 minus h) — I'm moving back by h — I write down the expansion. I have one, because I'm evaluating at the point where the value is one, and it equals S(h) applied to: chi n at (x0, t0 minus h), which I treat as a constant; plus the linear part, the gradient of chi n at (x0, t0 minus h) times (x minus x0); plus the quadratic part with the second derivative. Since this is a formal computation, I don't care about the remainder. You will look at that and say, well, this is crazy.
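The one analytic input this expansion needs is how the heat semigroup acts on the pieces of a second-order expansion. A sketch of the relevant fact — my own check, not from the lecture: under u_t = Δu, affine functions are invariant (planes don't move), and a quadratic ½ xᵀAx simply gains t·tr(A):

```python
import numpy as np

# The analytic input the expansion needs (my own check): under the heat flow
# u_t = Laplacian u, affine functions are invariant, and a quadratic
# (1/2) x^T A x just gains t * tr(A).  Probabilistically, the solution is
# u(x, t) = E[u0(x + sqrt(2t) Z)] with Z standard normal.
rng = np.random.default_rng(0)
d, t = 3, 0.1
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.5],
              [0.0, 0.5, -1.0]])  # symmetric
x = np.array([0.4, -0.2, 1.0])

u0 = lambda y: 0.5 * y @ A @ y

Z = rng.standard_normal((2_000_000, d))
samples = x + np.sqrt(2.0 * t) * Z
mc = np.mean(0.5 * np.einsum("ni,ij,nj->n", samples, A, samples))

exact = u0(x) + t * np.trace(A)
print(mc, exact)  # agree up to Monte Carlo error
```

The linear term disappears in the expectation because Z has mean zero, which is the precise sense in which planes don't move.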
I mean, if we knew the thing was smooth, we would know how to do the proof; but the whole point of the viscosity-solution framework is that it reduces everything to the places where things are smooth, so you don't lose anything. Now, if I do that expansion — S(h) is a linear operator. Applied to the affine part, it gives it back, because that is a plane and planes don't move under the heat flow; that's where the zero comes from, or the half, depending on the normalization — OK, zero for me. The constant term comes out as it is and cancels the one. And then the only thing left to do is to apply the heat semigroup to the quadratic form coming from D² chi n at (x0, t0 minus h), normalized by the square root of h. So what you are doing is applying the heat equation to a quadratic and finding the relation you get, and that's where the answer comes from. So that's the calculation; that was an aside to get rid of this. And now I'll go into homogenization. I assume there is no time to give you a real introduction to homogenization. I once looked at Wikipedia for a popular definition of homogenization, and it's hard to find one, so I decided to write my own, which is: the process of making a soup. You take a lot of ingredients, you throw them all in, you stir very hard, and you get something — and when you get this something, you don't really know where it came from. You cannot undo it, but it makes everything else look the same. The second definition of homogenization I had comes from something that happened in my life. I remember, when we were enrolling the kids in school, a Montessori school, someone asked the director of the school: what kind of holidays do you celebrate? Now, the US is not Italy or Greece; there is an uncountable number of religions.
And so the director of the school said: well, in order not to offend anybody, we only celebrate Valentine's Day and Thanksgiving. That is another example of homogenization: you homogenize all religions and you come down to Valentine's Day. So that's the non-mathematical definition; the way it is usually described is the following. You have a problem — an equation, say — that depends on many variables and on some scale. There are two ways to think about where the epsilon comes from: either you have something very inhomogeneous and you look at it very closely with a microscope and a magnifying lens, trying to see what happens; or you have something of order one and you keep going further and further away, trying to see the average picture. If you know there is a road that is very windy, and you get the opportunity to go to space — which I haven't — I bet that from up there you will not see the minute ups and downs, rights and lefts. The way you describe this mathematically is you introduce a scale, which measures either how far away you go or where you are at the micro scale, and you come up with a problem of the form F epsilon of u epsilon equals zero, where u epsilon is the solution of your model and F epsilon is some equation. Then of course you would like to let epsilon go to zero, because that's where all the action takes place — that's how you go far away, or look extremely close. And the theory would like to say that there exists another equation, F bar. You don't construct it from the solution; the theory says there exists an F bar such that, if you solve F bar of u bar equals zero, then u epsilon converges in some sense to u bar. In a nutshell, that's what homogenization does.
Once you have the limit — I would call that the qualitative theory — then of course you become interested in the properties of F bar, because that's the key thing, the homogenized operator, and also in a rate of convergence; that's the quantitative theory. A lot of times the two don't go together: you have to develop different tools. OK, so let's now go to fronts. I'm going to consider fronts whose normal velocity is v equals delta times the trace of theta(n, x) applied to Dn, minus a(n, x), where n is the direction. The PDE version of that is the corresponding level-set equation; let's make theta the identity for now, it's already complicated enough. So you have some anisotropy here, and you want to see the behavior for long time and large space, which leads to the hyperbolic scaling x over epsilon, t over epsilon: you define u epsilon of (x, t) as u of (x over epsilon, t over epsilon). At the level of the PDE, that gives rise to the following homogenization problem: that's the equation we have, and we want to see what happens as epsilon goes to zero. The claim will be that, on the level of the velocity, there exists some H bar of p, positively homogeneous of degree one, and therefore it can be used to define a velocity — yes? No, no: mean curvature flow for me is always the trace of (identity minus Du tensor Du over |Du| squared) times D squared u; I don't look at graphs, I look at the whole thing. This is the gradient; the mean curvature operator is not applied to a graph. I have done the scaling I need; it's the second derivative. No, no, I understand, thank you. The delta is there because I want to write the results for both cases, whether there is curvature or not.
So for me the delta is bookkeeping; there is nothing magic about it. The delta will be either zero or one in what I'm doing; I just want to state what happens. So, in a parenthesis here: there exists an H bar like that. Since it is positively homogeneous of degree one, it can define a velocity, such that u epsilon converges to u bar and u bar solves that equation. That's the homogenized equation now, and that's the homogenized velocity. I'm sorry for being more disorganized than usual today, but there was one more thing about front propagation I wanted to tell you before I go to that, and it connects with Brownian motion, the thing I said the first day. If I look at that reaction–diffusion equation — let me make things very simple, the original one with f cubic — then we learned that this gives you mean curvature flow. If I look at that problem — in other words, if I have my potential and I disturb the wells by order epsilon, because that's what this term does — then the claim is we get mean curvature flow plus a universal constant alpha times c. Alpha is universal for the model; alpha is independent of c. I didn't prove this, but it is actually contained in all the proofs I showed you. You go a little bit further and you realize that it doesn't matter if c is a function of time, because then you just get a function of time here. And if you put here a c epsilon, and c epsilon converges to some c which is continuous, c nice, then you get again the same answer. So now I ask the question: what happens if c epsilon converges to something, but the limit is merely continuous?
If there is any justice, since you have the model with c epsilon there, the limit should be mean curvature plus alpha c(t); but if the limit is merely continuous, because of the method of proof, you have a problem. Let's continue along this line. What if c epsilon is a smooth approximation of white noise? B is a Brownian motion, dB is white noise. What do you get? You expect to get the flow that moves by mean curvature plus alpha dB. If you remember, that's something I mentioned the first day, and that's how you get it. As a matter of fact, I think I also mentioned an open question the first day: what happens if from the beginning you put Brownian motion there? What can you do? The result is that it will not work — this is too violent to do the job — but if you put a nice approximation, you can pass to the limit. So that's how you get this problem I mentioned the first day. OK, so now I go back to my fronts, and I want to give you a bunch of results about this homogenization. Unfortunately, to do it I will have to use facts that at this point you either know or you don't, because there is no way to state full theorems. So: this was the scaling, the long-time, large-space behavior. I also want to do the quadratic scaling, but let's stick with this for now. Let's start with some negative results. The first one: I take delta equal to zero, and for starters a of x over epsilon, periodic. I want to discuss some examples in the case delta equals zero, which means I have a front that moves with normal velocity a(x over epsilon); or, if you like, I'm trying to homogenize this equation. I may have messed up the signs — my signs change from line to line, I'm like that — but if you follow what I'm doing, the sign is not going to be an issue.
I don't think this is going to disturb you. OK, so what happens in the limit here? If a is strictly positive or strictly negative — if a has a fixed sign — there is homogenization. Things average out well, and when I say there is homogenization, I mean there is a limit like that. This works for a periodic and almost periodic; but let's drop that and keep it periodic. So if a has a fixed sign, there is homogenization. What does a fixed sign mean? It means the front really moves all the time: with positive normal velocity it only goes out, and with negative it only comes back. So what happens if a changes sign? The answer is more complicated. For those of you who know homogenization, this is the question of what happens when you destroy the coercivity. The front will go forward or come back, maybe doing that many times. And the answer is very illuminating: sometimes you have homogenization and sometimes you don't. It's like what the famous applied mathematician said recently at a conference, presenting some of his results: well, this theorem sometimes is true and sometimes is not. So along this line: sometimes you homogenize, sometimes you don't. Now, what homogenization means is clear, at least mathematically, at least to me: you start with a problem, you let epsilon go to zero, you get something new. Non-homogenization is not so clear: it means this doesn't work, but it may fail in many ways. So non-homogenization splits afterwards into two parts, at least in the kind of theory I'm describing: either the u epsilon, as epsilon goes to zero, go along subsequences to different limits — and therefore they cannot all solve one homogenized equation — or there is what people call trapping.
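The fixed-sign case can be caricatured in 1D — my own toy model, not from the lecture, where the front position just solves an ODE. For a strictly positive 1-periodic speed, crossing one periodicity cell of size ε takes time ε·∫₀¹ ds/a(s), so the averaged speed is the harmonic mean of a:

```python
import numpy as np

# 1D caricature of the fixed-sign case (my own toy model, not from the
# lecture): a front at X(t) with dX/dt = a(X/eps), a > 0 and 1-periodic.
# Crossing one cell takes time eps * int_0^1 ds / a(s), so the averaged
# speed is the harmonic mean of a.
a = lambda s: 2.0 + np.sin(2.0 * np.pi * s)

def front_speed(eps, T=20.0, dt=2e-4):
    X = 0.0
    for _ in range(int(T / dt)):
        # classical RK4 step for dX/dt = a(X/eps)
        k1 = a(X / eps)
        k2 = a((X + 0.5 * dt * k1) / eps)
        k3 = a((X + 0.5 * dt * k2) / eps)
        k4 = a((X + dt * k3) / eps)
        X += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return X / T

s = np.linspace(0.0, 1.0, 100000, endpoint=False)
hmean = 1.0 / np.mean(1.0 / a(s))  # harmonic mean; equals sqrt(3) here

print(front_speed(0.05), hmean)  # the averaged speed is the harmonic mean
```

This is only the 1D picture of "things average out well" when the sign is fixed; in higher dimensions the effective speed also depends on the direction of the front.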
Trapping means that the solutions u epsilon converge to the initial datum — or, if you think of the front as moving far out, that it gets stuck by something; that's the word trapping. So non-homogenization is that, and let me give you some examples. As a first example, I take this problem, which is made up — I'm not going to claim it has much to do with fronts — and I want to homogenize it; I want to let epsilon go to zero. What do you think will happen? I'm not going to do to you what Felix did, but I will say what I always tell the students when I teach: when I stop and ask a question, first, I expect the answer to be simple — say, I don't expect the answer to be 10 to the 16th over whatever — second, it has to be something you can do, and third, it has to be related to what I just said. So I've given you all the hints. I want to see what happens to that as epsilon goes to zero. This is the x-axis. Formally, these are the points where the cosine is zero. Where the cosine is zero, the solution doesn't move, and these points come closer and closer together as epsilon goes to zero. So the solution converges to u zero as epsilon goes to zero, and that is what I call trapping. Now let's take this example and make it a little more front-looking. I will keep that, but now I'm in R two, with a function of x, y and t, and at time zero this thing is y. In other words, I have this plane — the space is (x, y) — and I let it move with this velocity. What do you think will happen? Now that's harder; think about it. I did the trapping case, and I told you the other way to fail homogenization is that the limits are not all the same — a sad situation. What happens here is that you get two solutions.
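The cosine example admits the same 1D caricature — again my own sketch, assuming the front position solves an ODE: with velocity cos(x/ε), the front is attracted to the nearest stable zero of the cosine, which sits at distance of order ε, so the total displacement vanishes as ε → 0:

```python
import numpy as np

# Trapping caricature in 1D (my own sketch): a front with velocity
# cos(x/eps) solves dX/dt = cos(X/eps).  The stable zeros of the cosine sit
# at distance O(eps), so the displacement vanishes as eps -> 0.
def displacement(eps, T=10.0, dt=1e-3):
    X = 0.0
    for _ in range(int(T / dt)):
        X += dt * np.cos(X / eps)  # explicit Euler is enough for the picture
    return X

for eps in [1.0, 0.1, 0.01]:
    print(eps, displacement(eps))  # displacement ~ eps * pi / 2 -> 0
```

Starting from X = 0, the front climbs to the zero at ε·π/2 and stops: the solution converges to its initial position, which is the trapping.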
There is a maximal solution, which is y plus t, and a minimal solution, which is y minus t, and all the limits bounce around in between. So here is an example of no homogenization, and the reason is that at some points you are going up and at some points you are going down. There is no limit — any limit point will oscillate between these two. I'm not saying that any function in between arises, but the limits oscillate. The actual theorem is that the lim sup of u epsilon, in the way I define the lim sup, at (x, y, t) is y plus t, and the lim inf is y minus t. There is no limit. Now you may say: what is failing here is that the thing starts oscillating so wildly that you cannot control it. Indeed, you may claim this has to do with the fact that the u epsilon are not uniformly Lipschitz, so they can go crazy. That's an example of no homogenization. Let me give you another example where both things happen. I have an array of balls; inside each ball, a dips from one down through zero to negative values, and outside the balls a is identically one. I'm always looking at this problem with such an a, and the array is infinite. So let's start with a plane here as the initial condition and start moving. Outside the balls the plane has velocity one, so it keeps going like a plane until it hits the balls. Now let me give you a dramatic analogy: suppose there were some obstacles on the floor — this is not exactly what's happening here — and you take an elastic band and you move it along the plane, no friction, whatever, you just move it.
My friend there holds the other end, and we are moving; the thing goes smoothly, and then there are the obstacles. The moment the band hits the obstacles, it starts bending, because we keep moving, so at the obstacles it slows down. Remember, the picture has this extra structure: once the front comes into a ball, it starts slowing down, because outside it still goes with velocity one but inside it goes with a velocity between one and zero. We keep going — the picture is qualitative, not quantitative — and at some point the velocity hits zero. So we are in the case where the front comes in like that, and at this point it cannot move; it gets stuck. Zero means it doesn't move, the same way we said that wherever the cosine is zero it doesn't move; but the other sides keep going, keep pulling. If this were an elastic band, what would happen? It would break at some point — maybe you need to pull it all the way down there, but at some point it breaks. However, if this were an electric signal traveling along, and you stand on the other side to see what comes through, what you see is that, essentially instantaneously, the front reconstitutes itself as a plane and starts going again. In a heuristic way, you can say it leaves something behind, but the part that goes on again looks like a plane. That was a very hot topic at some point — maybe even when Sigurd and I were a little bit younger — and it has to do with Star Trek. I don't know how much Star Trek you watched, but there was this cloaking device the Klingons had that could make their ship invisible. Think of the Klingon spaceship as being this thing here: if you are on the other side, you never know it's there, because the only thing you see is the signal coming through. In a more modern world, that thing could be one of the stealth planes that fly very high and don't want to be seen.
And that's the phenomenon you see. Of course, the issue for them is not to prove what we write down; the issue is to construct a material that has this behavior. OK, so what is the mathematical result? The result — now I put the epsilon in, because it's easier at this scale — is that the u epsilon converge weakly, in L infinity weak star, to a linear combination of two things: theta times u zero plus one minus theta times u bar, where u bar is something that moves with the effective speed H bar corresponding to a plus, the positive part of a, and theta is the mass fraction of the set where a is negative. So this is the case where both things happen: trapping and homogenization together. It's like I said: you leave something behind, and how much you leave behind — the portion — is given by how much negativity you have; the rest of the stuff moves. And you can justify this precisely because it is weak convergence; I'm not talking about more than that, it's weak convergence in L infinity weak star. That's the theory. I stated everything in the periodic case; all these things hold true in the random case, in the appropriate sense, but one has to introduce an additional assumption, namely that the map p maps to a(p over |p|) times |p| is convex. This example you can still do, but you have to make an assumption, because the important thing here is not the periodicity: it is that in one direction there is something unbounded along which you can go forever. So in the random case you have to assume, as in percolation, that there is a component that goes all the way out. Anyway, I'm not going to have time to do that. So now I want to reinstate the delta: I take delta equal to one and study the same question. Now the problem is not well understood, and anything can happen.
So let me write down the PDE now; for some reason the a now becomes v, I don't remember why. And I ask the same questions: do I have homogenization or not? Here things start becoming more complicated. In dimension one, yes, but the problem is fake there; let me leave it out — no, let's forget the dimension for a moment. What would you like to say? Let's first take the case where v does not change sign. That's the first case we look at, because it looks very much like the first nice case above: after all, the only thing I added is the mean curvature term multiplied by epsilon. That's the closest you can do. And here the answer is weird. In 1D the answer is yes; in 2D the answer is yes. The 1D case doesn't even have a reference, because it's a computation you do and you see it. The 2D case is a very nice result of Caffarelli and Monneau. And in dimension d greater than or equal to three, the answer is no in general — when you give a no, you always have to say "in general"; there is no result saying it never homogenizes — and that is also due to Caffarelli and Monneau. And there is a provisional yes: yes, if you make an additional assumption, roughly that v has to dominate d minus one times a quantity involving the gradient of v — the precise constant is one or two here. You need this assumption, which is not so beautiful, and that is due to Cardaliaguet and myself. Notice that in 1D the assumption is automatically satisfied. So there is this discrepancy: if you want homogenization, you either assume the condition or you stay in 1D or 2D. In 1D it is automatic, and in 2D you don't need to assume anything beyond positivity — that's the result of Caffarelli and Monneau. But 2D is very special, because the moving object is a curve, and many things happen when you work with curves.
The counterexample — when I say no homogenization, I should really say trapping — is this: what Caffarelli and Monneau did is construct an object that is a stationary solution of that problem without the epsilon. Take away the epsilon; there is a stationary solution. Therefore, if you put it there and start with a front behind it, the front starts moving and gets stuck. Okay? Now comes the condition (★). Where does it come from? That condition is very sharp: it is necessary and sufficient for Lipschitz bounds on u^ε. Let me write it here. (★) is necessary and sufficient for Lipschitz bounds on u^ε which are independent of the sup norm of the solution. To get Lipschitz bounds that depend on the sup norm is simple; to get bounds that are independent of the sup norm is the important thing. And the reason you want that is that you will rescale this problem all the time, so you want a bound that does not change as you rescale the solution in space. (★) is a necessary and sufficient condition for that. Under that condition we were able to do this in the periodic case. If you think of the random case: for the first point, under some conditions you will see what you need in order for it to work; for the second, it is not known in 2D whether you need anything or not. And under condition (★), in all dimensions, Armstrong and Cardaliaguet proved homogenization in the random case. But they need the Lipschitz bound; it is critical in their proof. So it is under that condition that it can be done, and you see there is a gap in between. So now the next question — as you see, I am not proving anything today, I am just waving my hands, and everything I am doing is under that scaling. A question left open here is, for example: is (★) really necessary and sufficient for homogenization in dimension three and above?
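The scaling remark can be made explicit; this is my own reconstruction, written for the level-set formulation. If u solves the unscaled problem, then

```latex
u^\varepsilon(x,t) := \varepsilon\, u\!\left(\tfrac{x}{\varepsilon},\tfrac{t}{\varepsilon}\right)
\quad\text{solves}\quad
u^\varepsilon_t \;=\; \varepsilon\,\mathrm{tr}\!\Big(\big(I-\widehat{Du^\varepsilon}\otimes\widehat{Du^\varepsilon}\big)D^2u^\varepsilon\Big) \;+\; v\!\left(\tfrac{x}{\varepsilon}\right)\big|Du^\varepsilon\big|,
```

and under this scaling ‖Du^ε‖_∞ = ‖Du‖_∞ while ‖u^ε‖_∞ = ε‖u‖_∞: a Lipschitz bound survives the rescaling, whereas a bound depending on the sup norm does not.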
Or, after all, what is the threshold between the yes and the no? If you think about what is happening here, it is a little related to what I tried to write earlier, about the result of Luckhaus and Dirr, because in their case there is an obstacle and things do not move: in 3D there is an obstacle, a stationary solution. The results went the other way around: first we proved ours, and then they found this — we did not know it existed. Okay. So now I want to look a little at what happens if v changes sign. I want to write down a very particular result for that. It looks very specific, and then I will tell you what the specific condition means; I learned what the condition meant from a very nice paper by Barles, Cesaroni, and Novaga. But first let me write the result. So we are going to work in 2D. The u^ε will again be a function of x, y, t, and we look at the problem u^ε_t − ε (mean curvature term) − v(x/ε)|Du^ε| = 0. And I am going to assume that v is Lipschitz. These are all my assumptions; this is the problem in 2D. Be careful: when I write x/ε, the oscillation is only in one dimension — you see, the x does not see the y. There is no y-dependence here. Remember, the problem has x and y, but only x appears in v. Let me introduce a little more notation, because it will be important. Call capital V the anti-derivative of v; everything is 1-periodic on the unit interval, so the mean of v is V(1) − V(0). I will write — sorry for all this, but you will see how they come in — v̲ for the min of v, v̄ for the max of v, V̲ for the min of capital V, and V̄ for the max of capital V. And now I am going to give you a bunch of results.
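To fix this notation numerically, here is a small sketch — my own, with the normalization V(0) = 0 and function names that are mine — computing v̲, v̄, V̲, V̄, and the mean of v for an example of the form v(x) = θ sin(2πx):

```python
import numpy as np

def front_quantities(v, n=100_001):
    # sample v on [0,1] and build its anti-derivative V by trapezoid rule, V(0) = 0
    x = np.linspace(0.0, 1.0, n)
    vx = v(x)
    Vx = np.concatenate(([0.0],
                         np.cumsum((vx[1:] + vx[:-1]) / 2.0) * (x[1] - x[0])))
    return {
        "mean_v": Vx[-1] - Vx[0],          # mean of v equals V(1) - V(0)
        "v_min": vx.min(), "v_max": vx.max(),
        "V_min": Vx.min(), "V_max": Vx.max(),
    }

theta = 7.0                                 # just above 2*pi
q = front_quantities(lambda x: theta * np.sin(2.0 * np.pi * x))
```

For this v one gets mean zero, v̄ = −v̲ = θ, and V̄ − V̲ = θ/π (with the chosen normalization V̲ = 0).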
Again, this is joint with Cardaliaguet and Lions, in the paper I mentioned. First result: if the average of v is zero and a certain quantity — roughly, the oscillation of V — is less than two, then there is trapping. The proofs are technical and based on the fact that, because we are in 2D, we can reduce the problem to 1D, work there, and then deal with ODEs. So here we have trapping. We are in Italy, so here is my question for you, until I finish stating the results: what does this condition mean in terms of the vector field v? There is an interpretation of it which I did not know and learned only later. Second result, a specific example: if I take v(x) = θ sin(2πx) with θ bigger than 2π, then there is no homogenization, because we obtain a maximal and a minimal limit that differ. I take that v, and I take the initial datum u^ε(x, y, 0) = y, so I am exactly in this picture: I have a maximal and a minimal limit. It turns out that a vector field with this property belongs to the following class: there is a function g and a δ such that a certain integral of g over a set E is bounded by the perimeter of E in the unit cube minus δ, for every set E in the unit cube. That, it turns out, is the meaning of the condition. Okay, so: no homogenization there. Is there a positive result? The answer is yes. Third result: if I take v with non-zero mean — since the mean is not zero, it is either positive or negative; say positive — and the relevant oscillation-type quantity is again less than two (or the symmetric condition in the negative case), then there is homogenization, and there is information about the homogenized H̄. Okay. This is a very special model — I mean, where do all these numbers come from? So let me tell you where this comes from.
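A 1D caricature of trapping — my own illustration, not from the lecture: drop the curvature term and move a point front by ẋ = v(x/ε) with the sign-changing forcing v(y) = sin(2πy). The front slides to the nearest stable zero of v(·/ε) and stops, so the total displacement is O(ε). The parameter choices (ε = 0.01, starting point 0.003) are mine:

```python
import numpy as np

def trapped_front(x0, eps=0.01, T=0.5, dt=1e-4):
    # forward Euler for x' = sin(2*pi*x/eps); the sign-changing forcing pins the front
    x = x0
    for _ in range(int(T / dt)):
        x += dt * np.sin(2.0 * np.pi * x / eps)
    return x

x_final = trapped_front(0.003)
```

Starting from x = 0.003, the front is pinned at the stable zero x = ε/2 = 0.005 (where sin(2πx/ε) = 0 with negative slope): a displacement of order ε, after which nothing moves.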
The results — at least the first and the third — are in some vague sense equivalent to looking at the ODE y′ = −a√(1 − y²) + v(x) and seeking periodic solutions that stay strictly inside (−1, 1). As long as you can do that, you have homogenization; when it fails, you have trapping. The conditions, for us, came up in trying to construct such solutions, which have to do with the corrector: in this case the corrector looks like that, so this is what we had to do. And I already mentioned the interpretation from Barles, Cesaroni, and Novaga. Okay, now a couple more results and I will finish. This is a conditional result; it tells you one class of problems where you can homogenize. Not much is known in this generality. There is another type of result, by Dirr, Karali, and Yip: v can change sign, but u^ε is graph-like. If for some reason you are lucky and u^ε starts as a graph and remains a graph — which is a big if, because this has to do with regularity — then there is homogenization. Notice that condition (★) does not appear there; there is no condition like that, although of course the assumption forces something on v. How does this go? If the problem is graph-like — one of the speakers mentioned this — once you have the graph PDE, you are in the uniformly elliptic setting: if you can get some kind of Lipschitz bound, the problem is uniformly elliptic, and then you can gain regularity. So if you have some bound, maybe not a very good one, the problem is uniformly elliptic, and therefore you can use elliptic regularity to improve the bounds. You need some bound to start. They were able to do this, and they managed to get the bounds under this assumption. And now the question is: when is the solution graph-like? It turns out that the solution is graph-like if v is very small.
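A sketch of the periodic-solution criterion, hedged heavily: the reduced ODE is the one written above, but the shooting strategy and the parameters (a = 2 and constant forcing v ≡ 1, chosen so the answer can be checked by hand) are mine, purely for illustration. A 1-periodic solution of y′ = −a√(1 − y²) + v(x) inside (−1, 1) is a fixed point of the time-one Poincaré map, which is monotone in the initial condition, so bisection finds it; for constant v the fixed point is the stable equilibrium y* = −√(1 − (v/a)²):

```python
import math

def poincare(y0, a, v, dt=1e-3):
    # integrate y' = -a*sqrt(1-y^2) + v(x) over one period [0, 1] by forward Euler
    y = y0
    for k in range(int(1.0 / dt)):
        y += dt * (-a * math.sqrt(max(0.0, 1.0 - y * y)) + v(k * dt))
    return y

def periodic_solution(a, v, lo=-0.99, hi=0.0):
    # shooting: bisect on P(y0) - y0, which changes sign across a fixed point
    g = lambda y0: poincare(y0, a, v) - y0
    glo, ghi = g(lo), g(hi)
    if glo * ghi > 0:
        return None                      # no sign change: no periodic solution found
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0:
            hi = mid
        else:
            lo, glo = mid, gm
    return 0.5 * (lo + hi)

# constant forcing v = 1 and a = 2: the periodic solution is the equilibrium -sqrt(3)/2
y_star = periodic_solution(2.0, lambda x: 1.0)
```

With mean-zero forcing and a > 0 the map drifts strictly downward (integrate the equation over one period), so no such periodic solution exists — consistent with the trapping regime above.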
So if you have a very small normal velocity v and you start with a graph, you stay a graph. The final result of Dirr, Karali, and Yip is: for graph-like initial data and very small v, there is homogenization. But what happens in general is completely open, and everything I am saying here is periodic — nothing random. The random case is a little related to the problem I described earlier. Okay, I have three minutes to go to the other case, which is the parabolic scaling. If I do that and look at my general problem, the mean curvature term appears at order one and the forcing acquires a factor of one over epsilon — and let us make life simple. "Nothing" is really known about that; I put nothing in quotes because something is known, but to write it down I would have to say other things, and you would start throwing things at me if I go over time. At the limit you would expect — formally I can tell you what the answer is, but it would take half the blackboard to write it, and there are many issues — anisotropic mean curvature flow. That is the answer you expect if you just write it down and say you can do it. The presence of the one over epsilon requires — for those who know homogenization — finding two correctors. The first corrector is like a minimal surface with velocity v. Finding it amounts to solving, at the level of the PDE, a cell problem of the form: minus the mean curvature operator evaluated at Dw + p, plus a(y)|Dw + p|, equals zero — that is, finding a periodic solution w of this problem. Or, writing w̄(y) = w(y) + p·y and saying geometrically what it means: you want to find — not a minimal surface, what do you call it — a surface of prescribed mean curvature, with mean curvature equal to one, say. And you want this thing to stay bounded.
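Schematically — my hedged transcription of the cell problem just described — one seeks, for each direction p, a 1-periodic w with

```latex
-\,\mathrm{tr}\!\Big(\big(I-\hat q\otimes\hat q\big)\,D^2w\Big) \;+\; a(y)\,|q| \;=\; 0,
\qquad q := Dw + p,\quad \hat q := \frac{q}{|q|},
```

so that, writing w̄(y) = w(y) + p·y, the level sets of w̄ are surfaces of prescribed mean curvature staying at bounded distance from the planes {p·y = const}.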
And that, in some sense, is what Caffarelli and de la Llave did with plane-like minimal surfaces in periodic media — but they did it for divergence-form equations, and of course for graphs. That is what they have done; now one has to translate it to this setting. And then you go to the next level. This first step somehow says that the problem does not have, in quotes, a ballistic behavior — that nothing runs off to infinity, that the one over epsilon does not take over. Okay, and then you need to build a second auxiliary function; call it z. It solves some problem, and to find the effective velocity you have to apply some kind of Fredholm alternative to something — but that is not the issue. The issue is that this something, the equation for z, sees derivatives of w, and there is no way in hell that you have enough regularity to write those derivatives down. Because it is the second corrector. In other words, I am making an expansion like ū + ε w(x/ε) + ε² z(x/ε) + higher-order terms, and finding the w and the z. The equation for w is the one that is going to kill the one over epsilon; the second equation, for z, involves derivatives of w. Conceptually it is what I did for the mean curvature asymptotics, if you think about it — this was my q, and this was my p — when I did the fronts. Okay, I do not want to confuse you more. Almost nothing is known beyond that. I do not think there is a clear analogue of the Caffarelli–de la Llave result in the random case — I forget, there is someone in Italy who has perhaps done it, but I am not 100% sure. I mean, through Gamma-convergence it is very easy to pass to the limit and find a candidate for the solution; the difficulty is the regularity. Okay.
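The two-scale expansion being described, written out (again my own hedged transcription):

```latex
u^\varepsilon(x,t) \;\approx\; \bar u(x,t) \;+\; \varepsilon\, w\!\left(\tfrac{x}{\varepsilon}\right) \;+\; \varepsilon^2\, z\!\left(\tfrac{x}{\varepsilon}\right) \;+\; \cdots
```

The equation for w absorbs the 1/ε term; the solvability (Fredholm-type) condition for the z-equation is what should produce the effective velocity, but the right-hand side of that equation involves second derivatives of w — exactly the regularity one does not have.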
And so the result in the random case should be: there is a solution to that problem which asymptotically tends to planes. What Caffarelli and de la Llave did — sorry, what their proof did not give — is an exact solution to that problem. They found an almost solution which is bounded: a corrector that, when you subtract the linear part, is bounded — an approximate corrector that asymptotically stays in a slab. To do the random case you have to do something like that, and then it becomes a mess. Okay, so that is a little bit for the experts. I think I have told you — I am sure I forget things — everything that came to mind about this. There are other problems, but I do not want to throw more at you; if you want to work on this, these are enough problems to keep you busy. So thank you very much for your attendance at all five lectures. And if you ever need anything about this, just drop me an email, and I will try to answer you. Okay, so thank you.