I need to introduce you to a little bit of the calculus of variations. The ideas are very simple, but sometimes they will require from your side a little stretch of the imagination. I will not be precise; I can't, otherwise it would take hours. All right. So this object here is a functional: it takes a function u and returns a number. If you prefer, you can take the function to be that one there. For the moment, notice that the object depends on a parameter epsilon, say positive. The first term is just the derivative of u; there is a stranger second term, the potential W, and we'll talk about what it means later. But first, I want to tell you a funny story. We are in 1755, and the mathematical community has discovered, not that long before, a new type of problem having to do with the minimization of functionals. You fix the parameter, and you want to find a minimizer of this expression: the function u that makes the quantity inside as small as possible. Problems of this kind were introduced in the second half of the 17th century, but almost a hundred years later Euler was still working on formalizing the field. So we are in 1755. What is on the board is a model term, but really I mean that you want to minimize that kind of integral. And Euler gets an envelope from Lagrange. Lagrange is 19 years old; he knew that Euler had been struggling with this for some time, and he tells him, essentially: you have no idea, I know how to do this. The approach Euler was taking was very much geometric, actually. I don't know much about the details, but he had been struggling for some years. All right. So Lagrange gives a very simple idea. He says: well, you have this thing, and it is very complicated.
But if you were looking for minima or maxima, that is, critical points, of a function of one variable, what you would do is perturb the variable: you would take the limit of the ratio (f(x + h) - f(x))/h. You also have the notion of directional derivative. So why not consider the same idea here? Lagrange says: look at the perturbed energy, take the derivative in the parameter t, set it equal to 0, and see what happens. This is essentially a revolution, because it turns a very complicated problem into a calculus problem. So let's see what happens in our case. We get the integral over Omega of epsilon squared over 2 times (u_x + t phi_x) squared, plus W(u + t phi). Just plug in and distribute the derivative. What we want to do, as I was saying, is take the derivative of this in t and set t equal to 0. Now it's clear why I put the 2 in the denominator: it cancels with the square. So from the first term you get epsilon squared u_x phi_x; that is the explicit form. For the second term nothing special is needed: the only thing we want is that the function W is differentiable, and the chain rule gives W'(u) phi. Setting the derivative at t equal to 0 equal to zero, we obtain the integral over Omega of epsilon squared u_x phi_x plus W'(u) phi, equal to 0. This is fun, right? Okay, so now we can integrate by parts: we have all the regularity we want on phi, and, if you like, on u as well. Pushing one derivative off phi produces a minus sign, and you get the integral over Omega of (minus epsilon squared u_xx plus W'(u)) times phi, equal to 0. We haven't really done much here except substitution, differentiation, and integration by parts. But now, look at this: we have an integral which is equal to 0.
And this has to be true for any function phi; you can repeat the argument for all phi in C_c^infinity(Omega), that is, smooth functions compactly supported in Omega. This tells you that the factor multiplying phi must vanish: minus epsilon squared u_xx plus W'(u) equals 0. The equation I just wrote is called the Euler-Lagrange equation; you will see it again later in my slides. And now we can define an evolution associated with that equation. What am I thinking about? If you've seen PDEs, you're familiar with, say, the Laplacian and the heat equation, which is the evolution associated with the Laplacian. If you're not, then think of the following scenario: u is a function of two variables, space and time, and you can think of the stationary equation as a snapshot of your solution at a fixed time. There is a concept, which I'm going to define in a very, very imprecise way, called a gradient flow. This is a very loose definition: you consider u as a function of x and t, and you set u_t equal to minus the left-hand side we had before. I'm not going into details here; it would be a complicated story. I just want you to stretch your imagination a little and think of the following fact: in this context, if you look at what happens at each time step, you are looking at this equation. So we define this evolution, and it turns out to be u_t = epsilon squared u_xx minus W'(u). What I want to do now is go from this equation back to an expression containing the energy. There are different viewpoints on this matter. Another viewpoint would be: I am handed a model, this one here. Can I somehow recover a structure of the type of the functional I defined earlier, and can I get information about the solutions of this equation through the new way of writing it? And once again, you do something very simple.
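Since the board is not visible here, let me record the computation just described in symbols. This is a reconstruction from the discussion; the exact normalization of W may differ from what is written on the board.

```latex
% Energy and its first variation (reconstructed from the discussion):
E_\varepsilon(u) \;=\; \int_\Omega \frac{\varepsilon^2}{2}\,u_x^2 + W(u)\,dx,
\qquad
\left.\frac{d}{dt}\right|_{t=0} E_\varepsilon(u + t\varphi)
  \;=\; \int_\Omega \varepsilon^2\,u_x\varphi_x + W'(u)\,\varphi\,dx .
% Integrate by parts (phi compactly supported, so no boundary terms)
% and require the result to vanish for every test function:
\int_\Omega \bigl(-\varepsilon^2 u_{xx} + W'(u)\bigr)\,\varphi\,dx = 0
\quad \forall\,\varphi \in C_c^\infty(\Omega)
\;\;\Longrightarrow\;\;
-\varepsilon^2 u_{xx} + W'(u) = 0 .
```

The last implication is the fundamental lemma of the calculus of variations: if an integral against every compactly supported smooth phi vanishes, the other factor must be zero.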
So let me write the time derivative as u_t. What you do is multiply both sides by u_t and then integrate, and you start playing with integration by parts. Let's see what happens, so you can follow. Multiply the equation, call it (*), by u_t and integrate over space: the integral over Omega of u_t squared equals the integral over Omega of (epsilon squared u_xx minus W'(u)) times u_t. Now play the same game as before: we want to integrate the first term by parts. I always get the signs messed up, so let me be careful: integrating by parts you get a minus sign, the integral over Omega of minus epsilon squared u_x u_xt, where I am pushing one of the derivatives onto the second factor. You have no phi here to kill the boundary term, but you can think of this equation as coming with zero boundary conditions: say, u is zero on the boundary. So if Omega is an interval, u is just zero at the endpoints, and the boundary terms never show up. Now let's start looking at what we have. What is epsilon squared u_x u_xt? It is nothing but the time derivative of epsilon squared u_x squared divided by 2: take the derivative, the 2 cancels, and the chain rule gives u_xt. And what is W'(u) u_t? It is the time derivative of the potential W(u), again by the chain rule. So we can bring the d/dt outside the integrals, and then integrate in time on both sides. Okay, let me erase this and rewrite the potential term as minus d/dt of the integral over Omega of W(u) dx, with the minus sign brought outside. And I forgot the epsilon squared on the gradient term; there it is. But now, if you remember, what sits inside the time derivative is exactly the energy I wrote before. Yes? [Student: Sorry, what are the boundary conditions here? Should they be Neumann boundary conditions?]
Sure, you can consider Neumann boundary conditions as well; let's see. The boundary term from the integration by parts is epsilon squared u_x u_t at the boundary, so it vanishes either when u is fixed there in time or when u_x vanishes there. The thing is, I'm considering Omega to be a bounded interval, so in 1d there is no normal vector to speak of; the condition just lives at the two endpoints. Anyway: you have a time derivative there, and the quantity being differentiated is the energy associated with u. There is just one little difference: before, I started with functions of space only, and now the functions also depend on time. So, to be precise, the energy is now evaluated along a time-dependent function; all your functions are functions of x and t, and I will just write it like that. But now you see what's going on: on the right-hand side you have the derivative of the energy. So let's integrate in time, say over the interval from 0 to a final time capital T. What you get is: the integral from 0 to T of the integral over Omega of u_t squared, dx dt, equals E_epsilon(u) at time 0 minus E_epsilon(u) at time T, because you are integrating a time derivative. This expression is really very important; it is crucial to my research, because it tells you something about the evolution of solutions. What is the evolution of solutions? Well, you take u and you watch how it changes in time.
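Condensed into one chain, the computation above reads as follows. This is my summary; the boundary terms vanish under either the Dirichlet or the Neumann conditions just discussed.

```latex
\int_\Omega u_t^2\,dx
  \;=\; \int_\Omega \bigl(\varepsilon^2 u_{xx} - W'(u)\bigr)\,u_t\,dx
  \;=\; -\frac{d}{dt}\int_\Omega \frac{\varepsilon^2}{2}\,u_x^2 + W(u)\,dx
  \;=\; -\frac{d}{dt}\,E_\varepsilon\bigl(u(\cdot,t)\bigr),
% and integrating in time over [0, T]:
\int_0^T\!\!\int_\Omega u_t^2\,dx\,dt
  \;=\; E_\varepsilon\bigl(u(\cdot,0)\bigr) - E_\varepsilon\bigl(u(\cdot,T)\bigr).
```

The first line says the flow dissipates the energy; the second, integrated form is the identity used below to control how fast solutions can move.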
But this identity is telling you: if for some reason the difference of those two energies is very, very small, then this integral of u_t squared will be very, very small, even when the time capital T is big. You are integrating over a very big interval and still getting something very small, so the integrand has to be small in an averaged sense. That is something very, very important. Now let's stop here with Euler-Lagrange equations and the associated objects; I want to write down the grand scheme of things and see if I can make sense of it. I will be using some other tools, introduced in 1975, which essentially allow you to say something about minimizers of these energies. You look for critical points, you look for minima of functionals like this one, with the parameter in them, and so on. So you will essentially have a family of these functionals, one for each epsilon. A natural question to ask is: what happens if I send epsilon to zero? Or, more precisely: can I introduce a framework in which it makes sense to send epsilon to zero, and in which I can make sense of some sort of convergence of the E_epsilon to something else? And perhaps that something else has critical points satisfying another equation. So there is this grand scheme of things behind it. What we've done so far is look at the Euler-Lagrange equation: critical points satisfy minus epsilon squared u_xx plus W'(u) equals 0. The function u depends on epsilon. I haven't stressed it, but of course, if you solve an equation containing a parameter, the solution will depend on that parameter; so, if you want, add an epsilon subscript, u_epsilon, just to remember. And then I said there is a connection between that equation and the gradient flow. What I told you before is that we would like to be able to say something about these u_epsilon as epsilon tends to zero.
Now, we're not going into the details of anything, but I want you to trust me here: there exists a framework in which you can make sense of a convergence between objects like that, and this convergence brings with it the convergence of the minimizers. As soon as you define it, you have some properties to check; if everything goes smoothly and E_epsilon converges to E_0, you can say something about the sequence of minimizers. And there's a connection here, right? Because as you move along your sequence, essentially every function satisfies the corresponding Euler-Lagrange equation, so they're all connected in that sense. So essentially, let me write it like this: similarly to before, critical points of the limit satisfy some equation. I should say they have some property, but let's say they satisfy some equation, with an associated formula. And what I was saying is that there is a connection: u_epsilon converges to u_0 as epsilon tends to zero, let me write it that way just to make things precise. And this is done in the context of this new type of convergence between families of functionals. So once again, the take-home message is: I have a sequence of complicated things. Suppose I can find minimizers for these complicated things; next, I send epsilon to zero and I want to bring the minimizers with me all the way to the very end. And the very end may be very complicated: it may be something that doesn't have an integral structure, as we may see later. But you still have some properties to study. How many minutes do I have? Probably a little more than 22. So now I'm going to switch to slides. [Student question.] That's a very good question. It is a very hot topic right now; it's called Gamma-convergence of gradient flows.
And there are results for some gradient flows, not all of them; this is something a lot of people are working on. It was introduced around the mid-2000s, more or less. If you're interested, you should go get the paper by Sandier and Serfaty. Kevin is asking why there should be convergence at the level of the gradient flows, essentially at the level of the evolutions; there are some results, and that's where you can find them. So let's see. Projector, yes. Should I close the shades? What do you guys think? At the beginning I was thinking of giving the same presentation as my defense, but I realized it was too much of a stretch, so this is a pseudo-defense. Can you guys see? Should I turn off these lights here? Okay. In red you see material I'm not going to cover today. You see that the Allen-Cahn equation is actually what I called a gradient flow here; I will remind you of that. Essentially, my thesis is made up of two papers: one with Ryan, one with Greg. Then there is some additional work, part of which is in my thesis: one paper with Marco Perocha, who is a postdoc at the moment, and one by myself. I will not touch on those. Today I'm just going to talk about the paper with Ryan, which is related to what I've been showing you. All right, so let's see how all these things fit together. If you want to model situations like this, which are both phase separations, you can do it with an energy, and we'll see what that means. Phase separation is something of this type: on the left you have two fluids which are mixed together, in this case ethanol and gasoline, and for some reason they decide to separate.
The separation here occurs because, when the mixed fluid somehow comes into contact with water, the ethanol absorbs the water and separates from the gasoline. So if in your car's tank you have a mixture of those fluids and for some reason the tank freezes, your engine will not like it much, because at times it receives pure ethanol. Okay, so we want to model this transition from the mixed state to the separated state. What we can do is this: we consider Omega to be the container inside which we have our mixture of fluids. The fluid inside has some total mass; call it M. And this class here is all the admissible distributions of your fluid: there are regions where you have just one of them, regions where you have a mixture in some percentage, and so on and so forth. And the integral, of course, has to equal M, because we said that the fluid inside the container has mass M. From the physical viewpoint, you know that in physics one calls stable those configurations that have minimal energy: once you reach a state where the energy is minimal, you don't want to move anymore. So we introduce an energy, and the energy is an integral: the integral of a double-well potential W, exactly the function I showed you before and wrote on the board. Why is that the right function? Because it has two wells, and each of the wells corresponds to a fluid. So you can say: when I'm in one well, say, in a region where the function has value 1, that region corresponds to fluid one; if the value is minus 1, it corresponds to the second fluid; if I'm in between, it's a mixture. So this is a reasonable choice for the potential, and then, as I said before, minimizers of this energy correspond to stable configurations. These minimizers are called minimizers in the sense of Gibbs, and it's easy to find functions u that actually minimize the energy.
The function W is greater than or equal to zero, because (t squared minus 1) squared is nonnegative; there it is, right? So if we can make the energy equal to zero, then we know that's the lowest we can go. In particular, consider this function here: sometimes it's 1, sometimes it's minus 1. Plug it in: W vanishes at both 1 and minus 1, so the integral is always going to be zero. The reason I define u with the characteristic function of a set is that I need to satisfy the mass constraint. For those of you who are not familiar with characteristic functions: on a set E, the characteristic function is 1 if you're in E and 0 if you're outside. So we're taking a function that is equal to 1 on the set E and equal to minus 1 on the complement. And the measure of E has to be exactly the value that makes the integral of u equal to M: you can see it easily, just plug u into the integral and compute it out, and if the volume of E is equal to that value, you get M. All right, so what's next?
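As a quick sanity check, here is a minimal numerical sketch of the point just made. This is my own illustration, not from the talk: a plus-or-minus-1 step function built from a characteristic function has zero potential energy, and the mass constraint pins down the volume of E as |E| = (M + |Omega|)/2. The potential W(t) = (t^2 - 1)^2 and the container Omega = (0, 1) are assumptions made for concreteness.

```python
import numpy as np

def W(t):
    # Double-well potential with wells at t = +1 and t = -1
    # (one common normalization; the board may use another).
    return (t**2 - 1)**2

# Container Omega = (0, 1), uniformly discretized.
n = 10_000
x = np.arange(n) / n          # grid points, cell width 1/n
dx = 1.0 / n

M = 0.4                       # prescribed mass, must lie in (-1, 1)
vol_E = (M + 1.0) / 2.0       # |E| = (M + |Omega|)/2, since |Omega| = 1

# u = +1 on E = (0, vol_E), u = -1 on the complement.
u = np.where(x < vol_E, 1.0, -1.0)

potential_energy = np.sum(W(u)) * dx   # W(+-1) = 0, so this is exactly 0
mass = np.sum(u) * dx                  # recovers the prescribed M

print(potential_energy, mass)
```

The point of the computation is exactly the non-uniqueness discussed next: any set E with the right volume, however wild its shape, gives a zero-energy configuration.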
Next comes the functional that I wrote before, and it comes in for the following reason. Minimizers of the previous problem look like what I just drew, but you can have plenty of minimizers like that: there is no way you're going to have uniqueness. It's just a matter of drawing weird shapes; they will all do it. So in physics one often considers a different model, which sits in the van der Waals-Cahn-Hilliard theory of phase separations and phase transitions, and the energy is this one here. Again, the goal is the same as before: we want to look at functions that minimize that energy. So let's see what we can do with that. Well, now this connects to what I was saying before about Gamma-convergence. In 1983 Gurtin, who was actually a mathematician here in this department, conjectured something about solutions of (P_epsilon) versus solutions of (P_0). He said that minimizers of (P_epsilon) should converge to minimizers of (P_0); but also, he said, if this happens, the limit will have a special property. What he conjectured is that if you take u of the same form as before, somewhere 1 and elsewhere minus 1, then something should happen to the surface area of the interface. Let me try to explain. He's saying: okay, take Omega, and look at the set E where your function is equal to 1. Then the interface, the surface separating the regions, should have some properties; it cannot be just anything. He said this interface should be the shortest path separating Omega into the two regions, among all the functions that can take only the values plus and minus 1. So essentially he's telling you this: you can have sets like this one, 1 and minus 1, 1 and minus 1, but you will not recover them as a limit. What you recover as a limit is this one, because it's a straight line and
minimizes the distance between the two points. The conjecture is exactly that: somehow the energy, when epsilon is small, looks like epsilon times the perimeter of the interface. And the way to see it, remember, earlier I told you that we wanted to find a framework in which minimizers converge to minimizers: this is telling you that if you divide everything by epsilon, a minimizer of E_epsilon will converge to a minimizer of the perimeter between the two phases. So this is the connection between the two things. I don't know how he came up with these conjectures; in fact, they are all true. And this is what it looks like: on the left you have an arbitrary minimizer of (P_0), 1 and minus 1 anywhere; on the right you have what you recover as a limit, with a specific structure. This was proved in a variety of settings, but all relying on Gamma-convergence, and Gamma-convergence was introduced by De Giorgi in 1975. We usually write it like that, with a Gamma on top of the arrow, and the functional that you recover in the limit is a perimeter functional: again, the length of the curve separating the phases. And again, this says that minimizers here converge to minimizers there; that is to say, the interface in the limit minimizes the perimeter. All right: P is the perimeter, and c_0 is a constant; it doesn't matter what it is, and you can compute it explicitly if you want. Does this make sense? Okay. Back to the equation that I showed you before, the one obtained by playing around with the energy, integrating by parts, and so on and so forth, and then introducing the gradient flow. The only difference is that I now write the Laplacian for the second derivative: in n dimensions you just sum up the second derivatives, and you can think of it as a second derivative. This was exactly the equation that I found by essentially just integrating by parts.
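In symbols, the Gamma-convergence statement sketched a moment ago is the Modica-Mortola theorem. This is my transcription; the constant c_0 depends on how W is normalized, so take the exact formula with a grain of salt.

```latex
\frac{1}{\varepsilon}\,E_\varepsilon(u)
  \;=\; \int_\Omega \frac{\varepsilon}{2}\,|\nabla u|^2 + \frac{W(u)}{\varepsilon}\,dx
  \;\xrightarrow{\;\Gamma\;}\;
E_0(u) \;=\;
\begin{cases}
  c_0\,P(E;\Omega), & u = \chi_E - \chi_{\Omega\setminus E},\ \ \displaystyle\int_\Omega u\,dx = M,\\[4pt]
  +\infty, & \text{otherwise},
\end{cases}
% with the explicitly computable constant
c_0 \;=\; \int_{-1}^{1} \sqrt{2\,W(s)}\,ds .
```

So the limit functional is finite only on sharp-interface configurations, where it measures the perimeter of the interface; this is the precise sense in which minimizers of the perimeter are recovered in the limit.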
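Since the gradient-flow equation is back on the table, here is a small numerical sketch of it in 1d. This is my own code, not the speaker's: explicit Euler for u_t = eps^2 u_xx - W'(u) with W(u) = (u^2 - 1)^2 and homogeneous Neumann conditions, all assumed for concreteness. Starting from a smooth datum with a zero, the solution quickly sharpens into plateaus near plus and minus 1 joined by a thin layer, and the energy decreases along the flow.

```python
import numpy as np

def energy(u, eps, dx):
    # E_eps(u) = \int eps^2/2 * u_x^2 + W(u) dx, with W(u) = (u^2 - 1)^2
    ux = np.diff(u) / dx
    return (eps**2 / 2) * np.sum(ux**2) * dx + np.sum((u**2 - 1) ** 2) * dx

def step(u, eps, dt, dx):
    # One explicit-Euler step of u_t = eps^2 u_xx - W'(u),
    # with W'(u) = 4u(u^2 - 1); Neumann BC via reflected ghost cells.
    up = np.concatenate(([u[1]], u, [u[-2]]))
    uxx = (up[2:] - 2 * up[1:-1] + up[:-2]) / dx**2
    return u + dt * (eps**2 * uxx - 4 * u * (u**2 - 1))

eps, n = 0.05, 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / eps**2            # well inside the explicit stability limit

u = 0.5 * np.sin(2 * np.pi * x)      # rough initial datum, zero at x = 1/2
E_start = energy(u, eps, dx)
for _ in range(20_000):
    u = step(u, eps, dt, dx)
E_end = energy(u, eps, dx)

print(E_start, E_end)                # the energy has decreased
print(np.max(np.abs(u)))             # plateaus sit near +-1
```

Once the layer has formed, running the flow much longer barely moves the zero at x = 1/2: that is the slow-motion phenomenon discussed below.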
This equation was actually introduced as a model for studying certain crystalline structures by Allen and Cahn at the end of the seventies. What they say is the following: if you have a crystalline structure of this type, here you have silicon and some other component, then in some cases you can observe the following phenomenon: inside this yellow triangle, the positions of the atoms are swapped if you compare them to the ones outside. So there are these regions where, instead of white square, white square, white square, you have a black dot in place of a white square. They swap, and Allen and Cahn were observing how these regions evolve inside the configuration; they essentially introduced this model to study that evolution. This part we can skip, because it's something I won't talk about; but, as I said before, the connection between the two things is that u_t = epsilon squared Delta u minus W'(u) is the gradient flow of that energy: again, take the energy, integrate by parts, and so forth. Okay, all the rest in this slide we don't need, and we can ask the question that has been the core of my research. The first question is the following: if you take an initial datum for your partial differential equation, how does the solution evolve? Okay, so you fix epsilon; suppose epsilon is small. You start with an initial configuration of this type. Now the question is: how is it going to look as time evolves? Is it all going to evolve at the same speed? It's an interesting question, and in fact the answer is no, it does not all evolve at the same speed, and there are different phases that one can consider. The first phase is the one in which you start with an initial configuration that has some zeros, and what happens is that the function itself wants to stay as close as possible
to plus and minus 1, and it develops these transition segments which are almost vertical. Why does it do that? Think about it: epsilon is very small, and, morally speaking, if epsilon were zero, we would be seeking to minimize the perimeter. That is to say, we want to be at plus or minus 1 most of the time, and we want to somehow minimize the set on which we are neither 1 nor minus 1; the way we transition between minus 1 and 1 has to be the cleanest possible. Roughly speaking, this is what the solution wants to do. Another way to think about it: essentially, what we are saying now is that minimizers of this energy more or less correspond to solutions. If you think about minimizers here, you realize that you want to stay as close as possible to plus or minus 1, because there the potential term is zero. And any time you have a derivative, you want to confine that derivative to a small interval; the derivative can even be big, because you can play with the epsilon squared in front and still keep that term very small. If you go from minus 1 to 1 in an interval of size epsilon, with a derivative of size 1 over epsilon, then epsilon squared times u_x squared is of order 1 on a set of measure epsilon, so the contribution is still very small. Okay, so this is what happens first. What happens second is the slow phase. In 1989... it seems that all this stuff was born at CMU; well, Lagrange wasn't, but okay. Bob Pego is a genius; he's done so many things, it's unbelievable. In 1989 he was studying this problem, and he said: if you start from a configuration of that type, what's going to happen next is that the zeros will move toward each other, and they will do it very, very slowly; we will talk about that later. And then, as Xinfu Chen described, what happens next is that when two zeros are very, very close, they will merge: actually, they are going to disappear. Because what you want to do is
to transition between minus 1 and 1 as few times as possible. So if two zeros get very, very close, what happens is this: you were going between 1 and minus 1, the zeros get close, and you kill both zeros. Now, as I was saying before, what I wanted to study was the slow motion. So once again, you take the developed configuration, the one that has those almost straight segments, and you start from there. [Student: How do you determine which two of these zeros will merge?] Okay, that's a good question. It depends on the distance: the two that start at the closest distance will move toward each other first. [Student: When two zeros merge, the region where the function is minus 1 disappears. Doesn't that break the mass constraint?] That's a good question. Yes: in order to preserve the mass constraint, the rest of the solution will adjust. But actually, wait a second, because there is something to say here: you can observe the same phenomenon even without a mass constraint. That is to say, stop thinking about the energy and just take the PDE: the Allen-Cahn equation by itself does not preserve the mass. So if you just take the Allen-Cahn equation, as Pego was doing, then you don't care about the mass constraint. If you do have a mass constraint, then yes, you have to adjust, and something else happens. Okay, so what I was saying is that we start from this scenario, and the viewpoint that is of interest to me is the one associated with the energy. So now we know that there is a relation between the energy and this PDE. But the way Pego and Carr proved their result was not based on this relation, and I'll tell you in a second what they did. The year after, Bronsard and Kohn proved more or
less the same result, but exploiting the fact that there is a connection between that energy and the PDE. In particular, they somewhat proved that those red points move very, very slowly, and the speed is of the order of epsilon to the k, where k is an integer. Okay, that is too many symbols, so let me explain. The result of Bronsard and Kohn is really based on a good control of the energy. They say the following: consider a function which is almost like a jump function; so v there is a jump function, taking the values 1 and minus 1, and your function is close to it. Then, they say, you can plug this function into your energy, and what you find is something interesting: you can bound the energy from below by a specific quantity. Now, here there are some technicalities, but remember: roughly speaking, when epsilon goes to zero you get a perimeter, and in one dimension the perimeter is just the number of jumps; the interfaces are jumps, so you just count them. This was the Gamma-limit: this is what happens to the energy when epsilon goes to zero. And these extra terms are telling you the way you're converging to E_0: there is a convergence of E_epsilon to E_0, and these results are telling you how fast you're converging. So if you want, think of it in the following way: we have a way of making E_epsilon converge to E_0, and we want to understand how fast. The reason I showed you this here is that it is crucial. The result I just quoted gives us a lower bound on those energies; that is to say, an upper bound on the quantity with the minus sign in front. So if you have good control on that, you
have good control on the right-hand side. Okay, so there is just one small difference between this and that, which is the factor of epsilon to the minus 1. This is something I probably went through too fast: the energy they consider is what I wrote before, divided by epsilon. We're just dividing everything by epsilon, and it doesn't really matter: Gurtin told us that if you want to converge to something meaningful, you need to divide by epsilon; it was part of the conjecture. So now the key is, once again, the following: you want to have control on this quantity and on that one. The first you assume; the second you prove. Once you have both, this integral is small; yes, it's small because epsilon is small. And then, for times up to a large capital T, your u_t is going to be small in an averaged sense. Roughly speaking, what you're saying is: my time derivative is small, so my function is not moving much in time. We call this a slow-motion solution. And this is the way you write it mathematically, but there is one thing that I want to stress, which is the following: you know how big the interval is in which you don't move much. This is the result the way we prove it: if epsilon is very small, this quantity is very small. So the solution says: I started not far from the jump function over there, and I'm still there. But there is a little bit of an extra result here, because you stay close over a big time interval: for all times t in this interval, which has size of the order of 1 over epsilon to the k. All right, so a little bit of history here. I told you that Bob Pego and Jack Carr proved this, but actually their result was much stronger, and they proved it the year before. Their method is really based on PDEs and ODEs, and they can say that the interval in which
nothing moves is actually exponentially big. So you replace this with something exponentially big depending on epsilon. Much better. And actually they know exactly how big it is; they have the sharp result. Of course that wasn't enough: they also know exactly how the points move. They have an ODE that says the points, the zeros of the function, move according to this ODE. So from their result, I mean, you know everything about it. OK, so in 1995 Grant adopts the same technique that Bronsard and Kohn introduced, to partially recover the result of Carr and Pego, in the sense that he can show that the interval in which you're not moving is exponentially big again, but he doesn't know exactly what the constants there are; he knows that it's of that order, but not exactly. And then, OK, there are some related results, but the only one I want to point out is the one by Bellettini, Nayam, and Novaga in 2015: essentially, for the first time they prove the result of Carr and Pego by using an energy method, and they do have the sharp constants. All right, OK, so it's almost an hour; I don't even know where to start from. OK, so now you can imagine: with this equation in dimension n bigger than 1, it's much more complicated, just because in dimension equal to 1 there's not really much going on from the geometric viewpoint: you jump across a point, and that's it. If you go to higher dimension, you're going to start jumping across things that do have a positive dimension. So you may start asking yourself: does that matter? So, what am I going to say here? Let's see. OK, let me just take another ten minutes maybe; an hour is probably too much. I mean, I could go on, but it's going to get so abstract at some point, and I'd like to try to keep it a little bit less abstract. Anyway, in dimension bigger than 1 the scenario is the following one: you have a container omega with the same double-well potential.
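To fix ideas, here is a hedged sketch of the standing objects in standard Allen–Cahn notation; the exact normalization on the slides may differ, and the choice of W below is just the usual model example.

```latex
% Rescaled energy (after the division by epsilon mentioned above),
% on a container \Omega \subset \mathbb{R}^n:
E_\varepsilon[u] \;=\; \int_\Omega \Bigl( \frac{\varepsilon}{2}\,|\nabla u|^2
      \;+\; \frac{W(u)}{\varepsilon} \Bigr)\,dx,
\qquad W(s) = \tfrac14\,(1-s^2)^2 \ \text{(a double-well potential)}.

% Its L^2 gradient flow, the n-dimensional Allen--Cahn equation
% (the precise time scaling used in the talk may differ):
u_t \;=\; \varepsilon\,\Delta u \;-\; \frac{1}{\varepsilon}\,W'(u)
\qquad \text{in } \Omega .
```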
OK, in order to somehow keep track of what's going on with this geometric issue, I'm going to introduce this function here. This function does the following: you feed the function a number that corresponds to a volume, the function goes to look inside your omega for all the sets with that volume, and gives you back the smallest perimeter among all those sets. So it does the following thing: you are inside omega, you consider all the sets with a given mass, say this guy here, this guy there, and so on and so forth, and then you start looking at the perimeters, so the perimeters are this or that. You can find as many as you want, but your function there is going to give you the minimum of all of them. Of course you can ask yourself: what is the regularity of this function? Is it continuous? Is it differentiable? You can ask questions like that, which would be interesting, but let's see how this plays out in this context. So first of all, this slide here looks very theoretical, but in the end what it is, it's just the same kind of equation written in n dimensions. There is just one small modification, the lambda there, which relates to the question that Giovanni asked, because what we did with Ryan is we used this version with the Lagrange multiplier, so this version preserves the mass. But it's not really important for what I want to tell you. What is important is that in 2015, actually less than two years ago, Giovanni Leoni and Ryan proved a result that looks very similar to the result of Bronsard and Kohn. If you remember, Bronsard and Kohn said: we can control the energy associated to a given function from below with some quantity; the first quantity was the so-called Gamma limit, the second quantity is something that you find. So the setting is essentially the same: you have your higher-dimensional setting, you have a function v that is somewhat your higher-dimensional jump function, you have a mass constraint here because of the new equation that we are
studying. But they also needed to require regularity of that function there. So there is a technical point, which I'm not going to be able to explain, but somehow you need to have an understanding of this function in order to get this result, and this is the place in which the geometry of the domain comes into play. Now, what is E0? E0 here is a volume-constrained global perimeter minimizer. What does that mean? It means that you fix the volume, and among all the sets inside omega with that volume, E minimizes the perimeter. So it's once again that picture there. Why am I saying global perimeter minimizer? Well, because you may be minimizing the perimeter among all the sets in omega, or just in a neighborhood. So an example may be this one, just to give an idea of the difference between global perimeter minimizers and what not. Think of your omega as a box now, and say you fix the volume of your sets, say m, and you want to find, among all the sets with volume m, the one that minimizes the perimeter, the one that has the smallest perimeter among all the sets with that given volume. So what you can do is take a ball, or actually you can do something better: inside omega, you tuck this thing into a corner. I don't know if you've seen isoperimetric problems anywhere, but the isoperimetric problem is the one that tells you that, say in R2, on the plane, among all the sets with a given volume, the one that minimizes the perimeter is the disk. So this is telling you that if you essentially flip this thing four times you get a disk, and that's actually a way of proving it. Anyway, this is global in the sense that any other set you can take in here is going to have, say, the same mass but bigger perimeter. But you could say: OK, take a neighborhood here, and if you just look inside here, the one that minimizes the perimeter is a box; you cannot touch the boundary, you cannot cheat, and you forget about everything you have outside. So this is a global perimeter minimizer,
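As a toy numerical check of this corner picture (my own illustration, not from the talk; it assumes the usual convention that only the part of the boundary lying in the interior of the box counts as relative perimeter):

```python
import math

def relative_perimeter(mass: float, fraction: float) -> float:
    """Relative perimeter, inside the box, of a disk sector of area `mass`.

    `fraction` is the portion of a full disk used:
      1.0  -> full disk floating in the interior (whole circle counts)
      0.5  -> half disk sitting against a wall (only the arc counts)
      0.25 -> quarter disk tucked into a corner (only the arc counts)
    """
    # sector area = fraction * pi * r**2  =>  solve for the radius r
    r = math.sqrt(mass / (fraction * math.pi))
    # the free boundary is the circular arc, of length fraction * 2*pi*r
    return fraction * 2.0 * math.pi * r

m = 0.05  # small mass, so every candidate fits inside a unit box
full = relative_perimeter(m, 1.0)      # closed form: 2*sqrt(pi*m)
half = relative_perimeter(m, 0.5)      # closed form: sqrt(2*pi*m)
quarter = relative_perimeter(m, 0.25)  # closed form: sqrt(pi*m)

# the quarter disk in the corner wins:
print(quarter < half < full)  # True
```

So for the same mass, moving the bubble from the interior to a wall and then to a corner halves the relative perimeter, which is exactly why the corner configuration is the global minimizer in the box.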
this is a global one. All right, so in this case we have global perimeter minimizers, and we have that our function is jumping across one of those, so you can think of something like this: your function is one here, minus one there. You can show something about this function here: you can prove that if you subtract off a parabola, what is left is well behaved, and that gives you the differentiability. Really, what I would like to tell you, in the last probably five to ten minutes, is what Ryan and I did; we also proved something about the other part, but what we've done is really to localize this function, to include situations like that. So the first function was telling you: take the min, let me just write it as a min of the perimeters. This was the isoperimetric function, I equal to that. But if you consider this minimization problem instead, what you end up having is this. So our question is: how do I get that? Now, this has connections to applications, because as you've seen, you need to require a lot of regularity of that function in order to prove the energy lower bound; if you have the energy lower bound, you plug it in inside here, and you get something about the evolution of solutions. So the procedure is: I write this, I go look for a bound; the bound requires regularity of that function; and actually the very definition requires you to understand something about configurations, that is to say, in this case, only configurations like this, right? But if you want to jump across these functions here, then you need to do something else, because your minimization problem will have to reflect that. So what we've done is we've introduced a constraint here. Let me just follow the board rather than the slides, in the sense that I'm going to try to keep it simple. So here we're really saying: we're adding a closeness condition of your set E to some configuration, to some reference configuration.
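In symbols, a hedged sketch of the two minimization problems just described; the names I, E_0, delta, and the choice of the symmetric difference to measure closeness, are my own notation and may differ from what is on the slides.

```latex
% Isoperimetric function: feed it a mass, get back the least perimeter.
I_\Omega(m) \;=\; \min\bigl\{\, P(E;\Omega) \;:\; E \subset \Omega,\ |E| = m \,\bigr\}.

% Localized version: competitors must also stay close to a fixed
% reference configuration E_0, with closeness parameter \delta:
I_{\Omega,E_0,\delta}(m) \;=\; \min\bigl\{\, P(E;\Omega) \;:\;
      E \subset \Omega,\ |E| = m,\ \ |E \,\triangle\, E_0| \le \delta \,\bigr\}.
```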
We want to be close, in the sense of that distance. Your reference configuration is something that we pick, so we say: fix it, and go look for all the minimizers of this problem, but they need to be, well, not as close as possible, but close to E0 in the sense of that distance. OK, now you have a, well, not completely new, but a new minimization problem, and you want to understand what this function looks like as a functional. Because if you have regularity, once again you have the energy lower bound; if you have the energy lower bound you plug it in; if you plug it in you get an estimate on u_t. So there is just one thing that I want to show you, and then I stop. There are some cases in which you can say exactly what the function is going to look like, so let me tell you why. What is the idea? The idea is that you have your domain, you fix your reference configuration; in this case your reference configuration will be this one, and you go look for all the sets which are close, in the sense of the distance constraint, and minimize among those. Now, the closeness condition really depends on the volume, right? I haven't really spent a lot of time on that, but the closeness condition is something that is given on the volumes: if the volumes are close, we consider the sets to be close. But we say the following: we claim that it is not possible to have something like this, because if you do this, you pay a lot in perimeter. So this cannot be a minimizer; what we say is that this cannot happen, the tentacle has to stop somewhere. But then, once it stops, there are very good regularity results, like the ones I told you before. Now you're not touching the boundary, and what you can say is that you can just take a ball with the same mass: instead of this guy, you replace it with your favorite ball, and then the isoperimetric inequality is an equality: the
perimeter of E inside omega becomes the mass of E, raised to the power dimension minus one over dimension, times a constant. So we know exactly how it depends on the mass. So I think I stop here; I think it's the right time. Any questions?

Your slide says something about dimension less than 7; is there something interesting about that, or is it just technical?

So for our purposes it's technical, in the sense that the following happens. When you look at minimizers of the perimeter, a question that you may ask is how smooth the surfaces are. If you're here, this guy will have a C-infinity surface. Now, what happens is that if you go to dimension 8 or above, your surfaces start having singularities, and that of course complicates things, because at some point we use information about how smooth the surfaces are, and we don't want to get into messy details. Now, honestly I don't know how hard it would be to generalize, because there are a couple of results that rely on that, and we would really have to spend a lot of time trying to understand. But that's a very good question, thank you. That is a result by Federer; I don't know if you guys know the geometric measure theory book, super hard to read.

So in the one-dimensional case, do the zeros collapse eventually?

They do; it's just that it takes a long time. You can think of epsilon being small, and then these guys would move at a very, very slow speed, so they would collapse in time; it's just that they do it in a very slow fashion.

Actually, when you say slow, what is that slowness relative to?

It's slow because the time derivative is small, because in the end that is what matters, right? So it's really in the sense of the speed, the time derivative of the function.

Compared to epsilon?

Compared to epsilon, well, yes, in the sense that this is the thought process:
you fix epsilon first, and then you look at the time derivative. So there is an interpretation.

I haven't seen this kind of contained isoperimetric inequality before; it's quite interesting. When you take a small local thing, if it's strictly inside the container then you're just using the normal isoperimetric inequality, but globally, if you just want to cut off a corner, you can still sort of obtain that if your local thing was centered around the corner. So what would happen if you continuously moved the place where you're considering the local minimum, from a corner to, say, the middle of a face? How does the geometry change?

That is a good question. So you're asking, essentially, how the function depends on the constraint, how the function depends on the reference set, which is fixed, as you move it. That's a very good question. What I can tell you is the following. Say you take your reference set to be that one; now it's a matter of playing with delta, it's a matter of deciding how close you want the competitors to be, because if you want them to be very, very close, then you're still going to have a ball. So we can play with that. But I think that maybe what you're thinking about is: you fix the closeness and then you start moving the reference set. Potentially, yeah, there are many things that one can do. One possible thing I can think of is: if you fix the closeness and you start moving your reference configuration, then the question is, when would the minimizer jump from a ball to that guy? That's a good question, and we haven't investigated it, because for our purposes we fix the reference set. But yeah, it's interesting: that would mean we would see the function
not just as a function of the mass, but as a function of the reference configuration as well. And it is a little more technical than today, but the first part is