Okay, we have two minutes left, so let's start. For today's analysis seminar it's a pleasure for me to introduce Miguel Urbano, who will talk about Lipschitz learning and the infinity Laplacian. Thank you.

Thank you, Christian. Let me start by thanking Christian and Emmanuel for the invitation to visit. It's been a wonderful week here, in a place that reminds me a lot of the institute of which I have very fond memories; I visited it for the first time twenty years ago, and this is my first time here since. So thanks very much for the week and for the invitation to deliver this talk.

I've been working for some years on singular and degenerate PDEs, and finding applications of these equations in different areas. Today I want to talk about an application in machine learning that I recently came across and that I think is quite interesting. I want to say a little bit about what the Lipschitz learning problem is, then talk about the infinity Laplace equation and some of its properties, and show how we can use hard analysis to solve very contemporary problems in machine learning.

A big problem in machine learning is the problem of labelling data. When you deal with large amounts of data, in several situations you need to label it. For example, think about medical images: if you have large amounts of medical images, you want to know whether each image corresponds to a situation of concern, or to a healthy scenario in which you don't have to do anything. Of course, if you have to pay a medical doctor to go and see all the images and label them as healthy or unhealthy, this is going to be very, very expensive. So if you can in some way use just a few labelled data, and then somehow exploit the structure of the unlabelled data to learn a function that extends the labels to the unlabelled data,
then you have a great advantage. Another example is the classification of website visits. Companies want to know which advertisements to target each user with. For example, say I'm currently interested in buying a bicycle, so I'm visiting a lot of websites that sell bikes or carry information about bicycles. If there is an automatic way of knowing which sites I am visiting, then a company can target me with advertising corresponding to my current interests. Again, if they have to pay someone to do this labelling by hand, it's going to be very expensive.

So the idea, and this is the object of so-called semi-supervised learning, is to use very few labelled data, and then exploit the structure of the much larger set of unlabelled data to somehow extend the information from the labelled points to the unlabelled ones. Of course, stated the way I just described it, the problem is extremely ill-posed, because there are many ways of doing this extension. So, in order to hope for some sort of uniqueness, you have to introduce what's called a smoothness assumption: you assume that the labels change smoothly across dense regions of the graph. Indeed, what people do is give the data set the structure of a graph: each data point becomes a vertex, and edges correspond to some measure of proximity between the data points.

This smoothness assumption is implemented in practice by minimizing a certain quantity, or, in the continuous case, a certain functional. Here is just a picture of the idea: the data set with the structure of a graph. The first attempt is to use so-called Laplacian regularization. What you want to do is this: you are given a function g on Γ, the set of labelled data that you know, which plays the role of the boundary of some set.
You want to find u, which coincides with g on Γ but is defined on a much larger set X; you want to extend g to X by minimizing this form. Here you have the square of the difference, and the weight measures the similarity between the points: typically, if x and y are close the weight is roughly one, and if they are far apart it is basically zero. If you pass to the continuum limit, what you are basically doing is minimizing the Dirichlet energy, the L² norm of the gradient, and we know that minimizers of this functional are harmonic functions: they solve Laplace's equation.

So this is one way of trying to solve the problem. What people realized is that when you have very few labelled data, so very little information on the boundary, this minimization leads to a sort of degenerate solution: the solution becomes essentially constant, a sort of average of the labels, and the labels are attained in a discontinuous fashion; the function jumps to attain the boundary data. If you are familiar with the language of Sobolev spaces, what's happening is that the minimizer lives in H¹, the Sobolev space W^{1,2}; and since we normally deal with functions on spaces of very high dimension, functions in W^{1,2} will not be continuous up to the boundary. They will be discontinuous, and that's why we observe this phenomenon.

So people started thinking of using a much larger power p in the minimization functional instead of two. If you put a very large p here, then in the minimization process you are discouraging the choice of functions that change a lot: if this difference is large, the difference to the power p is going to be a huge number, so you're not going to choose that function.
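As a concrete aside, the Laplacian regularization just described, minimizing the weighted sum of squared differences subject to the labels, can be sketched in a few lines (an illustrative implementation, not from the talk; the function name and the toy path graph are mine):

```python
import numpy as np

def laplacian_learning(W, labels):
    """Minimize sum_ij w_ij (u_i - u_j)^2 subject to u_i = g_i on labelled nodes.

    W      : (n, n) symmetric weight matrix of the similarity graph.
    labels : dict {vertex index: label value} for the few labelled points.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian D - W
    labelled = sorted(labels)
    free = [i for i in range(n) if i not in labels]
    u = np.zeros(n)
    u[labelled] = [labels[i] for i in labelled]
    # The minimizer is harmonic on the unlabelled nodes: (L u)_free = 0.
    u[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, labelled)] @ u[labelled])
    return u

# A path graph 0-1-2-3-4 with only the endpoints labelled: the discrete
# harmonic extension interpolates linearly between the two labels.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(laplacian_learning(W, {0: 0.0, 4: 1.0}))   # ~ [0, 0.25, 0.5, 0.75, 1]
```

With very few labels in high dimension this harmonic extension is exactly what degenerates: away from the labelled points it flattens toward an average, which is the phenomenon the talk describes.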
So you are imposing much better smoothness. Now, if p is very large, in particular larger than the dimension, then in the continuum scenario you are in fact minimizing the L^p norm of the gradient. Your minimizers will be solutions of the so-called p-Laplace equation, and they live in the space W^{1,p}. If p is larger than the dimension, then we know, by the Sobolev–Morrey embedding, that functions in W^{1,p} are Hölder continuous up to the boundary, and so the boundary data are attained in a continuous fashion. And this is what's observed in practice: the graph of the solution is a much smoother surface, and the boundary data are attained in a smoother way.

Now, for dimensions that are increasingly higher I have to take p larger and larger, so why not take p equal to infinity? Then p is greater than the dimension for any dimension. This is exactly what happens in the context of so-called Lipschitz learning: instead of the L^p norm of the gradient, you minimize the L^∞ norm of the gradient, so in the discrete functional you replace the sum by a maximum. In the continuum case you are minimizing the L^∞ norm of the gradient, which we can say is the Lipschitz constant of the function, because for convex domains these two things, the L^∞ norm of the gradient and the Lipschitz constant, are the same. Your minimizers will then be solutions of the so-called infinity Laplace equation; I will say in a while what this is.

So the Lipschitz learning problem is this: you are given a function which is Lipschitz on Γ, and you can think of this Γ as the boundary of a domain Ω. What you want to do is extend this Lipschitz function, defined on the boundary, to the interior of the domain, minimizing the Lipschitz constant.
Since of course you cannot decrease the Lipschitz constant, the best you can do is keep it the same. So this is the Lipschitz learning problem in analytical terms: you are given a function on the boundary of a set, your labelled data, and you want to extend it to the interior of the set, to the much larger set of unlabelled data, without increasing the Lipschitz constant. This is the Lipschitz extension problem, which has been well known in analysis for more than a hundred years. Can we solve it? Perhaps we can; perhaps we're going to solve it in the next ten minutes. But before we do that, let me show you what happens in the case of Lipschitz learning: you get indeed a much smoother surface, and the boundary data are attained in a much smoother fashion.

And what is the infinity Laplace equation? It is this highly degenerate nonlinear operator in non-divergence form, a combination of first-order and second-order derivatives, which we call the infinity Laplacian. You would expect it to appear as some sort of limit as p goes to infinity in the p-Laplace equation, and that's indeed the case. Let me show you how this works in a heuristic way. The p-Laplace equation is div(|∇u|^{p−2}∇u) = 0. If p is two, of course, this factor is one and you just get the divergence of the gradient, which is the Laplacian: you get Laplace's equation. Now let's take the divergence inside. We get |∇u|^{p−2} times the divergence of the gradient, which is the Laplacian of u, plus the inner product of the gradient of |∇u|^{p−2} with ∇u. When we take the gradient of |∇u|^{p−2}, the exponent p − 2 comes down, the power drops to p − 3, and we still have to take the gradient of |∇u|, which is D²u ∇u divided by |∇u|. Putting things together, we get

|∇u|^{p−2} Δu + (p − 2)|∇u|^{p−4} ⟨D²u ∇u, ∇u⟩ = 0.

Now I divide by |∇u|^{p−4}: the power p − 2 over the power p − 4 leaves |∇u|². Then I divide everything by p, and I get

(|∇u|²/p) Δu + (1 − 2/p) ⟨D²u ∇u, ∇u⟩ = 0.

When p goes to infinity, the first term disappears, the factor 1 − 2/p goes to one, and I am left with ⟨D²u ∇u, ∇u⟩ = 0, which is in fact the infinity Laplace equation, Δ∞u = 0. Written properly, it is the Hessian applied to the gradient, in inner product with the gradient. That's the way it works: the infinity Laplacian is obtained from the p-Laplacian in the limit as p goes to infinity. Of course, this has to be properly justified, and for that you need the language of viscosity solutions, because that's the way to interpret, in the right way, what a solution of the infinity Laplace equation is. You see that the equation is not in divergence form, so we cannot integrate by parts after multiplying the equation by a test function; to understand this equation, we have to use the language of viscosity solutions. I'll do that in a little while.

But now I want to go back to the Lipschitz extension problem. Let's fix some notation. We're going to say that u is Lipschitz on the set X, writing u ∈ Lip(X), and we're going to give a name to the smallest of the Lipschitz constants.
It is the least constant L for which the Lipschitz condition is satisfied, and we denote it by Lip(u, X). So what is our problem? Given a function F, Lipschitz on the boundary of the domain, we want to find a function u, Lipschitz on the closure of the domain, such that u is an extension of F: they coincide on the boundary, and the Lipschitz constant of u on the closure of U is exactly the Lipschitz constant of F on the boundary. You extend the function and you keep the Lipschitz constant the same; you do not increase it. This is the Lipschitz extension problem, and it is what you want to do when you do Lipschitz learning.

Let's look at an example, because examples are always instructive, and a very simple one. Let's take our domain U to be the union of the two intervals (−1, 0) and (0, 1). The boundary of U is in this case a set of three points: −1, 0 and 1. And let's take F on the boundary to be the function with F(−1) = 0, F(0) = 0 and F(1) = 1. So we have the interval from −1 to 1; here the value is zero, here it is zero, and here it is one. Of course this F is Lipschitz. What's the Lipschitz constant of F on the boundary of U? You just have to compute all the difference quotients, and you immediately see that the Lipschitz constant is going to be one: it's |F(1) − F(0)| divided by |1 − 0|, which is the largest one. If you compare F(1) with F(−1) you get (1 − 0)/2, which is one half, and with the other two points you get zero. So the largest quotient is one, and that's the Lipschitz constant. What we want to do is find a function defined on the closure that goes through these three points,
and whose derivative does not exceed one: the slope cannot be larger than the Lipschitz constant. So we have to connect these three points without increasing the slope, which is one. This is not a very difficult problem; everyone can do it. But if I ask you to do it, different people will do different things, because there is more than one solution: the problem is not uniquely solvable.

Now, let's not try to go from the example to the general case; let's think about the general problem. Take an arbitrary point z on the boundary of U, and construct the function F(z) − Lip(F, ∂U)|x − z|. This function of x, for z fixed on the boundary, is always less or equal than u whenever u is a solution of the problem. Why? Just bring u to the left-hand side and the modulus term to the right-hand side: the inequality is equivalent to u(z) − u(x) ≤ Lip(F, ∂U)|x − z|. Indeed, since z is on the boundary, F(z) coincides with u(z) there; and if u is a solution of the problem, the Lipschitz constant does not increase, so Lip(F, ∂U) is exactly Lip(u) on the closure of U. And now the inequality is obviously true, because the difference is less or equal than its modulus, and this is the very definition of being Lipschitz: if u is Lipschitz, then |u(z) − u(x)| is less or equal than the Lipschitz constant times |x − z|. So this holds always, whichever point z I choose on the boundary. And notice that a constant minus a constant times |x − z| is a cone function.
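To make the difference quotients of the example concrete, here is a throwaway check (the helper name is mine):

```python
from itertools import combinations

def lip_constant(data):
    """Smallest Lipschitz constant of values prescribed at 1-D points.

    data : dict {point: value}.
    """
    return max(abs(data[x] - data[y]) / abs(x - y)
               for x, y in combinations(data, 2))

# The boundary data of the example: F(-1) = 0, F(0) = 0, F(1) = 1.
F = {-1.0: 0.0, 0.0: 0.0, 1.0: 1.0}
print(lip_constant(F))   # 1.0, attained by |F(1) - F(0)| / |1 - 0|
```

The three quotients are 0, 1/2 and 1, so Lip(F, ∂U) = 1, as computed in the talk.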
This cone is of course Lipschitz, and Lip(F, ∂U) is a Lipschitz constant for it. Notice that this Lipschitz constant does not depend on z: as z varies on the boundary, I get a family of Lipschitz functions with the same Lipschitz constant. And if I have a family of functions with the same Lipschitz constant, then the supremum of the family is a Lipschitz function with that same constant. So I can take the supremum, over z in the boundary of U, of all these cones, and this is going to be a Lipschitz function on the closure of U whose Lipschitz constant did not increase. If this supremum coincides with F on the boundary, then we have solved the problem, because it will be a solution of the Lipschitz extension problem.

And in fact this supremum does coincide with F on the boundary. Why? On one hand, for x on the boundary, each cone F(z) − Lip(F, ∂U)|x − z| is less or equal than F(x): this is again just the definition of being Lipschitz; you do exactly as before, bring one term to the other side, and since both points are on the boundary, the quotient is controlled by Lip(F, ∂U). So if x is on the boundary, all these cones are below F(x), and the supremum is less or equal than F(x). But it is also greater or equal than F(x), because for x on the boundary I can take z equal to x in the supremum: the modulus term vanishes and I just get F(x). So on the boundary, this function, which we call MW⁻_F(x), the supremum of the cones,

coincides with F: it is an extension of F to the interior with the same Lipschitz constant as F on the boundary. So this is in fact a solution of the Lipschitz extension problem: it's what we call the lower McShane–Whitney extension. And of course, if I take a plus sign instead and consider the infimum of the cones F(z) + Lip(F, ∂U)|x − z|, I get another solution, the upper McShane–Whitney extension of F. Any other solution of the problem must lie between these two McShane–Whitney extensions.

Now, the thing is that these two rarely coincide, and so the problem is not uniquely solvable. Let's see what this is in our example. We must take the supremum of, in this case, three cones, since the boundary has three points. First, z = −1: we get F(−1), which is zero, minus the Lipschitz constant, which is one, times |x − (−1)|, so we get −|x + 1|. Then, z = 0: we get F(0), which is zero, minus |x − 0|, so we get −|x|. And the third one, z = 1: we get F(1) − |x − 1|, which is 1 − |x − 1|. Let's plot them: −|x + 1| is this cone, −|x| is this cone, and 1 − |x − 1| is a cone facing down with vertex at one. Now we take the sup of these three. You don't care what happens to the left of −1 or to the right of 1; the sup, on [−1, 1], is this function here.
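The two McShane–Whitney extensions of the example can be evaluated directly (illustrative code; names mine):

```python
def mcshane_whitney(bdry, L, x):
    """Lower and upper McShane-Whitney extensions of 1-D boundary data.

    bdry : dict {boundary point z: value F(z)};  L : Lip(F, boundary).
    """
    lower = max(v - L * abs(x - z) for z, v in bdry.items())  # sup of cones
    upper = min(v + L * abs(x - z) for z, v in bdry.items())  # inf of cones
    return lower, upper

# The example from the talk: F(-1) = 0, F(0) = 0, F(1) = 1, Lip(F) = 1.
F = {-1.0: 0.0, 0.0: 0.0, 1.0: 1.0}
print(mcshane_whitney(F, 1.0, 0.5))    # (0.5, 0.5): the two agree on [0, 1]
print(mcshane_whitney(F, 1.0, -0.5))   # (-0.5, 0.5): a gap on (-1, 0)
```

On [0, 1] the lower and upper extensions agree, so every solution is the identity there; on (−1, 0) they leave a gap, which is exactly the non-uniqueness discussed next.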
That's the lower McShane–Whitney extension. You see, we have indeed connected the three points, and we didn't increase the Lipschitz constant, because the modulus of the slope is one here and one here. The upper McShane–Whitney extension, if you think about it, is going to be this other function. So every solution must coincide with the identity here, on [0, 1]; there's no other possibility. But here, on [−1, 0], solutions can be anything between these two cones. So there are many solutions; another solution would be this one, for instance.

So we can solve the Lipschitz extension problem, but the solution is not what we want, because we have no uniqueness, and this is not good: the problem is still extremely ill-posed. We only have uniqueness when the two McShane–Whitney extensions coincide, which, as we've seen in a simple example, may well not happen. That's what I wrote in the next slide: the McShane–Whitney extensions are these functions we just defined; they solve the Lipschitz extension problem, and any other solution must lie between them. So in fact we solved the problem, but we don't have uniqueness, and this is not good.

So how can we find a way to solve this problem and have some sort of uniqueness? We have to ask for something else. And what we're going to ask is that the extension property holds also locally. If you are minimizing the L² norm of the gradient in a domain and you find a minimizer, then if you go to a domain which is strictly inside, your solution will still be a minimizer in that subdomain. So when you minimize the L² norm, or the L^p norm, of the gradient, your minimizers have this kind of locality property: inside a subdomain, they are still minimizers of the problem posed there. But here this doesn't happen.
Think about this solution, for example, the upper McShane–Whitney extension. It is a solution of the problem in the full domain. But suppose I consider a subdomain V of the original domain where the values of the function at the two boundary points coincide. Then the Lipschitz constant on the boundary of V is zero, because you just have two points with the same value. But when you go inside V, the Lipschitz constant is one. So your solution in the domain U, when restricted to the subdomain V, is no longer a solution of the problem posed in V: going from the boundary of V to its interior, the Lipschitz constant increases. Locality fails.

So maybe the key to obtaining uniqueness is to impose this notion of locality: to ask that the solution of the Lipschitz extension problem satisfies the locality property. And you see, in our example, the only extension among all the possible ones that satisfies this property locally is the one that probably you thought about when I posed the problem: you connect this point to this point with derivative zero, and then you go up. For this one, if you go inside any subdomain, the corresponding problem is solved by the same function. And this one is in fact unique.

So the way to solve the problem is to ask not just for a Lipschitz extension, but for what we call an absolutely minimizing Lipschitz extension, which we abbreviate AMLE. An absolutely minimizing Lipschitz extension u is a continuous function that solves the problem at every scale: for every V compactly contained in U, when you go from the boundary of V to its interior, the Lipschitz constant does not increase.
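The failure of locality just described can be checked numerically for the example (a small illustrative script; the subdomain is my choice):

```python
def upper_mw(x):
    """Upper McShane-Whitney extension of the example data (Lip constant 1)."""
    bdry = {-1.0: 0.0, 0.0: 0.0, 1.0: 1.0}
    return min(v + abs(x - z) for z, v in bdry.items())

# Sub-domain V = (-0.9, -0.1): the two boundary values agree, so the
# Lipschitz constant of the data on the boundary of V is zero ...
print(upper_mw(-0.9), upper_mw(-0.1))   # both ~ 0.1
# ... yet inside V the extension climbs to 0.5, so Lip(u, V) = 1, not 0:
# the restriction to V no longer solves the extension problem posed in V.
print(upper_mw(-0.5))                   # 0.5
```

This is exactly the obstruction that the notion of absolutely minimizing Lipschitz extension is designed to remove.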
And we can recast our original problem in the following form: given F Lipschitz on the boundary, we want to find a continuous function u on the closure of U that extends F, and which is absolutely minimizing Lipschitz. So it is still an extension, but it not only solves the Lipschitz extension problem, it solves it at every scale: for every subset V of U, the Lipschitz constant does not increase in passing from the boundary of V to its interior. This is the right way to solve the Lipschitz extension problem, and the right way to approach Lipschitz learning.

Now, magic happens. The solution of this problem, the extension which is absolutely minimizing Lipschitz, is precisely the viscosity solution, the unique viscosity solution, of the infinity Laplace equation with this boundary data. Solving the Lipschitz extension problem in the right way is in fact the same thing as solving the PDE in the viscosity sense with this boundary data: the notions are equivalent. There is another, intermediate notion, called comparison with cones, that is equivalent to both of these things: to being absolutely minimizing Lipschitz, and to being the viscosity solution of the infinity Laplace equation.

The cones are the functions that appeared before: a cone is a function of the form C(x) = a + b|x − x₀|, where x₀ is the vertex of the cone, a is its height and b is its opening. And we say that a function enjoys comparison with cones from above if the following happens: for every V strictly contained in U and every cone whose vertex is not in V, if u is below the cone on the boundary of V, then it is below the cone in V.
So this is comparison with cones from above, and of course comparison with cones from below is exactly what you expect: if the function is above the cone on the boundary of V, then it is above the cone also in V. And a function enjoys comparison with cones if it enjoys comparison with cones both from above and from below. These are dual definitions that will be connected to the notions of subsolution and supersolution: subsolutions enjoy comparison with cones from above, and supersolutions enjoy comparison with cones from below.

What we can show is that enjoying comparison with cones and being absolutely minimizing Lipschitz are the same thing: a continuous function is absolutely minimizing Lipschitz if and only if it enjoys comparison with cones. Let me just show you the sufficiency of the condition, that if u enjoys comparison with cones then u is absolutely minimizing Lipschitz, because I think it's a very nice proof.

So suppose u enjoys comparison with cones, and take any V strictly contained in U. What we want to show is that the Lipschitz constant of u does not increase when we go from the boundary of V to its interior. Since V is compactly contained in U and u is continuous in U, u is continuous up to the boundary of V, and it is an exercise to show that the Lipschitz constant of u in V coincides with the Lipschitz constant of u on the closure of V. Since the boundary is contained in the closure, we certainly have Lip(u, V) ≥ Lip(u, ∂V); what we need to prove is the reverse inequality. So let's first observe that, for any x in V, the following equality holds.
Consider the set V from which we remove the point x; its boundary is ∂V plus the point x. Let's show that adding this point of V to the boundary does not change the Lipschitz constant:

Lip(u, ∂V ∪ {x}) = Lip(u, ∂V), for every x in V.

To see this, the only thing we have to prove is that |u(y) − u(x)| ≤ Lip(u, ∂V)|y − x| for every y on the boundary: I just added the point x, so if I compare it with every point of the boundary and this constant works, then I'm done, because for all the other pairs of points the constant works already. Removing the modulus, this is equivalent to two inequalities:

u(x) ≤ u(y) + Lip(u, ∂V)|x − y| and u(x) ≥ u(y) − Lip(u, ∂V)|x − y|.

If x were on the boundary, these would hold by the very definition of the Lipschitz constant, exactly as we did before. So let's check them for x in V, not on the boundary; let's focus on the first one, the reasoning for the second being the same. Look at the right-hand side. As a function of x, with y fixed, it is a constant plus a constant times |x − y|: this is exactly a cone, with vertex at y, and y is a point on the boundary of V,

hence not in V. And u enjoys comparison with cones, so the inequality holds in V because it holds on the boundary of V: we have a cone whose vertex is not in V, and u is below the cone on the boundary, because there the inequality is just the Lipschitz condition, so by comparison with cones u is below the cone in V. That's how comparison with cones is used to prove this, and to obtain the other inequality the reasoning is the same.

Now take two points x and y in V and use this property twice. The Lipschitz constant of u on ∂V equals that on ∂V ∪ {x}; but ∂V ∪ {x} is the boundary of V with the point x removed, so applying the same observation to that set we can also add the point y. Since x and y belong to ∂V ∪ {x, y}, we get precisely

|u(x) − u(y)| ≤ Lip(u, ∂V)|x − y|,

so Lip(u, V) ≤ Lip(u, ∂V), and we are done. This is how comparison with cones is used to prove that u is absolutely minimizing Lipschitz. The converse is a bit more challenging, but it can also be proved.

And now let's talk about viscosity solutions. The right way to interpret the infinity Laplace equation is to introduce the notion of viscosity solution of the PDE, for those of you less familiar with this terminology. The idea is this: the PDE involves derivatives of first and second order, but we would like to allow solutions that may have no derivatives at all; a solution of the infinity Laplace equation could be merely a continuous function.
So what you do is ask some kind of test function to satisfy something involving the derivatives, and the idea is this. A viscosity subsolution of the equation is a continuous function u such that, whenever a C² function φ touches u from above at a point x̂ of U, meaning u − φ has a local maximum at x̂, so u is less or equal than φ nearby, then the touching function must satisfy, not the PDE, but the corresponding inequality at that point. Since φ is C², we can compute its infinity Laplacian classically, because all the derivatives now exist in the classical sense: we compute that combination of derivatives of φ at the point x̂, and it must be greater or equal than zero. You have a viscosity subsolution if this happens at every point and for every C² function that touches u from above there. A supersolution is the same thing: you touch from below, and you ask the touching C² function to satisfy the opposite inequality, again in the classical sense. A viscosity solution is both.

Of course, we can define any notion of solution that we like; we are free to do that, and taken to the extreme we could simply decree what the solutions are. But the idea here is that you extend the notion of solution: you look for solutions in a much larger class, large enough that you can hopefully prove existence, and the balance is whether you can still prove uniqueness. And the first thing you have to check, if you introduce a notion like this, is that it is consistent. If I have a solution which is C², it should be a solution in the usual sense: a C² function that satisfies the PDE pointwise should be a viscosity solution, and a viscosity solution, if it is C², must satisfy the equation in the pointwise sense. The notion must be consistent with what you expect.
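In standard notation, the definitions just described read as follows (a sketch; the sign convention matches the form Δ∞u = ⟨D²u ∇u, ∇u⟩ = 0 used earlier):

```latex
% Viscosity subsolution: u \in C(U) is a subsolution of
% \Delta_\infty u = 0 if, whenever \varphi \in C^2 touches u from above
% at \hat{x} \in U (i.e. u - \varphi has a local maximum at \hat{x}), then
\Delta_\infty \varphi(\hat x)
  = \big\langle D^2\varphi(\hat x)\,\nabla\varphi(\hat x),
                \nabla\varphi(\hat x) \big\rangle \;\ge\; 0 .
% Supersolution: \varphi touching from below forces
% \Delta_\infty \varphi(\hat x) \le 0; a viscosity solution is both.
```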
And this is indeed the case. A C^2 function u is infinity harmonic if and only if it solves the equation in the pointwise sense: you take the derivatives, and this combination of them vanishes. These two facts I am not going to prove, because there are other things I want to get to, but they are of course true. And we are going to have viscosity solutions of the infinity Laplace equation which are not C^2; in fact, the notion of viscosity solution admits a much larger class of functions. It can also be shown that a function is infinity subharmonic if and only if it enjoys comparison with cones from above, and that it is infinity superharmonic if and only if it enjoys comparison with cones from below. So these three notions are the same: being absolutely minimizing Lipschitz is equivalent to enjoying comparison with cones, and that is the same thing as being infinity harmonic. These three things are equivalent. Finally, let me say a few words about regularity for the infinity Laplace equation. There is not much that we know, surprisingly; in fact, about the only thing we can show easily is that solutions are Lipschitz. Let's start with the Harnack inequality. We have this: if u is continuous and satisfies this property, then it satisfies a Harnack inequality. Now, this property is easily satisfied by infinity harmonic functions. Why? You see that the right-hand side is a function of x: a constant plus another constant (the maximum over the sphere minus u(y)) times |x - y| / r. So it is a cone with vertex at the point y. And you see that on the boundary of the ball this condition is easily satisfied.
If x belongs to the boundary, then the distance from x to y is exactly r, so these terms cancel: the r cancels with |x - y|, the u(y) cancels with the u(y), and you are left with u(x) less than or equal to the maximum of u on the boundary of the ball, which is of course true for x on the boundary. So the condition holds trivially there. Since the right-hand side is a cone, you might object: the vertex of the cone is y, and y belongs to the set, so you are not allowed to do that. But then we remove the center of the ball: we remove y. You can remove it because for x = y the inequality is trivially satisfied; you just get u(y) less than or equal to u(y). So you remove the center, and you have a cone whose vertex is not in the set. Then, since infinity harmonic functions enjoy comparison with cones, if the inequality holds on the boundary it holds inside. So this property is indeed satisfied: an infinity harmonic function satisfies it, and then, by the result, you get a Harnack inequality. Now, the inequality looks a bit weird, because we have a supremum less than or equal to one third of an infimum; but that is because u is negative, so of course it makes sense. The proof is very simple, but maybe I'll just skip it. From this it follows that if u is infinity harmonic, then it is locally Lipschitz, and therefore differentiable almost everywhere by Rademacher's theorem. Again, the proof is an application of the Harnack inequality; I think I'll also skip it. It is a very simple proof. So this comes almost for free: infinity harmonic functions are differentiable almost everywhere, because they are Lipschitz functions.
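The cone-comparison property invoked here can be stated as follows (a standard formulation; the notation is mine, not verbatim from the talk):

```latex
% Comparison with cones from above on the punctured ball B_r(y) \setminus \{y\}:
% for u infinity harmonic and \overline{B_r(y)} \subset \Omega,
\[
  u(x) \;\le\; u(y) \;+\;
  \Bigl( \max_{\partial B_r(y)} u \;-\; u(y) \Bigr)\, \frac{|x - y|}{r},
  \qquad x \in \overline{B_r(y)}.
\]
% The right-hand side is a cone with vertex y; equality holds at x = y and on
% \partial B_r(y), so comparison with cones on B_r(y) \setminus \{y\} yields
% the inequality in the interior.
```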
Now, what have we done in the last thirty years? The best result to date improves this to differentiability everywhere. This is a result due to Evans and Smart: we know that infinity harmonic functions are differentiable everywhere. So it is a little jump, from almost everywhere to everywhere. But we do not even know whether solutions are of class C^1: they are differentiable everywhere, but we do not know whether the gradient is continuous. The conjecture is that they are C^{1,α}; as I said, we do not even know that they are C^1. We know more in the plane. In the plane, solutions are not only C^1, which is a result by Savin; we know that they are C^{1,α}. So in two dimensions, which in the context of Lipschitz learning is admittedly not the interesting case, we know that solutions are C^{1,α}. But even in the plane, there is a conjecture that the optimal α is one third, that solutions should be C^{1,1/3}, and this too is open. Why one third? Because the function x^{4/3} - y^{4/3} in the plane, u(x, y) equal to this, which is clearly C^{1,1/3} and not better, is a viscosity solution of the infinity Laplace equation. It is a very nice exercise to try to prove that this solves the PDE in the viscosity sense. So this example sets the limit to what you can obtain: in the plane you cannot go beyond C^{1,1/3}, but it is still an open problem to show that every infinity harmonic function in the plane is of class C^{1,1/3}. With this, I think I'll stop. Thank you very much, and I'm happy to take questions.

Very nice talk. Any questions?

Thanks for the very nice talk. Can you hear me well? Yes. I was just curious: at the end of your talk you mentioned this counterexample in the plane.
You cannot do better than C^{1,1/3}, because you have this example, but you do not know whether you can get there. I wonder whether there are also counterexamples in higher dimensions. For each dimension, do you have a conjecture for the threshold α?

You can construct an example in higher dimensions from this one just by adding dummy variables, so you cannot go beyond C^{1,1/3} in higher dimensions either. But I do not know of an example that is genuinely dimension-sensitive, one that would set a different limit in higher dimensions. I think the conjecture is that the regularity is C^{1,1/3} also in higher dimensions. There are not many known explicit solutions of the equation: for example, the arctangent of x over y is also an infinity harmonic function in the viscosity sense, and then of course you have the family of C^2 solutions that satisfy the equation classically. But I think the conjecture, even in higher dimensions, is C^{1,1/3}.

Have you looked at applications, for example to something like the obstacle problem? Can you think of an application where this C^1 regularity would matter?

Is there some application of this? Well, not a clear application that I can point to, but people have of course studied the obstacle problem for the infinity Laplacian; there is work on the obstacle problem for this equation. And actually, in that case, at least for a zero obstacle, we have shown that solutions detach from the obstacle exactly like C^{1,1/3} functions, which is another clear indication of the optimal regularity.
But I do not have such a nice application outside the obstacle problem. In fact, it is very interesting that this community in machine learning is somehow rediscovering things that analysts have known for quite a while. For example, McShane-Whitney extensions: this is work by McShane and by Whitney from the 1930s, so quite old work. And of course the equation had to wait for the invention of viscosity solutions in the nineties to be fully understood. Everything that could be done in the context of C^2 solutions was done, but only with the advent of viscosity solutions was the equation fully understood; in particular, the uniqueness proved by Bob Jensen in 1993 or 1994 was really a cornerstone in the field. We have recently given a much simpler, almost algebraic, proof of the uniqueness result. But in terms of regularity we are still pretty much where we were: the work of Evans and Smart and of Savin is tremendous, but the improvement, at least in higher dimensions, is not spectacular.

Even if you consider the non-homogeneous case, say the equation with right-hand side equal to one, you still do not have anything further, right?

Right, for that case too; probably one would have to come up with different ideas there.

What about a stability question, rather than best regularity? In the Lipschitz extension problem, if you change the boundary function a little in some weaker notion, say the function stops being Lipschitz and is merely Hölder, how does the solution of the extension problem change? You have a map from a space of boundary functions to a space of extensions, and you want to know whether it is continuous with respect to some norms. Is there a general theory?
People have tried to study the Hölder extension problem: you have a Hölder function on the boundary, and you ask how to extend it to the interior. In that context the cones would be replaced by cusps. But I do not think there are many results in that direction.

Right, and in the machine learning setting that kind of question makes sense, because you are probably not going to get completely accurate values on the boundary, only approximations, so you would want some stability.

Yes, of course, because these people work with approximations; they work in the discrete case, and so on.

Any other questions from the seminar? Okay, thank you.

[Recording stopped.]
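A footnote on the Aronsson example u = x^{4/3} - y^{4/3} from the talk: away from the coordinate axes the function is smooth, so the infinity Laplacian u_x^2 u_xx + 2 u_x u_y u_xy + u_y^2 u_yy can be evaluated classically and vanishes. Here is a quick numerical sanity check (a sketch of my own; the sample point and step size are arbitrary choices, not from the talk):

```python
def u(x, y):
    # Aronsson's example u = x^(4/3) - y^(4/3), restricted to x, y > 0,
    # where the function is smooth and the operator can be taken classically.
    return x ** (4 / 3) - y ** (4 / 3)

def infinity_laplacian(f, x, y, h=1e-4):
    # Central finite differences for
    #   f_x^2 f_xx + 2 f_x f_y f_xy + f_y^2 f_yy.
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return fx ** 2 * fxx + 2 * fx * fy * fxy + fy ** 2 * fyy

# The residual should vanish up to finite-difference error.
print(abs(infinity_laplacian(u, 1.3, 0.7)))
```

Applying the same routine to a function that is not infinity harmonic, such as x^2 + y^2, gives a residual of order one, so the small value above is not an artifact of the discretization.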