So, it's a great pleasure to be here. I'm really delighted, and thank you to the organizers for having me on this wonderful occasion. Frank was a student of mine. I don't know if everyone knows that. But I discovered today that someone knew you before you were a student of mine; I thought I was the oldest person here. And you can see that he was a student of mine because, normally, thesis advisors have an influence on their students, and you can see what an influence I had on Frank: indeed, he arrived a little late. So it's the final proof that I influenced him. Of course, having Frank as a student is really a luxury. It was a wonderful, wonderful experience. I remember: you first took my first-year course at the E.C. in 1982, and then in 1983 you came to my graduate course, and then you asked me for a subject. At that time I was working on ground states of scalar field equations, and on instability results as well. So I suggested to Frank that he look at some problems I thought were very difficult. And serendipity also played a role, because one of the days Frank came to my office to discuss these questions, I had an envelope on my desk. I opened the envelope and saw the paper of Jeanne Brunvello, and I said to Frank: have a look at that, it looks interesting. Just by pure chance. And of course, very early on, I was amazed by what he was able to produce and what he came back with. We all know what a wonderful career he has had ever since. But I also want to say that we worked on other topics together. In fact, at some point at the E.C., we were recruited to do some consulting in industry.
It was for a helicopter firm, now called Eurocopter, the biggest maker of helicopters in Europe. At that time it was called Aérospatiale, at Marignane near Marseille. We were supposed to model how the blades of the helicopter rotor are deformed by the forces. It's a nightmarish problem, I tell you: everything changes at the same time. And we were brave. I just want to say that Frank is someone who is not afraid of anything. You know: "I'm a scientist, I'm afraid of nothing." So basically we started to reconstruct elasticity theory from scratch. I remember at some point there was a whole book of computations; we were not afraid of doing expansions that took 20, 50 pages. And after that, we understood very well why people in elasticity make approximations. That was a very good lesson. But it was a lot of fun to do together. We were almost always working on this in the plane going down there, arriving and thinking about what we were going to say. A very fun experience. And the helicopters nonetheless continued to fly in spite of our activity. So, when I met Frank, he was more or less like this wonderful painting that Rebecca, his wife, has done, a really marvelous painting; I think she captures something very important about Frank. You will recognize him. Okay, so now I want to tell you about some ongoing work with Cole Graham, who is now an NSF postdoc at Brown University; he did his PhD at Stanford, where I met him. We have been interested in proving results about semilinear elliptic equations. I'll show you these equations immediately. But as we went along, we discovered that there's really some kind of general principle there, which we like to call the stability-compactness method.
And I want to illustrate this on some of the results I'm going to present, but I think the method in itself has an interest. So: stationary states of reaction-diffusion equations. A reaction-diffusion equation is one of the simplest nonlinear PDEs you can think of: du/dt - Laplacian u = f(u). This equation is a basic building block of mathematical biology; equations and systems of this kind appear all the time in biology, in ecology, and in other areas of modeling. Often these equations are studied in all of space, if you want to understand propagation. But often you also want to understand how the domain influences the phenomenon you're studying, so you are led to study the equation in a domain, and I'm going to think specifically of an unbounded domain. We are interested in positive solutions, and of course then you have to impose a boundary condition on the boundary of omega for the problem to make sense. So the general question I want to discuss is: when can you say that the positive bounded solution is unique? That's the question I'm going to discuss today, and I restrict here to positive reaction terms f. Now, there are many different reaction terms that come into play. I'll soon mention the KPP term, which has been the object of an enormous amount of study, even nowadays. There are also combustion-type nonlinearities and bistable examples, but I restrict myself to positive reaction terms. In terms of boundary conditions you have the Dirichlet condition, Neumann, and Robin conditions. I'm going to focus here on the Dirichlet condition u = 0. In fact there are many open questions for Robin conditions (not for Neumann, but for Robin).
So I'm going to discuss the Dirichlet condition here, with positive reaction terms. That means exactly what is written here: I consider solutions with values between 0 and 1; f has two constant stationary states, 0 and 1; f is positive on this interval; and the derivatives at the endpoints have signs, f'(0) > 0 and f'(1) < 0. That's the general positive nonlinearity I'm going to assume. What you want to understand is that this equation contains two contrasting effects. One is that f(u) is positive, so there is production owing to the reaction; the other is that the Dirichlet boundary condition means absorption at the boundary. Those two competing effects are why Dirichlet boundary conditions create interesting phenomena. So let me first start with bounded domains. What is known for bounded domains? In fact, in bounded domains you don't have uniqueness in general. Here's a result that says: if you take any bounded smooth domain omega, then you can always find positive nonlinearities, of the kind I just wrote, for which there are at least two positive solutions. So from the domain alone you cannot hope for uniqueness. The examples of nonlinearities you use to do this are of the following type: like a camel's back, if you want, with two humps. And the way you construct the two solutions, once omega is fixed, is this: you first find a solution by minimizing the energy with the nonlinearity brought down to 0 past the first hump; you get a solution, and the maximum principle tells you it stays below the value where f turns back up.
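In symbols, the stationary problem and the positive-type hypotheses just listed are:

```latex
\begin{gathered}
-\Delta u = f(u) \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \qquad 0 < u < 1 \ \text{ in } \Omega, \\
f(0) = f(1) = 0, \qquad f > 0 \ \text{ on } (0,1), \qquad f'(0) > 0, \qquad f'(1) < 0 .
\end{gathered}
```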
And then, if the second hump is large enough, the overall minimization yields another solution whose maximum lies rather up here: one solution with its maximum here, one there. So clearly positivity alone doesn't suffice. Is non-uniqueness the rule in general? Before I go further, let me single out two subclasses of positive reaction terms. One is the so-called KPP condition; let me call it the weak KPP condition: f lies below its tangent at the origin. In all the propagation results, traveling waves and so on, that's the property that is really being used. Let me call strong KPP a stronger assumption: the slope of the chord from the origin is decreasing, that is, f(s)/s strictly decreases for s between 0 and 1. That's what I call strong KPP, and of course the strong KPP class is contained in the weak KPP class. Now, for strong KPP the previous non-uniqueness result doesn't hold: there is uniqueness. You see, the example I had was the camel's back, and what we are saying here is that if f is concave, if you want (but it's more general than concave), then there is uniqueness. This is an old result of Rabinowitz, who proved that if omega is bounded and smooth and f is strong KPP, then the positive solution is unique when it exists. I also gave a proof of a slightly more general result some years later, with a somewhat different method. But let me describe Paul's result. The idea is very simple. Suppose you have two solutions, u and v. You look at the ratio v/u, and because omega is bounded and smooth, this ratio is bounded too; you can prove it's bounded. That's really Hopf's lemma, if you want, or something like that.
And then you look at the maximum of v/u. With some algebra at the maximum of v/u, using the strong KPP property, you can prove that v/u is less than or equal to 1. It requires a little bit of work; it's not as straightforward as I wrote it, but that's the idea, and the strong KPP property is what is used. That was Paul's result. Once you know that v/u is less than or equal to 1, you have v less than or equal to u, and therefore v equals u, because you can reverse the roles of u and v: by symmetry, if each one is less than or equal to the other, the two are equal. So the question is: is this still true? Is the non-uniqueness true in unbounded domains, and is uniqueness in the KPP case true in general unbounded domains? It looks like such a nice and natural result that you tend to believe it holds in any unbounded domain. Now, it turns out that the story is a bit more complicated. So this is the question: for a general unbounded domain omega, when do you have uniqueness or non-uniqueness? I'm going to consider two different topics in my talk. In the first part I'll speak about the strong KPP case; in the second part, about general positive reaction terms. So let me start with the strong KPP equation in unbounded domains. Strong KPP, remember, means that the chord slope is decreasing: f(s)/s decreases in s. We are going to prove uniqueness in this case in unbounded domains, but at this point, in full generality, we need a non-degeneracy assumption, and I want to discuss this assumption. For this non-degeneracy assumption, we need spectral properties.
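For the record, the maximum-point computation behind this ratio argument can be sketched as follows (suppressing the technical points just mentioned): at an interior maximum x_0 of w = v/u,

```latex
\begin{aligned}
&\nabla w(x_0) = 0, \qquad \Delta w(x_0) \le 0, \\
&u^2 \,\Delta w \,\big|_{x_0} \;=\; u\,\Delta v - v\,\Delta u \;=\; v\,f(u) - u\,f(v) \;\le\; 0
\quad\Longrightarrow\quad \frac{f(u)}{u} \,\le\, \frac{f(v)}{v} \ \text{ at } x_0 .
\end{aligned}
```

Since f(s)/s is strictly decreasing (strong KPP), the last inequality forces v(x_0) to be at most u(x_0), that is, the maximum of v/u is at most 1, which is exactly the conclusion described above.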
So, spectral properties. When you have a weight Q, an L-infinity weight, in omega, and you look at the operator minus Laplacian plus Q, there is no difficulty in defining, in the classical way, the principal eigenvalue of minus Laplacian plus Q in omega: you just take the infimum of the Rayleigh quotient, and this works for a general open set. Of course, if you want to say that an eigenfunction satisfies the equation and so on, you need some regularity assumptions, but the definition gives you lambda 1 of Q. In work with Nirenberg and Varadhan we gave another, more general definition, and some of the properties we proved there, extended with Rossi to unbounded domains, we are going to use. Let me also say that lambda 1 of minus Laplacian in omega is the limit of the principal Dirichlet eigenvalues in omega intersect B_R as R goes to infinity. That was essentially proved by Agmon, already around the time when Frank was starting. First, a characterization: why does this eigenvalue play a role? Well, you can characterize the existence of a solution of the KPP problem with this lambda 1, by comparing it to f'(0). Remember, we are in a domain omega with Dirichlet boundary conditions, and comparing lambda 1 to f'(0) amounts to asking whether 0 is linearly stable or unstable. The result is this, and f weak KPP is enough for it: if lambda 1 < f'(0), that is, if 0 is unstable, then there is a positive solution; whereas if lambda 1 is strictly greater than f'(0), then there are no positive solutions. In that case 0 is stable, and in fact everything converges to 0 if you look at the evolution equation.
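This dichotomy can be checked numerically. Below is a small illustrative sketch, mine and not from the talk: I take the model nonlinearity f(s) = s(1 - s), so f'(0) = 1 and on an interval (0, L) the principal Dirichlet eigenvalue is (pi/L)^2, making L = pi the critical length. Time-marching the evolution equation from small positive data then shows persistence for L > pi and extinction for L < pi. The function name and grid parameters are my own choices.

```python
import numpy as np

def long_time_max(L, n=50, T=60.0):
    """Explicitly time-march u_t = u_xx + f(u) on (0, L), u = 0 at both
    ends, with f(s) = s(1 - s), starting from small positive data.
    Since f'(0) = 1, the critical length is pi: the zero state is
    unstable for L > pi and stable for L < pi."""
    dx = L / (n + 1)
    dt = 0.4 * dx * dx                   # CFL-stable explicit step
    x = np.arange(1, n + 1) * dx
    u = 0.01 * np.sin(np.pi * x / L)     # small positive initial data
    for _ in range(int(T / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = (u[1] - 2 * u[0]) / dx**2       # ghost value u(0) = 0
        lap[-1] = (u[-2] - 2 * u[-1]) / dx**2    # ghost value u(L) = 0
        u = u + dt * (lap + u * (1 - u))
    return u.max()

print(long_time_max(10.0))  # L > pi: settles on a positive steady state
print(long_time_max(2.0))   # L < pi: the solution dies out to zero
```

The same experiment with a Robin or Neumann condition would change the eigenvalue and hence the critical length; only the Dirichlet case is treated here.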
So this is the result, and let me point out that even at this stage, the critical case lambda 1 = f'(0) is open in full generality. We believe that if f is strong KPP, then there is no solution when lambda 1 = f'(0), but even this simple-looking question remains unsolved. It's true in bounded domains; in bounded domains you can prove it directly, it's very simple to show. Okay, so what is the difficulty with unbounded domains? Let me explain. In unbounded domains you may have regions extending to infinity along different branches, different parts of the domain. On some of these branches, lambda 1 restricted to that branch may be bigger than f'(0); on others, less than f'(0). So you may have existence, because lambda 1 is less than f'(0) in some region. But if you go far along a branch where lambda 1 becomes bigger than f'(0), then in that direction the only thing you can expect is that the solution goes to 0, because in the limit the equation has lambda 1 bigger than f'(0). So the situation is very different: remember that in the bounded-domain case we were working with v/u; in an unbounded domain, v/u can have very strange behavior at infinity, and you need to control what is happening. There is no way to prove a priori that v/u is bounded. So let me introduce the condition under which we can prove uniqueness; as I just explained, the behavior at infinity plays a major role. We are going to look at all possible limits of omega.
So you look at translates of omega along a sequence of points, and I'm going to look at connected limits: in the limit you may have, say, a domain with two distinct branches, and by connected limits I mean that you take only the connected components of these limits, the separate components. Then we can define what we call the limiting principal spectrum. We call it Sigma star of minus Laplacian in omega: the set of principal eigenvalues of minus Laplacian in omega star, over all possible such limits. It contains the domain itself (the domain itself is one such limit), but it also contains all the branches you see when you go to infinity. The theorem we prove is this: suppose we are dealing with a strong KPP nonlinearity, and suppose omega is non-critical in the sense that f'(0) is not in the closure of this generalized principal spectrum. Then the positive solution is unique. I want to add right away that we conjecture this should be true for any domain, any open domain, or at least any uniformly smooth domain, avoiding pathological situations. But this is not known. I'm going to give you examples; our result is generic, so in some sense you can say that generically the problem has a unique solution in unbounded domains. Let me try to explain where this condition comes from. First of all, what is the non-degeneracy condition? Here's a domain omega: a block, plus teeth of widths h_i going off to infinity, like a comb, right? This is a connected domain omega in the plane, and non-degeneracy means that the h_i stay away from pi over the square root of f'(0): the infimum of the difference is bounded away from zero.
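Since lambda 1 of a comb tooth is governed by its width (by separation of variables, a strip of width h has principal Dirichlet eigenvalue (pi/h)^2), the critical width pi / sqrt(f'(0)) can be checked with a quick finite-difference computation. This helper is illustrative, not from the talk:

```python
import numpy as np

def dirichlet_lambda1(h, n=400):
    """Principal Dirichlet eigenvalue of -d^2/dx^2 on (0, h), via the
    standard second-order finite-difference tridiagonal matrix.  By
    separation of variables this is also lambda_1 of -Laplace on a
    strip (or comb tooth) of width h, namely (pi / h)**2."""
    dx = h / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / dx**2
    return np.linalg.eigvalsh(A)[0]   # eigvalsh returns ascending order

h = 2.0
print(dirichlet_lambda1(h), (np.pi / h) ** 2)
# the two values agree up to the O(dx^2) discretization error, so for
# f'(0) = 1 the critical tooth width is indeed h = pi
```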
Degeneracy means that one of these widths becomes critical, so that at infinity you have exactly lambda 1 = f'(0); for a tooth of width h, lambda 1 is just (pi/h) squared. So this is the non-degeneracy condition, and as you see it's really generic. How do we use it? Let me give you an idea of the proof for non-critical domains. The first idea is this: when you have a non-critical domain, there is a spectral gap. No limit will have a principal eigenvalue close to f'(0). So there is a gap, if you want, between those parts of the domain where lambda 1 is below f'(0) and those where it is strictly above. We can then use this gap to decompose the domain omega into two parts: we write omega as a union of omega plus and omega minus, where omega plus is the part on which lambda 1 is strictly above f'(0), and omega minus is the part where Sigma star is strictly below f'(0). In fact, what we really want is more: we want the Sigma star of all possible limits of the set omega minus to be strictly below f'(0). That's the decomposition. Now, this is not simple; it requires quite a lot of work and is rather delicate, and to prove this decomposition we rely on a wonderful result of Elliott Lieb.
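To the best of my understanding, the inequality of Lieb alluded to here, and its "moving window" consequence used in the decomposition, read as follows (stated from memory, so treat the dimensional constant c_n as indicative):

```latex
\begin{gathered}
\exists\, x \in \mathbb{R}^n : \quad
\lambda_1\bigl(-\Delta,\ A \cap (B+x)\bigr) \;<\; \lambda_1(-\Delta, A) + \lambda_1(-\Delta, B), \\
\text{so with } B = B_r : \quad
\inf_{x}\ \lambda_1\bigl(-\Delta,\ \Omega \cap B_r(x)\bigr) \;\le\; \lambda_1(-\Delta, \Omega) + \frac{c_n}{r^2} .
\end{gathered}
```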
Elliott Lieb was mentioned today by Thierry (where is he? sorry), anyway, Thierry mentioned this this morning, and indeed this wonderful result of Lieb was published in Inventiones the same year all those things happened, when Franck started his thesis; so I chose this result accordingly. What Lieb proved is a really wonderful result; for some reason people have forgotten about it. It says this: if you want to compute lambda 1 of minus Laplacian in a domain A, take another domain B. Then, up to an error which is lambda 1 of minus Laplacian in B, you can find lambda 1 in A by moving B around. You can think of B + x as a window that you move around over your unbounded domain A, and what Lieb says is that if you move your window around, eventually you're going to find lambda 1 of A. A wonderful result. It says that the eigenvalue is local: starting from an unbounded domain, you can pin down your eigenvalue, up to a certain error, by looking in a given window. A remarkable, very deep result, and that's what I wrote here: lambda 1 is local. In particular, I wrote it for B = B_r(x); what is lambda 1 of B_r(x)? It's a constant over r squared. So basically lambda 1 of minus Laplacian in a big unbounded domain omega is like lambda 1 of minus Laplacian in omega intersect B_r(x), provided you place your x smartly. So here's the decomposition; I want to point out an example. Here's the domain: these are large chunks, connected by channels which become longer and longer and are thinner than the critical width. So you see what's happening in the limit. You take this part: this is going to be omega plus.
You should think of omega plus as the parts where lambda 1 is big: the thin, narrow parts of your domain. Omega minus, on the contrary, consists of the ample parts. In omega plus you are always close to the boundary, if you want; omega minus can contain large balls. And here's an example of such a decomposition in the domain I drew before. The decomposition is not unique in the overlap, of course, and so on, but to construct it you really need to rely on this result of Lieb. Now, how do we use the decomposition? In the following manner. Take again the proof I outlined in the bounded-domain case: the proof was to study v/u. The ratio poses a problem only where u and v can go to 0, and because of the spectral gap they cannot go to 0 in the narrow parts; they can degenerate only in the ample parts. So in omega minus the ratio stays under control, u and v cannot go to 0, and the bounded-domain argument works there. In omega plus, the narrow part, lambda 1 is bigger than f'(0), and then the problem is linearly stable, as I said before. You see what's happening: because lambda 1 is bigger than f'(0), and because of the KPP property, you can use the maximum principle there. Basically the maximum principle, even in an unbounded domain, holds essentially when you have lambda 1 positive for the right problem; and here, even though the equation is nonlinear, you have the same property. Therefore, what you get from comparing u and v in omega minus you can transfer to omega plus, because the maximum principle applies there. It's like a narrow tube: at the entrance of the tube you already know the comparison from the other part. So this is the essence of the stability-compactness method that I mentioned: you first use compactness, in some sense, to obtain a bound: in
this case, the bound is on v/u, the best constant kappa such that kappa v minus u is nonnegative. And then you transfer it, using the maximum principle, to the region where you have stability. That's where stability comes in. In fact, it turns out that this method is very general and can do a lot of things for you. So, this was the result we obtain for the KPP case; let me now move to general positive nonlinearities. Forget the KPP assumption: f is merely positive, as I drew before. And remember that in a bounded domain, for such f, there is no uniqueness in general. What can we prove here? Well, we know (it's not difficult to show under our assumptions; f'(0) > 0 plays a role here) that for minus Laplacian v = f(v) in R^n, the only positive bounded solution is the constant 1. You can show this as a consequence of the maximum principle; it requires a proof, of course, but it's a kind of Liouville theorem, if you want. So the question is this: in bounded domains you have non-uniqueness, we have seen examples; in all of space you have uniqueness. What happens in between? Where is the limit between the two? So let me start with an older result about domains bounded by Lipschitz graphs: epigraphs, defined by x_n > phi(x_1, ..., x_{n-1}) in R^n, where phi is uniformly Lipschitz on R^{n-1}, globally, with a single Lipschitz constant. What we proved, with Luis Caffarelli and Louis Nirenberg, in a paper in CPAM in 1997, is that if you take such a domain, bounded by a uniformly Lipschitz graph, and f of positive type, the same type of f I showed you at the beginning, then the solution of this equation is unique. And not only is it unique, but you can also prove
that as the distance to the boundary increases, the solution converges to 1; the solution is stable; and u is monotone with respect to x_n. In fact, u is then monotone in a cone of directions, because when you have a uniformly Lipschitz graph you can tilt the direction and still have a Lipschitz graph; so it's monotone in a cone of directions. By the way, we were led to this by questions of regularity for one-phase free boundary problems. We were working on models of combustion, the simplest model of combustion, and we needed to prove that the free boundary has a certain smoothness. To start, if you do a blow-up, you get a Lipschitz free boundary (Lipschitz you know), but you want to show that the free boundary itself has better regularity. When you blow up this problem, the Lipschitz constant remains the same, you keep the Lipschitz character, and what you want to know is that the solution of the one-phase problem, zero on one side, is increasing on the other side. The fact that it's increasing, in a cone of directions, gives you the starting point of the regularity theory; after that, of course, you can obtain further regularity with the methods many of you know how to use to upgrade it. But the first step was this one, so it plays a very important role, and that's how we ran into this problem of epigraphs. But the question is: what can you say beyond this? To answer, let me explain the proof we gave with Luis and Louis in that work, and then try to see how we can extend it. The first step is to understand what happens away from the boundary. The first lemma says: there exist delta and r_0 such that if you are at distance at least r_0
away from the boundary, your solution is bounded away from zero, which means the solution cannot converge to zero at infinity; and that in turn means, the second step, that it converges to 1 at infinity as you move away from the boundary. Because u is bounded away from zero, it cannot stay away from a zero of f (it would accumulate mass), so this cannot happen. Therefore you get these two estimates away from the boundary: first, at a certain distance from the boundary you are above delta; and then, as you go further away, you actually converge to 1. So the action takes place in a finite neighborhood of the boundary, in some sense. The next lemma is the sliding method; that's how we introduced this method in this framework. The sliding method means this: you have your original graph, and you translate it upwards. I call omega_h the translate of omega by h e_n, with e_n the n-th basis vector, and I translate v accordingly, calling it v_h: v_h(x', x_n) = v(x', x_n - h), defined in omega_h, the domain when I push the graph upwards. The next step is to prove that there is a finite h, large enough, such that v_h is below u in omega_h. The reason is that, by the first steps, if you translate far enough (and because the graph is uniformly Lipschitz: when translated far away, you are uniformly far away from the boundary, something that does not happen in the non-Lipschitz case), you can use the fact that f is decreasing near 1; that's the assumption f'(1) < 0. In this region u is close to 1, so you compare u and v_h in a region where f' is negative: if v_h touches u, it happens in a region where u is almost
1, and in this region f' is negative. So you compare two solutions of a problem which has the right monotonicity: when f is decreasing, minus Laplacian minus f has the right monotonicity, and you can compare. The maximum principle, the stability of 1 again, ensures that v_h is below u: v_h equals 0 on the boundary of this set, u is positive, and the maximum principle applies because we have reduced the problem to a region, where u lives, in which f is strictly decreasing. That's the first step of the sliding method. The next step: you start reducing h, bringing h back down towards 0. You start with h very large, for which you know this inequality, and you bring it down to the minimal value of h for which you still have the same inequality. Is there such an h star? The h star is this: at the limit value you still have the inequality, and basically you can show that if h star is not 0, you can push a little further down. Why?
Because far away from the boundary, at a fixed distance, things remain as before, and when you bring h down a little more, everything happens within a fixed distance of the boundary, so you have compactness in some sense. These are the two ideas again; it was not stated this way at the time, but it's another instance of using stability and compactness. So you bring h down to 0, you get u greater than or equal to v, and therefore u equals v. That was the proof in this result with Caffarelli and Nirenberg, and by the way, it also proves monotonicity. As I pointed out in describing this method, the proof of uniqueness strongly hinges on the fact that the graph is uniformly Lipschitz: if you do translates of a non-Lipschitz graph, you can get close to the boundary, and the whole proof breaks down. So, for instance, for y = x squared, a parabola: is the solution unique? This problem was open up to now. What we prove is a generalization of these results: suppose your domain is asymptotically flat. It doesn't mean the graph is Lipschitz; it just means that far away, up to a rotation, you see a flat domain in the limit. For instance, a parabola is such a domain. Then if omega is bounded by an asymptotically flat graph and f is of positive type, the solution u is unique, converges to 1 as the distance to the boundary goes to infinity, and is also monotone with respect to x_n. The whole thing carries over to this case. This is the result I proved with Cole, and let me give you an idea of the new strategy we had to devise. What we did with Cole was this. First, the epigraph structure means that far away at infinity, omega converges locally, up to an isometry, to a half-space. By the previous results, the half-space case is contained in the result I just
showed: it's the particular case of a Lipschitz graph with phi equal to 0. So in the half-space the solution is unique, and it is stable (the stability was not in the previous paper; we had to prove that), but anyway, we have a unique solution, which we call little phi. And phi actually only depends on one variable: once you know it's unique, this is obvious, because otherwise, by translating it parallel to the other directions, we would get another solution; so it's independent of all the other variables, by uniqueness. So phi depends only on x_n, and therefore phi is the solution of an ODE on the real line, starting at 0, positive, and going to 1 at infinity; there is a unique such phi. And in fact, with some work, we can prove that this solution is stable: not only stable as a one-dimensional solution, but, when you consider it as a solution of the half-space problem, you can prove that it is strongly stable, a property which is important. Because of that, we expect the solutions to converge to this one. What does it mean?
It means that if you take capital Phi to be little phi of the distance to the boundary, then you expect that u will converge to this Phi uniformly. And in fact this is true — you can prove it, it is not so difficult. You have to take the distance of x to the boundary because you don't know in which direction you are going to look; you want to allow every direction. Okay, so that's a lemma: any solution converges uniformly in omega. Let me skip the details.

The next thing, as I said, is that the exact half-space solution is strictly stable, and therefore we hope that for Phi it is going to be the same thing at infinity. It is true, but only asymptotically. So what we prove is this: take the solution and look at the linearized problem — you have to look at minus Laplacian minus f prime of capital Phi — then the principal eigenvalue of this operator in omega, but away from a ball of radius R, so in omega minus B_R, is strictly positive: lambda one is strictly positive. This is the stability result we have, and it again uses Lieb's result, because that allows us to work in windows in the complement of B_R — windows of a fixed scale, let's say. And in these windows, because you have local convergence — remember, asymptotically flat means you have local convergence; if it converged uniformly there would be no question here — by local convergence, and the fact that Lieb's result allows you to use local windows, you get this. Without Lieb it would be difficult; it is a subtle property. Anyway, this is what's happening, and so lambda one is positive. In fact, also using Lieb's result — the other inequality is not difficult — you actually get convergence of this eigenvalue as R goes to infinity.

Okay, so now we take two solutions, and we do the same sliding argument that I outlined before for the uniformly Lipschitz case. Again, u and v are almost equal to Phi at infinity, so outside the large ball, because Phi is stable, you have the maximum principle — that is really the main idea here: outside the large ball you have the maximum principle. Therefore, once you choose R sufficiently large, everything reduces to studying what happens in omega intersect B_R. Ah, but omega intersect B_R is compact! So there you can indeed compare u and v, by compactness, and the stability allows you to transfer whatever information you have on omega intersect B_R to the rest of the domain. Translating u, v downward keeps the inequality, because you only have to look at what happens in the bounded part, so you can slide all the way down, and you get u bigger than or equal to v, and therefore uniqueness. So you see, this again uses the compactness-and-stability decomposition.

And we can go further. This is what we like to call mise en abyme — mise en abyme is when you have two mirrors facing each other, reflecting each other to infinity; we can do it for one mirror, let's say, at a time — for two mirrors at most. So mise en abyme means this: we proved that asymptotically flat is okay, and we know that uniformly Lipschitz epigraphs are okay; then, if the domain is an asymptotically uniformly Lipschitz epigraph — which means that locally it converges to uniformly Lipschitz graphs with a fixed constant, but it does not converge globally — then you have the same uniqueness, and so on. So of course this begs for the open problem, which I present to you: does uniqueness hold in any epigraph domain? It is very tempting to say yes at this point, but it is open. And we had to use these deep results to prove it even in these cases — yet it is a very simple question; you would almost ask it to a student, for instance like Franck walking into my office: oh, Franck, by the way, can
you prove — okay — so: does uniqueness hold in any epigraph domain?

So, by the way, what is so special about epigraphs? Remember, we saw that in bounded domains it is not true in general; we saw that in all of space it is true in general, and we proved it; and in epigraphs the full result still eludes us, but we have proved that in epigraphs it is basically true. So what is special about epigraphs? You could say: what are you talking about epigraphs for — take any unbounded domain, is it true in any unbounded domain? The answer is no. Because, typically, what you can do is choose an f for which you know there is no uniqueness in some bounded domain — remember, it is not KPP, because KPP we treated separately — so take such an f, and a bounded domain omega zero with no uniqueness. Because the solutions we constructed in the bounded domain omega zero are both stable there, you can actually construct solutions in a whole unbounded domain with omega zero connected to it by a long neck: one solution will be close to the lower solution in omega zero, and one will be close to the upper solution. So: no uniqueness in general. And in an epigraph this cannot happen. So the geometric structure of an epigraph, which does not allow you to hook up a pocket from below to the domain, plays a role. Can you give a more general geometric description? That is also an open question — a very natural open question in this framework.

Okay, then we have a bunch of other results; let me just quote a couple. For instance, we can treat what we call exterior star domains: take K compact and star-shaped about the origin, say. Then the result says that in the exterior of such a domain there is always uniqueness. Remember, inside the domain there is not necessarily uniqueness, but outside there is always uniqueness. So it really has to do with the fact that there is a lot of room outside. I am not going to detail this; it is the same type of technique, except that instead of doing sliding I do scaling here — scaling, or dilations, of the independent variable: looking at u of x divided by lambda. Because of the sign of f, this again gives you something you can compare with a solution, and therefore you can do the same argument: you move lambda to a critical position where you can compare the two, and then you get uniqueness.

So, if you think of all those things which I have described to you here, they share a common trait — this is why we like to call them the stability-compactness method. The general idea allows us to get uniqueness, but not only that: also stability properties, which I didn't mention. The proof of how we get stability of the solution in a Lipschitz epigraph relies on this method as well. So: stability, symmetry, uniqueness, and so on. And we are decomposing the domain into two parts — one where you can use compactness results (it doesn't mean the domain is compact), and another one where you have stability, and the stability allows you to take whatever you have reaped from your compact part and extend it to the other part as well. Then the tools that you use may change from one problem to another: it can be sliding, it can be scaling or dilations, it can be the moving plane method, in the version I gave with Nirenberg.

The moving plane method — some of you may know it — proves a theorem that says, for instance: minus Laplacian of u equals f of u in a domain omega, u positive in omega, u equal to 0 on the boundary of omega, and the only assumption is that f is C1. The moving plane method says that if omega is symmetric — let me state it roughly, vaguely — then u inherits the symmetries of your domain. That is done by the moving plane method, the Alexandrov moving plane method, and the version we gave with Nirenberg of
this method consists, as in the Alexandrov method, in taking a plane — x_n equals lambda — and the reflection of u across it: v_lambda of x is u of the reflected point x^lambda, reflected across the plane x_n equals lambda. And you want to study the sign of v_lambda minus u. If you can keep proving the sign all the way to the maximal position of symmetry, then you get that one is above the other, and therefore you have symmetry: when lambda is equal to 0, v is actually equal to u, because if you go to the symmetric part, at lambda equal to 0 you prove that u here is less than u there, which of course means they are equal.

Now, the way we used this with Nirenberg: the original proof relied on very heavy analysis of corner points, because you want to apply the maximum principle at corner points, and it was not general. For instance, the result was not known in a square — take a square: is a solution symmetric? That was not known from the moving plane method. So with Nirenberg we had this idea: at the critical position, take out a compact set K, and say, okay, if you are at a position where v_lambda minus u is positive, then it is strictly bounded from below by a positive constant on K. Therefore you can move further — decrease lambda — and still keep the inequality on the compact part. And, by construction, up to the critical position, v_lambda minus u is bigger than 0 on this part; on the boundary it is positive, because u is equal to 0 there; and on the boundary of K it is positive — that is what is inherited from the compact part. And we carve out K so that the domain which is left has small volume, small measure, and the maximum principle applies in domains of small measure — or, if you prefer, the principal eigenvalue goes to infinity as the measure shrinks. That is what we did with Nirenberg in this case. But, you see, this really belongs to the same general class of stability-compactness decompositions; we have seen it in very different guises.

Okay, so I have used up my time. I wanted to say that what we have seen is the general aspect of this method, combined with other devices — you can think of many other things: you can compare u with the reflection v_lambda, or u with a dilation u of x over lambda, or with a translate in the sliding method. This is the general principle: as soon as you have a device by which you can introduce a one-parameter family, then using this decomposition, even in unbounded domains, will yield these types of results. And there are, of course, open questions — we have seen examples of those: is it true in general epigraphs? And I should point out that nothing is known for Robin conditions; you would expect uniqueness in a nice epigraph with Robin conditions, but it is completely open. And many other questions — like: are solutions stable in general? In the examples we have, they are; but is it true in general? That is, in fact, a natural question to ask. So, with this, thank you for your attention.

Question: Coming from reaction-diffusion, one has f of 0 equals 0 and f of 1 equals 0; but in other types of problems, like nonlinear optics with a saturable nonlinearity, f of u will just be bounded, with f of 0 equal to 0. Is there anything you can say about that?

Answer: So — that is a good point; it is true that you want to look at many nonlinearities. What I can say is that you can use this method to get non-existence of solutions in certain domains. For instance, suppose you have an f like the one you just described — an f which goes up like that and does not come back to 0 — and suppose you have a bounded solution. The solution is going to take values in some interval, from 0 to a, let's say. Then I can always modify my f beyond a, bringing it back to 0; the solution I have will not see the difference. Yet my result will say there is a unique solution, and it goes to 1 at
infinity — therefore it is not the one you had before. So this type of result allows you to obtain non-existence results for many other types of nonlinearities like that. And indeed, for the scalar field equations I mentioned, you can also say something along these lines. What is true — I should say — is that I made the assumption that f prime of 0 and f prime of 1 have signs. The assumption on f prime of 1 annoys me; we made it, and that is why I am trying to get rid of it. f prime of 0 is a more complicated business, because there are cases where, in all of space, I said that 1 is the only solution, while you can find bounded solutions of minus Laplacian u equals u to the fifth in R3 which go to 0 at infinity. These will be solutions of a problem like that, so you do not have uniqueness. That is an example where f prime of 0 is equal to 0: f prime of 1 can be strictly negative, that does not matter, but f prime of 0 will be 0, and you have no uniqueness. I believe we should be able to extend the condition to the way f of u behaves near u equal to 0 — maybe we can go beyond linear; the assumption on f prime of 0 means linear behavior. I think we should be able to go to a power below the Fujita exponent, or maybe even below the critical exponent, I don't know; but for the moment this is open. So that is my comment about other types of nonlinearities.

Question: Is it possible to use your framework with inhomogeneous Dirichlet boundary conditions — say, for example, a boundary datum g which is strictly non-negative, non-zero on the boundary?

Answer: That is a very good point. I guess you should be able to, because in the end what we want is to compare v and u. So it depends on the problem, but let me think about the problem where I want to compare u and lambda v: I want to show that u is less than lambda v for any lambda bigger than 1, which in the end shows u is less than or equal to v. For this, if you have a g bigger than 0 for free at the boundary, it helps: all our trouble was to worry about what happens where they are 0, and in those parts where it is not 0, it works perfectly well. So you have that part. Now, for the sliding method, it does not work — I am not even sure the result is true — because for the sliding method, one of the things we needed to know was that u is above v shifted by h in the direction e_n. When v is 0 on the boundary, you have it for free, because u is positive and v is 0; when v is non-zero, you do not know. So if you have a way to say a priori that, in the direction x_n, v is above g — for instance, suppose you have a subsolution that allows you to say that any solution is above g — then the same proof works. But you would need to have that information. That is a very good point, but we need this type of information.
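As a closing sanity check, here is a toy numerical experiment for the uniqueness phenomenon that runs through the talk, in the one-dimensional half-line analogue. All the specifics are illustrative assumptions: the model nonlinearity f(u) = u − u³, the truncation of the half-line to [0, 20] with the Dirichlet datum u(20) = 1 standing in for u → 1 at infinity, and the use of plain Newton iteration on the finite-difference system. Two quite different initial guesses converge to the same discrete profile.

```python
import numpy as np

# Toy uniqueness check in the 1-D half-line analogue: solve
#   -u'' = f(u),  u(0) = 0,  u(L) = 1,
# with f(u) = u - u**3 (illustrative assumption; exact solution
# tanh(x / sqrt(2))), by Newton's method from two initial guesses.

def solve_bvp(guess, L=20.0, n=400, tol=1e-12):
    x = np.linspace(0.0, L, n + 2)
    h = x[1] - x[0]
    u = guess(x[1:-1])                   # unknowns at interior nodes
    rhs_bc = np.zeros(n)
    rhs_bc[-1] = 1.0 / h**2              # carries the datum u(L) = 1
    A = (np.diag(np.full(n, 2.0 / h**2))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    for _ in range(100):
        F = A @ u - (u - u**3) - rhs_bc      # discrete residual
        J = A - np.diag(1.0 - 3.0 * u**2)    # Jacobian: A - f'(u)
        du = np.linalg.solve(J, F)
        u -= du
        if np.max(np.abs(du)) < tol:
            break
    return x[1:-1], u

x1, u1 = solve_bvp(lambda x: np.minimum(x, 1.0))   # ramp guess
x2, u2 = solve_bvp(lambda x: np.tanh(x))           # different guess
print(np.max(np.abs(u1 - u2)))
```

Of course a numerical experiment on a truncated interval proves nothing about the half-space theorem; it only illustrates, in the simplest setting, the kind of uniqueness-and-convergence-to-the-profile behavior the stability-compactness method establishes rigorously.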