Hello, hello, yes. OK, so welcome to this afternoon session. It's a pleasure to introduce José Antonio Carrillo, Imperial College London. He will give — how many talks are you giving? Two talks — so, a short course on the degenerate Keller-Segel model: fair competition and diffusion-dominated regimes. OK, thank you very much. Thank you to the organizers for the kind invitation here. So today I want to report on some recent works on Keller-Segel models with degenerate diffusion, mainly about stationary states for these kinds of problems. I will concentrate on two particular regimes that I call fair competition and diffusion-dominated; you will understand in a few slides what they refer to. OK, so what's the outline of the two talks? First, I would like to explain the motivation: where these questions came from and why I'm interested in answering some of them precisely. Then I will describe the different regimes and explain where they come from. Then I will discuss the fair-competition case which, as the name says, is a kind of competition between two different mechanisms that is somehow fair. And then I will concentrate on the diffusion-dominated case. OK, so I will do a kind of mixture between blackboard and slides: I will prove some things on the board and remind you of some computations on the board. OK, so the first part is about the motivation. What I want to do is to minimize interaction energies; we will see why I want to do that and why this is connected to the problems I mentioned. So let's start by fixing a few basic ideas. Let's assume that we have n particles which are interacting through an interaction potential that I'm going to call U. An interaction potential, for me, is just a symmetric potential which is C¹, except maybe at the origin — so there could be a singularity at the origin.
Anyhow, we'll assign some value to the interaction potential at 0; I will give it the value 0, even if there is a singularity. And when I talk about the interaction of n particles, I mean this kind of interaction. You can think of it as coming from Newton's law: I have n particles interacting through Newton's law, where the second derivative — mass times acceleration — equals the sum of forces, and you can think of this term as coming from the sum of forces. Say the presence of a particle at position x_j produces a force on the particle at position x_i, in the direction of the vector x_i − x_j, with a certain strength given by the potential U; that's why you have ∇U(x_i − x_j). The m_j are just weights for the effect of each particle. If we assume they are all equal, each will be 1/n: we assume each particle produces an effect of 1/n on the others. And then you add all the forces, so this is the sum of the forces. But why do you have a first derivative here? Because I assume that the inertia term is somehow negligible, and that there is a kind of dissipative term, a damping term, in the equation. So if you write m x'' + k x' = sum of the forces and neglect the inertia, you obtain a first-order differential equation instead of a second-order one. Somehow you are assuming that the particles, without inertia, quickly adjust their velocities to the sum of the forces. So it's the simplest interaction between n particles. Good. If you now want to write a continuum model instead of a particle system, you want to pass to an equation for the mass density of particles, or the probability density of particles.
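The overdamped particle system just described can be sketched numerically. A minimal illustration (my own, not code from the lecture), using the hypothetical smooth attractive potential U(z) = |z|²/2, so that ∇U(z) = z and the forces are linear:

```python
# Minimal sketch (my own illustration, not from the lecture) of the overdamped
# particle system  x_i' = -sum_j m_j grad U(x_i - x_j),
# for the hypothetical smooth potential U(z) = |z|^2 / 2, so grad U(z) = z.
import numpy as np

def step(x, weights, dt):
    """One forward-Euler step of the first-order (inertia-free) dynamics."""
    diff = x[:, None, :] - x[None, :, :]       # diff[i, j] = x_i - x_j
    grad_u = diff                              # grad U(z) = z for this potential
    forces = -(weights[None, :, None] * grad_u).sum(axis=1)
    return x + dt * forces                     # velocity = sum of the forces

rng = np.random.default_rng(0)
n, d = 50, 2
x = rng.standard_normal((n, d))
weights = np.full(n, 1.0 / n)                  # equal weights m_j = 1/n
com0 = x.mean(axis=0)                          # center of mass at time 0
for _ in range(200):
    x = step(x, weights, dt=0.05)
```

For this purely attractive quadratic potential the pairwise forces are antisymmetric, so the center of mass is conserved and the cloud contracts onto it; a repulsive-attractive potential, as discussed below, would instead stabilize at a finite spread.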
So I'm going to call ρ(t, x) the density of particles at time t, meaning that this gives me the probability of finding particles at location x, or the local mass at location x — it depends on whether I normalize the total mass to one or not. OK. So if I want to write something continuous, I can just follow what I did here for the particles, in a continuous way. In fact, I should probably explain this relation a bit more. I don't know if somebody has talked about mean-field limits before or not — probably not. So, given this n-particle system, you can associate to the ODEs the following measure, which I'm going to call the empirical measure μⁿ: the sum from 1 to n of the weights m_i times a Dirac delta at location x_i(t), where x_i(t) is the solution of the ODEs. This, obviously, is a measure that depends on t, and it is a deterministic measure in the sense that it tells you precisely the locations of the particles, because you have this combination of Dirac deltas, OK? So you define this; it is usually called the empirical measure. Now, if you look at the right-hand side — the sum over j ≠ i of m_j ∇U(x_i − x_j) — you can think of this as the gradient of U convolved with μⁿ. Of course, this would be the case if the potential were smooth; and since U is symmetric, the gradient of U at 0 is 0, so this is exact, and the fact that I'm taking j ≠ i doesn't make a difference. So now you see the idea more or less: if you think of this as the empirical measure, what you expect is that when you take more and more particles to approximate the continuum density ρ, this converges, in some sense as measures, to some limit.
And if you are able to identify the limit — let me call it ρ(t) — maybe you can even give the evolution for that ρ(t). Answering this question, whether there is a limit and what the law for the evolution of ρ(t) is, is what is called the mean-field limit, OK? And there are plenty of works, both for these kinds of equations and for kinetic equations, trying to derive macroscopic equations rigorously from the dynamical systems. In this case, under certain assumptions on the potential U, you can do that. And now you see what to expect: when you take more and more particles, this ∇U convolved with μⁿ will hopefully be approximated, as n goes to infinity, by ∇U convolved with the density ρ(t). What you then expect is a velocity field given by this formula, where you have the sum of the forces again: every infinitesimal part of the mass of ρ produces a force at location x given by ∇U(x − y); you integrate, you sum, you get the total force; and the velocity field is minus that quantity, OK? The minus is just a convention for the potentials, the way I'm using them. In other words, at the continuous level you expect a velocity field given by the convolution of ρ with the gradient of U. And then the other question is: what is the law for the evolution of ρ? What one can expect is that the evolution of ρ is given by a continuity equation. And this is a kind of exercise for the students present in the audience: you can check that, at least if U is smooth — let's say C², with bounded second derivatives just in case — μⁿ(t) is a distributional solution of the equation I have there: ∂ρ/∂t + div(ρ b) = 0, with b given by −∇U convolved with the density.
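In symbols, the passage from particles to the continuum just described reads (my transcription of the boardwork):

```latex
% n-particle system and its empirical measure
\dot x_i = -\sum_{j \ne i} m_j \,\nabla U(x_i - x_j), \qquad
\mu^n_t = \sum_{i=1}^n m_i \,\delta_{x_i(t)} .
% As n \to \infty, \mu^n_t is expected to converge to \rho(t),
% a solution of the aggregation (continuity) equation
\partial_t \rho + \operatorname{div}(\rho\, b) = 0, \qquad
b(t,x) = -\,(\nabla U * \rho)(t,x) = -\int \nabla U(x-y)\,\rho(t,y)\,dy .
```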
With initial data which is just the sum of the Dirac deltas at the initial locations — that's pretty easy to see, and it's a good exercise to understand what is happening. So it's no surprise that you expect this equation in the limit, because somehow already at the discrete level these are solutions. This can also be rigorously proved in cases where U is not as smooth as I was saying here, but that requires other techniques that I'm not going to discuss today. Anyhow, I want to concentrate on this kind of model for a while. OK, so the next thing I want to say is about the potential U. Typically, in many of the applications where you would like to use this, let's assume, to simplify our life, that U is radial: U(x) is a function of the modulus of x, say k(|x|), OK? Typically this k, as a function of r = |x|, you expect to be first decreasing and then increasing. Decreasing means that the potential is repulsive at short distances, with the signs I chose here. Say I am located at position x and the other particle is at position y: the force I feel is repulsive if I am at a distance less than — let me call it ℓ — the location of the minimum, while if I am at a larger distance, I feel an attractive force. And typically the potentials of interest in many of the applications are repulsive-attractive potentials. So the next thing I want to comment on is the repulsive part of the potential. Well, I really don't care right now: it could go to plus infinity, or to a constant and have some saturation. I will treat different cases; both are interesting, yes, for different reasons. But today, mostly in my talk, the potentials are in fact going to decay to zero. Yeah, well, but both are interesting a priori. Then the next thing I want to comment on is the repulsive part of the potential.
So another way of modeling the repulsive part of the potential is, instead of doing it non-locally — and this is non-local, that's why I'm inside this theme — maybe I can make the repulsive part local. What does it mean to do it locally? I can assume the potential has two separate parts: a repulsive part, let me call it U_R, and an attractive part, U_A. And for the repulsive part, I can take something that is really, really very repulsive, in the sense that I put a kind of scaling parameter there, ε, and I assume that this U_ε, as ε goes to zero, is very close to a Dirac delta at zero. This means it's really very repulsive — hard-core repulsion, in a sense. I want to think about it like this. So, formally, what happens with that equation there? Let's separate the repulsive and the attractive parts. The velocity field is minus the gradient of U_ε convolved with ρ, minus the gradient of U_A convolved with ρ. Now, if I put a Dirac delta in the first term — of course, I cannot really take the gradient of the Dirac — what I do instead is put the gradient on ρ and convolve with the Dirac. So formally, this should be minus the gradient of ρ, minus the gradient of U_A convolved with ρ. So if I take a very repulsive force, at least in this way, I recover for the repulsive part this term −∇ρ. If I look now at the equation — I'll write it with the other sign, because I have the minus — I will have minus the divergence of ρ times the velocity field. OK. So what is this? This gives the divergence of ρ times the gradient of ρ; and from the other term, I get the divergence of ρ times the gradient of the attractive part of the potential convolved with ρ. OK.
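The formal localization argument above can be written compactly (my cleanup of the boardwork):

```latex
% Formal localization of the repulsive part: if U_\varepsilon \to \delta_0, then
\nabla U_\varepsilon * \rho \;\xrightarrow[\varepsilon\to 0]{}\;
\nabla(\delta_0 * \rho) = \nabla\rho ,
% so the velocity field and the resulting equation become
v = -\nabla\rho - \nabla U_A * \rho, \qquad
\partial_t \rho
= \operatorname{div}(\rho\,\nabla\rho)
+ \operatorname{div}\!\big(\rho\,\nabla U_A * \rho\big)
= \tfrac12\,\Delta \rho^2
+ \operatorname{div}\!\big(\rho\,\nabla U_A * \rho\big) .
```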
So for those of you who were here yesterday in the talk of Juan Luis — and I'm sure something like this probably appeared last week, so I'm not going to dwell on it — I can write the first term here as one half the Laplacian of ρ². OK, and then the rest. Yes. And the first term is exactly the porous medium nonlinear diffusion, the degenerate diffusion with exponent 2. Probably something like this appeared yesterday, but this is the local one. OK. So, at least formally, you can recover this nonlinear diffusion from very strong repulsion. I'm going to discuss these kinds of models a bit more, and today I will concentrate precisely on a model like this, where the attractive part I will consider non-local. So I will forget from now on about the subscript A: I will think of U as an attractive potential. For the repulsive part, I'm going to assume that it comes from a local pressure function, so I can put something more general than just the gradient of ρ: I'm going to assume that I have the gradient of a function of ρ. Directly from the particle system, even formally, I can recover only the power 2, OK, at least in that case. So the question is now: if I model repulsion by this nonlinear diffusion and attraction by this non-local interaction energy, when does a balance between these two forces happen? And in which sense do we mean that there is a balance? A balance between the forces will typically give me a kind of stationary state. So my question is: do I have a stationary state or not? What can I say about that? OK. Now the next ingredient. These equations, in fact — the experts in the room know this perfectly well — are particular cases, at least formally, of what is called a gradient flow of a certain functional with respect to the Wasserstein distance. OK. I'm going to try to explain just a bit what this means in practice.
And I'm not going to get into the technicalities of the Wasserstein distance, because that's not the objective of my talk. OK. But just to give you an idea of why there is something like a gradient flow behind this, I'm already introducing this Lyapunov functional; let's check that it is one. It's this energy — I'm going to call it the free energy, F[ρ] — which I'm going to write in the particular case of a power, and then you see how it goes. So on the board, I'm going to use P(ρ) = ρ^m. Here you will see that the right thing to do is 1/(m − 1) times the integral of ρ^m, and then one half the double integral of U(x − y) ρ(x) ρ(y). By the way, I didn't ask: is it large enough? OK, thanks. Good. So let's check, at least formally, that this is a Lyapunov functional for that equation, in the sense that it should decay in time along solutions of the PDE — if everything in life is good and we can take all kinds of derivatives and integrate by parts. So let's compute formally the time derivative of F[ρ]. In principle, this is just a functional defined on L¹ ∩ L^m functions, and I'm assuming that U has enough properties so that this is well defined for L¹ functions. OK, let's not enter into the details yet. I just want to compute formally the time derivative of F[ρ], where ρ is now a solution of this PDE. OK, let's forget about the A, as we said. So this is the equation I'm aiming at, because I want to use P(ρ). Wait — somebody could catch that — this is not the right constant: it's m/(m − 1), sorry. OK, the constants are not important, but let's write the right ones: that's the P(ρ) it has to be in order that I get this equation. It should probably be written there. OK, so let's start from this equation; I forget the other constants.
That's why I got it wrong at first. So let's forget about the other constants. I have ∂ρ/∂t equals this, and let's compute the time derivative. But for that, the first thing I'm going to do is rewrite this in a different way: I'm going to write it, again as I had it before, as the divergence of ρ times something. And if you recall what you had to do to get here: this one is the divergence of the gradient of ρ^m, so in order to recover ρ times the gradient of something, you have to multiply and divide by the right constant, which is this one. So you have here m/(m − 1) ρ^{m−1}. If you take the gradient of this, the m − 1 cancels: ρ times m ρ^{m−2} ∇ρ gives you exactly the gradient of ρ^m. And in the other term, I just write U convolved with ρ. So this and this are the same. Once I have written it like this, the first thing I'm going to do is call all of this — and then I will explain what it means; right now it's just a notation — the variation of F with respect to ρ. We will check that later. But nevertheless, I can write the equation in the form that is written there: ∂ρ/∂t equals the divergence of ρ times the gradient of the variation. Good. So, at least formally, what "variation" means is that the time derivative of F[ρ] is the integral of ∂ρ/∂t times the variation of F with respect to ρ. This is, if you want, the definition of the variation. Or, if you prefer, I can write it like this: if you do perturbations of ρ to compute this, you expect it to be the integral of what I'm calling the variation of F with respect to ρ times φ, for φ such that ρ + εφ is still a density of the same mass — so I need the integral of φ to be 0. You're doing just variations of F that preserve the mass of the density ρ.
You can compute what it is, and you realize that formally it gives you an expression like that; and in fact you recover the variation: the variation is exactly what I have written here — in this particular case, what is between the brackets. So that's the variation of this functional with respect to ρ, and that's why you have this formula over here. Now, once you have this formula, you just need to substitute the equation: ∂ρ/∂t is minus the divergence of ρ times the gradient of the variation — it's a plus, not a minus; I probably got the sign wrong somewhere — times the variation with respect to ρ. You integrate by parts once, and you get minus the integral of ρ times the gradient of the variation of F with respect to ρ, in scalar product with the gradient of the variation. So what you get is the square. OK, so the whole structure of the equation is such that you have this kind of magic formula: at the end of the day, the derivative of F[ρ] is minus the integral of ρ times the square of the gradient of the variation. This can be done with a general pressure law, and the relation between the Φ in the functional and the pressure P is written here; this gives you the more general case, when it's not a power. OK, so the main message is that for this kind of model you have this Lyapunov functional. And then, if you would like to obtain stable stationary states for this kind of PDE, you would expect to find them as local minima of that energy functional. So what you are interested in are local minimizers of this functional. OK, all of this just to tell you the main objective of my two talks: I want to find properties of local minimizers, or global minimizers if they exist, for functionals of that form. OK, so this problem is in fact quite classical in some particular cases.
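The formal dissipation computation above can be summarized as follows (my cleanup of the boardwork, for the power case):

```latex
% Free energy and its variation, power case P(\rho) = \rho^m, m > 1:
F[\rho] = \frac{1}{m-1}\int \rho^m \,dx
        + \frac12 \iint U(x-y)\,\rho(x)\,\rho(y)\,dx\,dy ,
\qquad
\frac{\delta F}{\delta \rho} = \frac{m}{m-1}\,\rho^{m-1} + U * \rho .
% The PDE is \rho_t = \operatorname{div}(\rho \nabla \frac{\delta F}{\delta\rho}),
% and along solutions, integrating by parts once:
\frac{d}{dt} F[\rho(t)]
= \int \frac{\delta F}{\delta\rho}\,\partial_t\rho \,dx
= -\int \rho \,\Big|\nabla \frac{\delta F}{\delta \rho}\Big|^2 dx \;\le\; 0 .
```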
So what I want to do now is precisely to find assumptions on the attractive part and the repulsive part such that I have at least a local minimum. And this is quite classical, as I said. Probably the first case in which this was widely discussed is crystallization, but there the potentials are typically very singular at zero — they are Lennard-Jones-type potentials, so they are not even locally integrable. So I'm not discussing those cases. In fact, I want to work with densities, with L¹ functions, so the minimal assumption I'm going to put on U is that U is locally integrable. Another very classical case is semiconductors, and in astrophysics, and also in mathematical biology for the chemotactic movement of cells, where cells go up the gradient of a certain chemical substance; there you end up with a problem like this for the particular case of the Newtonian potential. So there U is typically the Newtonian potential, either in two or in three dimensions, OK? And the classical cases are always with linear diffusion, which corresponds to taking ρ log ρ as the functional there. This is a comment I probably should have made: if you want to recover linear diffusion here, you see that m = 1 cannot be obtained so easily, but it comes as a kind of limit — if you take the limit as m goes to 1, you recover the log there, OK? So the functional will be the integral of ρ log ρ. There are other applications in which this question appears. In mean-field games, finding Cournot-Nash equilibria is one particular application of this, for potential games. Also in fractional diffusion, which you have heard so much about during the last eight days: if you take U to be a potential more singular than the Newtonian, up to local integrability, this is nothing else than the inverse of a fractional Laplacian, OK? And finally, there is also an interesting application to the eigenvalue distribution of random matrices, which is, again, a particular case of this.
Typically, it would be the log interaction, but in one dimension; it's a particular case of interest there. OK. I also want to mention that this appears in some problems in mathematical biology. One of the cases is what I said before, the chemotactic movement, and I want to mention it because it's the most classical one and shows you a bit more of the modeling issues. Quite quickly: you will typically have an equation for the evolution of the cell density — so n plays the role of the ρ I wrote before, in a sense, because you have a density of cells — and you assume the cells are attracting each other. Why? They attract each other through the interaction via this chemical substance c. So in principle you have another equation for the chemical: this c gives you the concentration of that chemical, and for it you have, in principle, this kind of reaction-diffusion equation with a source proportional to the number of cells. If you do that, and if you neglect the blue terms over there, you end up solving the inverse Laplacian of the density in order to get c. And if you write this in terms of the fundamental solution, you recover an equation of the previous form, where U is the Newtonian potential — in this case in 2D. This particular case is, of course, what is usually called the parabolic-elliptic Keller-Segel model. Typically this is done with linear diffusion, but nonlinear diffusion has also been included by many people on the modeling side, to model the size of the cells and to avoid overcrowding. Here you have just somebody playing around with a pipette of a chemical substance to which these cells are attracted — you see, that's the kind of thing you are trying to model when you have plenty of cells. Another reason I came to this question is also from mathematical biology models.
There are these models of swarming that I'm not going to enter into too much, but just to mention that this was somehow a motivation that triggered my attention. In those kinds of models — which I don't want to explain at length — what you have are large-scale patterns that come from the movement of different animals producing some coherent motion: a kind of flock, or a kind of mill, like in this fish school that you have in the picture. The interesting thing is that many of these can be explained with basic mechanisms, among them attraction and repulsion. And some very basic models there give you equations related to these ones, with different potentials. This is to say that not only the Newtonian potential — which appears in the case of chemotactic movement — is interesting; people were using other potentials, like potentials behaving like |x|, either attractive or repulsive at 0, independently of the dimension, with certain applications there. So it makes sense to look at different behaviors of the potential; you are not restricted to, say, the Newtonian case. OK, I think that's enough for the motivation; now let's get into a bit more mathematical detail. The time after lunch is over — now we get more into the math details. OK, so I will concentrate on the particular case in which both the diffusion and the potential are homogeneous. I'm going to take ρ^m as the diffusion, as I already wrote here, and let me start with the particular case in which the potential is also homogeneous: I'm going to assume that U(x) is |x|^k / k, OK? And here k, in principle, is going to be something between minus the dimension and the dimension — the dimension is d, no? Yes, OK? For me, k = 0 means the log; it's just a notation, take it like that.
The restrictions — why k is between minus the dimension and the dimension — I will explain in a couple of slides. OK, before discussing anything there, let me start by doing some computations with F[ρ]. I'm sure this computation will not surprise any of the experts in the room, but it's the first thing you have to show students who haven't seen anything like this before. So the first thing you have to do with these kinds of energies is to look at how they change under dilations, OK? Let's take a density ρ — just a given L¹ ∩ L^m density — and take any parameter λ > 0, and define the mass-preserving dilation ρ_λ(x) = λ^d ρ(λx). Then let's compute F[ρ_λ]. Why do this? Because since I assume that both the diffusion and the non-local term are homogeneous, maybe I can work the λ out of the computation. So let's see the dependence on λ of F[ρ_λ]: somehow I take a ρ and look at a kind of curve through ρ, given by these dilations, and I see how the functional behaves along it. OK, let's do it — I substitute and compute; help me with the change of variables. From the first term, I have ρ(λx) to the power m, with a λ^{dm} in front that appeared when I put in ρ_λ. Now I change variables, y = λx — calling x again what was y — so I get a λ^{−d} here. Do you agree with me? This is just a change of variables. Good. Then from the second term, I have one half the double integral. Here it's easier, because when I change variables in both x and y, I recover an x/λ and a y/λ inside, so a 1/λ comes outside to the power k: I will have λ^{−k}. Do you agree with me?
So you see, because they are homogeneous, I can pull the λ's out. And now we see something: depending on the balance between the m and the k, one term or the other will be dominant, in a sense. So it's clear that if they are equal — if I have d(m − 1) = −k — somehow I have the same homogeneity in both terms; that's the case I will call fair competition. In that case, F[ρ_λ] is, in fact, homogeneous: well, it doesn't matter what I take out; let's say I take out λ^{−k}, and I have this property in that case. While if m is such that d(m − 1) is larger than −k, then — this is an exercise for the students — you can check that in that case the function λ ↦ F[ρ_λ] has a global minimizer: there is a λ* such that you have a minimizer. I mean, your function goes from 0 — well, it depends where the minimum is, so I don't want to enter into the different cases now; it doesn't matter, there is a minimizer of the function in all cases. And you would expect — somehow, that's the way I want to think about it — that the diffusion overcomes the attraction, the possible aggregation due to attraction. That's why I call it the diffusion-dominated case. And of course, we have a third regime: when d(m − 1) is less than −k. Since the attraction term is dominant in that case — you will see why it's dominant — I will call it the aggregation-dominated case. So let me discuss these names a little. I know that I have some friends in the room who will not like some of these names for some of the regimes, but let me try to convince you that they are good names. OK. So first, the diffusion-dominated regime. Well, I'm just writing it in a slightly different way: m has to be larger than (d − k)/d, or 1 − k/d, as you prefer; I prefer to express the regime in terms of m.
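The dilation computation and the resulting trichotomy can be summarized as (my cleanup of the boardwork):

```latex
% Dilation \rho_\lambda(x) = \lambda^d \rho(\lambda x) preserves mass, and
F[\rho_\lambda]
= \lambda^{d(m-1)}\,\frac{1}{m-1}\int \rho^m \,dx
+ \lambda^{-k}\,\frac{1}{2}\iint \frac{|x-y|^k}{k}\,\rho(x)\rho(y)\,dx\,dy .
% The three regimes, according to the homogeneities:
d(m-1) = -k : \text{ fair competition}, \qquad
d(m-1) > -k : \text{ diffusion-dominated}, \qquad
d(m-1) < -k : \text{ aggregation-dominated}.
```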
So in fact, in the particular case k = 0 — you see, k = 0 means that I'm taking the log here — the condition simplifies to m > 1. And in fact, already in that case, Calvez and myself studied this some years ago, and independently, Sugiyama also did some works in that case. What we were able to show at that time is that, no matter what the initial data is, you always have L^∞ bounds on the solution of the PDE, which are uniform in time. OK. This will probably surprise some people here who know about Keller-Segel, but I will explain the difference later on. Well, for the classical Keller-Segel — let me make this comment somewhere, probably here — this corresponds to dimension 2, k = 0 and m = 1. You can just check with those numbers: it's exactly one of the cases of the fair competition, OK? Which is what I'm going to discuss here too. In the fair-competition case, I will convince you that in general you have a very similar situation to the classical Keller-Segel. So for the non-experts in the room, let me explain what happened for the classical Keller-Segel, what was known there. In fact, it was known that there is something called a critical parameter. To set this up, I'm going to put a parameter in: I'm going to change the functional and, instead of a one, put a χ here, because I want to fix the mass of the density — I'm going to take unit mass, the L¹ norm of ρ equal to 1, dealing with probability densities. So I put a parameter χ in front of the attraction. Then, for this classical Keller-Segel, it was known that χ = 8π is the critical parameter.
In the sense that if χ is larger or smaller than this critical parameter, different things happen in this classical Keller-Segel case of fair competition. If χ is less than 8π, it was known that there is global existence of solutions, while if χ is larger than 8π, the generic behavior is blow-up in finite time. For the critical value, plenty of things are known by now, but let me just say that there is global existence of solutions, and they blow up, but in infinite time, for some initial data. It's more complicated than this, but let me just say these three things today. In fact, in the critical case you also have infinitely many stationary states — sorry, it's probably a bit small; you will see it very soon in the slides. So you have infinitely many stationary states, and some of them have a basin of attraction too, apart from the solutions that blow up in infinite time. So the situation is very complicated at the critical value. The important thing is this dichotomy in terms of the critical parameter: if the parameter is small — if the attraction is small — then you have global existence; but if the attraction is large, then you have blow-up in finite time. And if the attraction is tuned to the very particular critical value, then you have exact compensation and you have stationary states. OK, so I will convince you — and I already advanced some of the results — that this is exactly what happens for the whole fair-competition case, every time you have this relation, as soon as k is negative at least. And finally, let me mention that you also have this aggregation-dominated case. There, again, some particular cases were known: essentially, the two works I mention there are, again, for the Newtonian case — taking k = 0, or k corresponding to the Newtonian potential, in several dimensions.
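The 8π dichotomy can be read off from the second-moment (virial) computation done on the board later in the talk. With the convention U(x) = (1/2π) log|x|, a coefficient χ in front of the attraction, and unit mass (this normalization is my assumption; other conventions shift the constant), one gets:

```latex
% d = 2, m = 1, U(x) = \tfrac{1}{2\pi}\log|x|, \int \rho = 1:
\frac{d}{dt}\int \frac{|x|^2}{2}\,\rho(t,x)\,dx \;=\; 2 - \frac{\chi}{4\pi},
% so the second moment grows linearly for \chi < 8\pi (spreading, consistent
% with global existence), decreases to zero in finite time for \chi > 8\pi
% (forcing blow-up), and is conserved exactly at the critical value \chi = 8\pi.
```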
And then they were able to show that, independently of the parameter chi, for any value of chi you may have either blow-up for certain initial data or global existence, meaning that there is no critical chi: both things can coexist for any chi. OK? So I like to call it the aggregation-dominated regime because it's not telling you that everything aggregates; it's telling you that the diffusion is not strong enough to avoid aggregation. OK? So don't think of aggregation-dominated as always giving you blow-up. No. It just tells you that for any value of the parameter chi there are always initial data for which you blow up in finite time. But this can coexist with other initial data which are spreading out, and those solutions are global. OK? Again, the two works I mentioned there are particular cases. So now, what I'm going to show you in the last 10 minutes of this first part is a bit about the fair competition case. Let me see if I can finish that, and then I will concentrate on the diffusion-dominated case. OK, so let me see. There are two computations I can do, even before starting with the fair competition case, which are interesting; I can do them on the board to clarify a bit more these different regimes. Let me rewrite the PDE here. So, d rho / dt equals the divergence of rho times the gradient of rho to the m minus 1, in fact with the constant m over m minus 1 here, plus the divergence of rho times the gradient of the modulus of x to the k divided by k, convolved with rho. The constants are not that important at the end of the day. OK, good. So let me do a computation here. Remember that this is also the Laplacian of rho to the m plus the divergence of rho times the gradient of |x|^k over k convolved with rho; this is also so that you understand a bit what happens with the aggregation-dominated case. So let's compute, formally. The constant disappears in the first term and gives you the Laplacian of rho to the m.
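Written in display form, the equation on the board reads as follows (a reconstruction of the board computation; with this sign convention the kernel |x|^k / k is attractive for every k in (-d, d)):

```latex
% Degenerate Keller-Segel / aggregation-diffusion equation:
\partial_t \rho
  = \nabla \cdot \Big( \rho\, \nabla \Big( \frac{m}{m-1}\, \rho^{\,m-1}
      + W_k * \rho \Big) \Big)
  = \Delta \rho^{\,m} + \nabla \cdot \big( \rho\, \nabla ( W_k * \rho ) \big),
\qquad W_k(x) = \frac{|x|^{k}}{k}, \quad k \in (-d, d).
```

This is the formal gradient flow of the free energy F[rho] = 1/(m-1) times the integral of rho^m, plus one half the double integral of W_k(x-y) rho(x) rho(y), which is the functional appearing in the second-moment computation below.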
In fact, I don't have the constants there, but follow this one; this one is correct. Good. So let's compute the evolution of the second moment for this equation. Formally, I put the derivative with respect to t inside, if I can do this, and now let's integrate by parts, writing it directly from the equation. The Laplacian in the first term goes directly onto x squared over 2 and gives me the dimension d times the integral of rho to the m. So that's the first term. The second term gives me minus the integral of x, because the gradient of x squared over 2 gives me x, and then I integrate by parts already. And I have here rho, and then the gradient, so this is dotted with the gradient of u at x minus y. So let's write what it is. Now let's work just a little bit here. This is, in fact, x dotted with x minus y, times the modulus of x minus y to the k minus 2, yes? Times rho of x, rho of y. I'm just computing the gradient of the modulus of x to the k over k, which at x minus y gives me x minus y times the modulus of x minus y to the k minus 2. Now I keep this the same. What I'm going to do here is to symmetrize: I change x by y here, OK, since this kernel is symmetric. There I will get y minus x, and here I get y. So I'm going to replace this by one half of itself plus one half of the symmetrized version. You can convince yourself that what you get is minus one half the double integral of x minus y, dotted with x minus y, times the modulus of x minus y to the k minus 2, rho of x, rho of y. You agree with me? So then here you get x minus y dotted with x minus y, which is the modulus of x minus y squared, and you can combine it with the power k minus 2. And finally what you get is d times the integral of rho to the m, minus one half the double integral of the modulus of x minus y to the k, rho of x, rho of y. And you see that you get the two terms of the free energy. And now, if you want, I'm going to rewrite this. And believe me, if you do the algebra, this is d times m minus 1, times one over m minus 1 the integral of rho to the m, plus, yes, here, OK.
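The second-moment computation just carried out on the board, assembled in one display (a reconstruction of the steps):

```latex
% Evolution of the second moment along solutions (formal computation):
\frac{d}{dt} \int \frac{|x|^2}{2}\, \rho\, dx
  = d \int \rho^{\,m}\, dx
    - \iint x \cdot (x - y)\, |x - y|^{k-2}\, \rho(x)\, \rho(y)\, dx\, dy
  = d \int \rho^{\,m}\, dx
    - \frac{1}{2} \iint |x - y|^{k}\, \rho(x)\, \rho(y)\, dx\, dy ,
```

where the last equality uses the symmetrization in x and y, which replaces the factor x dotted with (x - y) by one half of (x - y) dotted with (x - y), that is, one half of the modulus of x minus y squared.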
So here you will get, let me write the end of the computation, plus d times m minus 1, times one half the double integral of the modulus of x minus y to the k over k, rho of x, rho of y. Yes, I think there should be a d. So in fact you can see from here, again, the exponent appearing, because you choose exactly this relation: d times m minus 1 equals minus k. So what you see is that this gives you d times m minus 1, times F of rho. So this computation tells you that, again, in this particular case, in the fair competition case, if you take the evolution of the second moment, you recover the free energy times a constant, OK? So we will see what the implications of this are. So let's leave this computation there for now, and let's see what its implication is. Let's concentrate, then, on the fair competition case, where d times m minus 1 equals minus k, and let me tell you the results you can prove there. The first thing is that, of course, you have this property under dilations, as I already said here: if you dilate, taking rho_lambda of x equals lambda to the d times rho of lambda x, then both terms of the free energy scale in the same way, like lambda to the d times m minus 1, which equals lambda to the minus k. So if you have stationary states, or global minimizers, it doesn't matter, they will have zero energy, and they will correspond to the optimizers of the zero energy for F. I will explain where they come from later; so, when do you really have them? But a priori, from here, you just see that your minimizers will have zero energy. This is exactly what I mentioned was known for the case of the classical Keller-Segel. Plenty of people were involved in parts of this dichotomy; I hope not to have missed an important person, and if I missed someone, I'm sorry, they should be included. So you have the dichotomy I mentioned. And now let me just point out that when k is negative, in the range, remember, k is always between minus the dimension and the dimension. So if k is negative, between minus the dimension and zero, then just because of the relation d times m minus 1 equals minus k, m is between 1 and 2.
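Here is a quick numerical sanity check of the identity just derived (an editorial illustration, not from the lecture; the Gaussian test density, the grid, and the choice k = -1/2 in dimension d = 1 are my own). With the fair-competition exponent m = 1 - k/d, the quantity d times the integral of rho^m minus one half the double integral of |x - y|^k rho rho should coincide with d(m - 1) times F[rho], where F[rho] = 1/(m-1) times the integral of rho^m plus one half the double integral of (|x - y|^k / k) rho rho:

```python
import numpy as np

# Editorial sanity check of the fair-competition virial identity in d = 1.
# All numerical choices (Gaussian density, grid, k = -1/2) are illustrative.

d = 1
k = -0.5
m = 1 - k / d          # fair-competition relation d*(m - 1) = -k, so m = 1.5

# Unit-mass Gaussian test density on a grid.
x = np.linspace(-8.0, 8.0, 2001)
h = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Diffusion part: integral of rho^m (simple Riemann sum).
int_rho_m = h * np.sum(rho**m)

# Interaction part: double integral of |x - y|^k rho(x) rho(y).
# Offset the y-grid by h/2 to stay away from the (integrable) singularity at x = y.
y = x + h / 2
K = np.abs(x[:, None] - y[None, :])**k
interaction = h * h * rho @ K @ rho

# Free energy F[rho] = 1/(m-1) int rho^m + 1/2 iint (|x-y|^k / k) rho rho.
F = int_rho_m / (m - 1) + interaction / (2 * k)

# Virial right-hand side: d int rho^m - 1/2 iint |x-y|^k rho rho.
virial = d * int_rho_m - interaction / 2

# With d*(m - 1) = -k the two expressions agree identically.
print(abs(virial - d * (m - 1) * F))   # prints a number at round-off level
```

The agreement is algebraic: it holds for whatever quadrature values are computed, so the printed difference sits at round-off level, and it breaks as soon as m is moved away from 1 - k/d.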
So your diffusion is nonlinear, in the sense of the porous medium case. While if k is positive, between zero and the dimension, here you get a diffusion exponent m which is between zero and one, so fast diffusion. This, if you want, is the reason for the bound from above by the dimension: not to have an m which is less than zero. The bound from below by minus the dimension is there precisely so that in this range the interaction kernel is locally integrable. So, should I stop now? It's the right time to stop. So let me stop here, and then I will continue discussing the fair competition case. Thanks.