So welcome back, everyone, to JSS. This will be the last JSS of the summer. Today I'm happy to have Giovanni here to talk about some analysis.

Thank you. So, as I mentioned in the abstract, I have a concrete problem I want to talk about today, but that is mostly an excuse to talk about other things. I'm going to write down the problem and state the theorem that I want to prove, but then we're going to forget about the problem and talk about tools that can be used elsewhere. At the end we'll come back and say a little bit more about the problem.

So the PDE I want to solve is the following one. Since I'm only interested in problems that are interesting from the physical point of view, I'm going to assume that the dimension $n$ is either 1, 2, or 3, and that $\Omega \subset \mathbb{R}^n$ is open and bounded, with a sufficiently nice boundary: anything that allows you to solve an elliptic equation is fine. Then we fix a parameter $\alpha \in [0, 1]$. The problem I want to solve is the system
$$(1-\alpha)\, u_t = \Delta w, \qquad w = \alpha\, u_t - \Delta u - f(u),$$
where $u = u(x, t)$ is a function of two variables, $x$ the space variable and $t$ time, $u_t$ is the derivative with respect to time, and $\Delta$ is the Laplacian in the space variable. Here $f$ is a given function of $u$, and I'm going to give an explicit formula for it in a second. I want these two equations to be satisfied for $x$ in $\Omega$ and $t > 0$. Then we have the boundary conditions $u = w = 0$.
These hold for $x$ on the boundary of $\Omega$ and $t > 0$. And we also have an initial condition: at time 0, $u(x, 0) = u_0(x)$. So the unknowns are $u$ and $w$, and the data of the problem are the set $\Omega$, the initial condition $u_0$, and the function $f$.

What I'm going to assume on $u_0$ and $f$ is the following. The first assumption is on $f$: $f$ has to be a polynomial in $u$ of odd degree, $f(u) = \sum_{j=0}^{2N-1} b_j u^j$, and I want the leading coefficient $b_{2N-1}$ to be negative. I'm also going to require that $N = 2$ when the dimension is equal to 3. These assumptions are not very important from the analytic point of view; they are there because of the physical meaning of the problem. As you might have guessed, the reason why I want the leading coefficient to be negative is this: take $N = 2$; a polynomial that satisfies the assumption is $f(u) = u - u^3$, which is, up to a sign, the derivative of the double-well potential. This is the typical expression you should keep in mind for $f$.

The next thing is the condition on the initial datum $u_0$. I want $u_0$ to be in $H^1_0(\Omega)$: this is the space of square integrable functions whose distributional derivative is also in $L^2$ and which vanish on the boundary of $\Omega$ in the sense of traces.

The theorem that I want to prove is a well-posedness theorem: under these two assumptions, I can guarantee the existence and the uniqueness of a solution to the problem.

Someone asks: if I plug the second equation into the first, I get a single equation for $u$, right? Why do I want two equations? That's exactly what I'm going to talk about in a second. I should also mention that the solution we get will be classical; what "classical" means precisely is something I'm going to define at some point.
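To keep the model nonlinearity concrete, here is a tiny sketch; the function $f(u) = u - u^3$ is the example from the talk, while the particular values I evaluate are my own illustration.

```python
# The model nonlinearity from the talk: f(u) = u - u^3, an odd-degree
# polynomial (N = 2, so degree 2N - 1 = 3) with negative leading coefficient.
def f(u):
    return u - u**3

# Its zeros -1, 0, 1 are the two wells and the barrier of the double-well
# picture, and for |u| > 1 the cubic term dominates, so u * f(u) < 0.
print([f(-1.0), f(0.0), f(1.0)])   # [0.0, 0.0, 0.0]
print(f(2.0), 2.0 * f(2.0))        # -6.0 -12.0
```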
So yes, after substituting you almost don't see the two-equation structure, but it's worth spending some time looking at the system as it is, because you can think of $\alpha$ as interpolating between the equations you get at $\alpha = 0$ and at $\alpha = 1$.

When $\alpha = 0$, you can solve for $w$ in the second equation, $w = -\Delta u - f(u)$, and you can put this into the first one to get $u_t = -\Delta^2 u - \Delta f(u)$, which is the Cahn-Hilliard equation.

When $\alpha = 1$, the first term disappears, so we get that the Laplacian of $w$ is 0. We also have that $w$ is 0 on the boundary of $\Omega$, and so by uniqueness of solutions to the Laplace equation we get that $w \equiv 0$. And if $w$ is identically 0, the only equation we are left with is $u_t = \Delta u + f(u)$; so at $\alpha = 1$ we have a semilinear heat equation.

So this system interpolates between a second-order semilinear equation and a fourth-order semilinear equation, right? It's fourth order because I have the composition of two Laplacians, so I have four spatial derivatives. This is another reason why this problem was considered in the first place. The second-order problem is very well understood, while the fourth-order one is more complicated, so there might be hope to get something about the fourth-order problem by taking the limit as $\alpha$ goes to 0 of things that you can say about the problem for positive $\alpha$. That's the vague idea.

Someone asks: this notion of interpolating that you have is very vague, right? It is very vague. What is this interpolation supposed to represent? It does not represent anything by itself; it's just a system that depends on a parameter. When $\alpha$ is 0, you get an equation that is well known, and when $\alpha$ is 1, you get another one. Is there any reason why you would not consider some other system instead? I would only consider such a system if there was a reason, and as I said, there is a reason here: it has a physical meaning.
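As a quick sanity check on the $\alpha = 1$ limit, here is a minimal finite-difference sketch of the semilinear heat equation $u_t = u_{xx} + u - u^3$ in one space dimension with zero Dirichlet boundary conditions. The grid, time step, and initial datum are my own illustrative choices, not from the talk.

```python
import numpy as np

# Explicit Euler for u_t = u_xx + f(u), f(u) = u - u^3, on (0, 1),
# with u = 0 at both endpoints (the alpha = 1 limit of the system).
J = 100                        # number of interior grid points
dx = 1.0 / (J + 1)
dt = 0.4 * dx**2               # below the dx^2 / 2 stability threshold
x = np.linspace(0.0, 1.0, J + 2)
u = 0.5 * np.sin(np.pi * x)    # smooth initial datum vanishing on the boundary

def f(u):
    return u - u**3

for _ in range(2000):
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2   # discrete Laplacian
    u[1:-1] = u[1:-1] + dt * (lap + f(u[1:-1]))
    u[0] = u[-1] = 0.0         # enforce the Dirichlet boundary conditions

print(float(np.max(np.abs(u))))  # stays bounded (the wells sit at +-1)
```

For the fourth-order $\alpha = 0$ limit, the same explicit approach would need a much harsher time-step restriction of order $dx^4$, which is one concrete sense in which that problem is harder.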
You can derive this system, with some work, from the phase-field equations, which I don't really know much about, but it comes up in various places. As I said, though, this is not the aim of my talk: I have a problem that I can solve with the techniques I want to present. We have a PDE, and we want to regard the PDE as an ODE, but in a Banach space.

To give another motivation for the tools I'm going to introduce, let's first consider the case in which the Banach space is the Euclidean space $\mathbb{R}^m$, and we're given an $m \times m$ matrix $A$. We all know that if you look at the differential equation $u'(t) = A u(t)$, the solution to this problem is $u(t) = e^{tA} u(0)$. The game we want to play is to have a way of defining something that, in a sense, behaves like the exponential of a matrix, but for an operator in our case. The way you define the exponential of a matrix is already a good way of defining this object for bounded operators: you define $e^{tA}$ using the power series, $e^{tA} = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}$, and it's clear that we can extend this definition to linear bounded operators in the same way. So if we have a generic Banach space $X$ and the problem $u'(t) = A u(t)$, $u(0) = u_0$, with $A$ bounded, we can define the operator $e^{tA}$ by this series, and if you apply this operator to the initial datum $u_0$, you can show that $u(t) = e^{tA} u_0$ actually solves the problem.

And it's interesting to notice that as soon as you can make sense of something like this, you can actually solve much more complicated problems. For instance, since we're interested in semilinear problems, we can also consider the equation $u'(s) = A u(s) + f(s, u(s))$, and in this case the solution will be given by the variation of constants method, right? We can define $u(t) = e^{tA} u_0 + \int_0^t e^{(t-s)A} f(s, u(s))\, ds$. So there is no real problem in the case where $A$ is a linear bounded operator. But of course the interesting case, as you all know, is the complicated one: we want to solve a PDE.
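The power-series definition of $e^{tA}$ is easy to try out in the matrix case. A minimal sketch, with a concrete $2 \times 2$ matrix chosen by me (not in the talk) so that the exponential is known in closed form:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Truncation of e^{tA} = sum_{k>=0} t^k A^k / k! after `terms` terms."""
    result = np.zeros_like(A)
    term = np.eye(A.shape[0])            # k = 0 term: the identity
    for k in range(terms):
        result = result + term
        term = term @ A * (t / (k + 1))  # next term: t^{k+1} A^{k+1} / (k+1)!
    return result

# Rotation generator: the exact exponential is a rotation matrix.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
E = expm_series(A, 1.0)
print(E)  # approximately [[cos 1, sin 1], [-sin 1, cos 1]]
```

The same few lines make sense for any bounded operator you can apply repeatedly; $u(t) = e^{tA} u_0$ then solves $u' = Au$, and the variation of constants formula reduces the semilinear problem to an integral equation in the same object.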
Our operator $A$ is going to be some differential operator, and differential operators are typically unbounded, right? So we have to be able to say something about the case in which $A$ is unbounded. This is the point of the definitions I'm going to give. I should say that in the following we're always going to work with a Banach space $X$ and an operator $A$ with domain $D(A) \subset X$, and we assume that $A$ is closed.

The first definition is the following. Consider a family $\{T(t)\}_{t \ge 0}$ of bounded linear operators from $X$ to $X$. We want $T(0)$ to be the identity map on $X$, so $T(0)u = u$ for every $u$ in $X$. We want $T$ to satisfy a semigroup law, so something of the form $T(t+s) = T(t)\, T(s)$, and we want this for every $t$ and $s$ positive. We also would like the map that takes $t$ to $T(t)x$ to be continuous, for every $x$. A family of operators that satisfies these properties is called a semigroup, and this is the object that is going to play the same role as the exponential of the matrix.

So the question might be: when I have the exponential of a matrix, I also have the matrix I started with, right? So is there anything that looks like this matrix $A$ and is related to a given semigroup? The answer is yes: it is what is called the infinitesimal generator of the semigroup. So, definition. If you have $e^{tA}$, how do you get $A$? You take a derivative and you evaluate it at 0, right? So we want to mimic the same thing. The way we do this for an unbounded operator is to define the domain of $A$ first: we define $D(A)$ to be the set of all $x$ in $X$ for which the limit as $t \to 0^+$ of $\frac{T(t)x - x}{t}$ exists. Once we have the set of points for which we can take this limit, we define $A x$ to be exactly this limit.
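The defining properties, and the recovery of the generator, can be checked numerically in the matrix model $T(t) = e^{tA}$. The matrix $A$ and the vector $x$ below are my own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm    # matrix exponential, our model semigroup

A = np.array([[-1.0, 2.0], [0.0, -3.0]])

def T(t):
    return expm(t * A)

# (i) T(0) is the identity map on X = R^2
print(np.allclose(T(0.0), np.eye(2)))                 # True
# (ii) the semigroup law T(t + s) = T(t) T(s)
print(np.allclose(T(0.3 + 0.7), T(0.3) @ T(0.7)))     # True
# (iii) the generator: (T(h)x - x) / h -> A x as h -> 0+
x = np.array([1.0, -1.0])
h = 1e-6
print(np.max(np.abs((T(h) @ x - x) / h - A @ x)))     # O(h), small
```

In the matrix case the limit in (iii) exists for every $x$; the whole point of the domain $D(A)$ in the definition above is that for an unbounded operator it need not.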
So it is exactly the same thing we do in the matrix case, but you have to be a little bit careful, because there is no guarantee that this limit is going to exist for every $x$. The operator $A$ is then called the generator of the semigroup. I should mention that usually in applications you have the PDE, which you would like to write as an ODE, so ideally you start from $A$ and want the map $T(t)$: given $A$, you want to be able to find the semigroup that is generated by $A$. And this is not always possible; it's not true that for every operator $A$ there is a semigroup for which $A$ is the generator. So we have to restrict our attention to a smaller class of operators for which we can say something more. This leads to the following definition.

We say that $A$ is sectorial if the resolvent set of $A$, that is, the subset of the complex plane consisting of the $\lambda$ for which $\lambda I - A$ is invertible, contains a sector of the complex plane with opening angle $\delta + \pi/2$. So for instance, a sector $S_\omega$ would look like this: we have angle $\omega$ above and angle $\omega$ below the positive real axis, and $S_\omega$ is the set of $\lambda \neq 0$ with $|\arg \lambda| < \omega$. In our case $\delta$ is positive and we're adding $\pi/2$, so it's going to look like this: the opening angle is $\delta + \pi/2$, strictly larger than the right half-plane. And this whole set $S_{\delta + \pi/2}$ is contained in the resolvent set of $A$.

This is a way of saying that the spectrum of $A$ is somewhat confined to the complementary sector. The spectrum of $A$ is essentially the set of points $\lambda$ for which $\lambda I - A$ is not invertible, so it's like saying that the bad set is confined to something we have control over. And what plays a really important role is the extra angle $\delta$ beyond the half-plane that you get here; this is going to be clear from the next definition. Oh, I should also say that we need an estimate on the resolvent.
So, this is still part of the definition: for every $\epsilon > 0$ there exists a constant $M_\epsilon$ such that $\|(\lambda I - A)^{-1}\| \le \frac{M_\epsilon}{|\lambda|}$, and this is true for every $\lambda \neq 0$ in $S_{\delta + \pi/2 - \epsilon}$. So you have a slightly smaller sector: you want to stay $\epsilon$ away from the boundary of the big one. That finishes the definition.

If $A$ is sectorial, we define the family $T(t)$ as follows: it is the identity when $t$ is equal to 0, and for $t \neq 0$,
$$T(t) = \frac{1}{2\pi i} \int_\gamma e^{\lambda t}\, R(\lambda, A)\, d\lambda,$$
where $R(\lambda, A) = (\lambda I - A)^{-1}$ is the resolvent, and this is for all $t$ in $S_\delta$. The reason for this is that if you look at the picture, any $t$ in $S_\delta$ has positive real part, right? And what I want from $\gamma$ is to do something like this: I want $\gamma$ to come in from infinity in the region where the real part of $\lambda$ is negative, go around the origin, and then go back to infinity over there. In this way I have that the real part of $t$ is positive, but the real part of $\lambda$ along $\gamma$ is eventually negative, so the real part of $\lambda t$ is going to be negative, and I get the convergence of the integral. Okay. So you come up with this formula. And the claim is that the family you build in this way satisfies the properties of a semigroup.
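The contour formula can be tested in the simplest sectorial example: $X = \mathbb{C}$ and $A$ the scalar $a = -1$, so $R(\lambda, A) = 1/(\lambda - a)$ and the integral should reproduce $e^{ta}$. The contour angle, truncation radius, and step count below are my own illustrative choices.

```python
import numpy as np

# Dunford integral T(t) = (1/(2 pi i)) int_gamma e^{lambda t} R(lambda, A) d lambda
# for the scalar A = a < 0. gamma consists of the two rays arg(lambda) = +-theta
# with pi/2 < theta < pi, run from the lower ray through 0 up the upper ray,
# so Re(lambda) -> -infinity along the contour and the integrand decays.
a, t = -1.0, 1.0
theta = 3 * np.pi / 4
R, N = 40.0, 200_000            # truncation radius and panels per ray

def trap(values, h):
    # plain trapezoidal rule on an evenly spaced grid
    return h * (0.5 * values[0] + values[1:-1].sum() + 0.5 * values[-1])

s = np.linspace(0.0, R, N + 1)  # radial parameter along each ray
h = R / N
lam_up = s * np.exp(1j * theta)       # upper ray, traversed outward
lam_dn = s * np.exp(-1j * theta)      # lower ray, traversed inward
f_up = np.exp(lam_up * t) / (lam_up - a) * np.exp(1j * theta)
f_dn = np.exp(lam_dn * t) / (lam_dn - a) * np.exp(-1j * theta)
val = (trap(f_up, h) - trap(f_dn, h)) / (2j * np.pi)

print(abs(val - np.exp(a * t)))  # tiny: the contour integral recovers e^{ta}
```

Closing the contour to the left and using the residue at $\lambda = a$ is exactly the scalar shadow of the convergence argument sketched above; for a genuine sectorial operator the same picture runs with $R(\lambda, A)$ in place of $1/(\lambda - a)$, the resolvent estimate supplying the decay.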