So let us try the exercise that I gave you yesterday: u u_{x1} + u_{x2} = 1. And recall our notation, let me write it here: sum_{i=1}^n b_i(x, u(x)) du/dx_i = f(x, u(x)) in Omega, with u = u_bar on Sigma. We are in the quasi-linear case, x in Omega, Omega in R^n. So the exercise is this: solve this equation coupled with the initial condition u = u_bar. Here n = 2, Sigma is the segment {x1 = x2}, and u_bar(sigma) = sigma/2 for any sigma. Sigma is parametrized by a map phi: (0, 1) -> R^2, taking sigma into (sigma, sigma); this is a regular parametrization of the segment. And we have already checked that the transversality condition is satisfied, so we can start from the characteristic system, which I think we already wrote yesterday. Let me write it down here, where dot means d/ds: x_dot = b(x, y), y_dot = f(x, y), x(0) = phi(sigma), y(0) = u_bar(phi(sigma)). This is the system of characteristics, and the equations are coupled together. They give a solution depending on what? On s and sigma. Actually, as I said, one could also keep the dependence on the initial time, which here is 0, so I do not write that dependence. So it is just x(s, sigma) and y(s, sigma); these are the local solutions, and the solution of the PDE is locally u(x) = y(s(x), sigma(x)), by the change of variables that we discussed in the last lectures. So now, what are b and f? I said n = 2, and f(x, y) = 1 here.
And b(x, y) = (b1(x, y), b2(x, y)) = (y, 1): the coefficient of the first derivative is just y, and then 1. Because we are adding a new variable y to this space, and on the graph y = u(x). So b is actually independent of x; the vector b depends only on the vertical coordinate y. This says that x1_dot = y, x2_dot = 1, and y_dot = 1. And x1(0) = sigma, x2(0) = sigma, since phi(sigma) = (sigma, sigma). And y(0)? y(0) = u_bar(phi(sigma)) = sigma/2. Maybe we should write u_bar(phi(sigma)) to be extremely precise, because we think of u_bar as defined in the physical space: a point of the segment is the image under the embedding phi, and u_bar evaluated at phi(sigma) is sigma/2. I think we already found the solutions yesterday, solving this system explicitly, which is very easy now:

x1(s, sigma) = s^2/2 + (sigma/2) s + sigma,
x2(s, sigma) = s + sigma,
y(s, sigma) = s + sigma/2.

This is what we found yesterday, simply solving the system. And then we have the problem of the change of variables, which is the following. We have the space of parameters: the interval (0, 1) for sigma and (-delta, delta) for s, a strip of width 2 delta. And then we have the map taking (s, sigma) into a neighborhood of Sigma.
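Since the system is explicit, the closed forms above can be sanity-checked numerically. This is a check of my own, not part of the lecture: central finite differences confirm that x1, x2, y solve x1' = y, x2' = 1, y' = 1 with the stated data on phi(sigma) = (sigma, sigma).

```python
# Verify the explicit solution of the characteristic system found in the lecture.
def x1(s, sg): return s * s / 2 + (sg / 2) * s + sg
def x2(s, sg): return s + sg
def y(s, sg):  return s + sg / 2

def d_ds(f, s, sg, h=1e-6):
    """Central finite-difference approximation of (d/ds) f(s, sigma)."""
    return (f(s + h, sg) - f(s - h, sg)) / (2 * h)

for sg in (0.2, 0.5, 0.9):
    # initial conditions on the parametrized segment phi(sigma) = (sigma, sigma)
    assert abs(x1(0, sg) - sg) < 1e-12
    assert abs(x2(0, sg) - sg) < 1e-12
    assert abs(y(0, sg) - sg / 2) < 1e-12
    for s in (-0.3, 0.0, 0.4):
        assert abs(d_ds(x1, s, sg) - y(s, sg)) < 1e-6   # x1' = y  (= b1)
        assert abs(d_ds(x2, s, sg) - 1.0) < 1e-6        # x2' = 1  (= b2)
        assert abs(d_ds(y, s, sg) - 1.0) < 1e-6         # y'  = 1  (= f)
print("characteristic system verified")
```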
Given s and sigma, this map gives us the point x(s, sigma), and it can be inverted, taking a point x in the neighborhood into (s(x), sigma(x)). What we are using in the expression of the solution is exactly this inverse: at any point x, the solution is the value of y evaluated at (s(x), sigma(x)). So this is the situation: we have to invert the system, essentially. Given x1 and x2, find s and sigma; this is the point. One equation is nonlinear, but let us try. Maybe we can subtract:

x1 - x2 = s^2/2 + (sigma/2) s + sigma - (s + sigma) = (s/2)(s + sigma - 2).

And since s + sigma = x2, this is (s/2)(x2 - 2), and this already gives s:

s(x1, x2) = 2(x1 - x2)/(x2 - 2) = 2(x2 - x1)/(2 - x2).

Therefore sigma can be found from the second equation: sigma(x1, x2) = x2 - s(x1, x2). And so we have inverted the map. And then, since y = s + sigma/2, our solution is u(x) = s(x) + (1/2) sigma(x), namely

u(x1, x2) = 2(x2 - x1)/(2 - x2) + (1/2)(x2 - 2(x2 - x1)/(2 - x2)).

This, if there are no mistakes, is the solution of our PDE, of our Cauchy problem.
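The final formula can also be verified directly. Again a check of my own, not from the lecture: the function u below is exactly the formula just derived, and finite differences confirm both the PDE u u_{x1} + u_{x2} = 1 (away from the singular line x2 = 2) and the Cauchy datum u(sigma, sigma) = sigma/2.

```python
# Check that u(x1, x2) = s(x) + sigma(x)/2, with s(x) = 2(x2 - x1)/(2 - x2)
# and sigma(x) = x2 - s(x), solves u u_{x1} + u_{x2} = 1 with u = sigma/2
# on the segment x1 = x2.
def u(a, b):
    s = 2 * (b - a) / (2 - b)   # inverted change of variables (b != 2)
    sigma = b - s
    return s + sigma / 2

h = 1e-6
for (a, b) in [(0.3, 0.5), (0.7, 0.4), (0.1, 0.9)]:
    ux1 = (u(a + h, b) - u(a - h, b)) / (2 * h)
    ux2 = (u(a, b + h) - u(a, b - h)) / (2 * h)
    assert abs(u(a, b) * ux1 + ux2 - 1.0) < 1e-5     # the PDE
for sg in (0.2, 0.6, 0.95):
    assert abs(u(sg, sg) - sg / 2) < 1e-12           # the Cauchy datum
print("explicit solution verified")
```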
So this is our solution. Maybe I can give you a homework; let me erase everything now. For the moment, philosophically, what we have done is the following. We have to solve a problem of this sort; let me shift to the more familiar notation with t and x:

u_t + b(t, x, u) . grad u = f(t, x, u),

where x is in R^n, small n, t is in R, and grad is the spatial gradient, only with respect to x. And we assign a condition at time 0: u(0, .) = u_bar. So this is the more familiar notation, instead of using only x as before. The philosophy was: how to solve this? Well, we solve a system of ODEs, ordinary differential equations, and, maybe slightly implicitly, we can find u(t, x) via the solution of that system. This is the idea. Everything is OK provided the data is non-characteristic, f is smooth enough, the b_i are smooth enough, Sigma, here a hyperplane, is smooth enough, and u_bar is smooth enough. And the system of characteristics is a system of ODEs with a given initial condition. Now, the homework could be the following. Let me introduce a slightly more involved notation: let X(s; t, x) be the solution of the first block of the characteristic system, X_dot = b(X). And assume now, for simplicity, that we are in the linear case: b does not depend on u. So, hypothesis: b does not depend on u.
Therefore this is a linear equation, and f = 0, so it is really a linear equation; and assume everything is smooth. Assume also, if you want, that b is independent of time, so really just b = b(x); and dot still means d/ds. And also assume n = 1: just one space dimension, and time is one dimension. So: small n = 1, b = b(x), f = 0. And now, however, we want to solve with an initial condition given at a time t instead of 0. So a final time, capital T say, is given, and I look at future points, but the initial time is now t, not 0. Let me denote the solution by X(s; t, x): x is given, t is given as initial condition, s is the one-dimensional variable of the function, s >= t, and the solution of course depends on s. This we know; but now we have a new symbol, as I told you, because I want to record also the initial time, which is not 0 now but t. It is the usual object, just a new symbol. So the homework: show that the function

u(t, x) = u_bar(X(T; t, x))

solves... well, what do you believe this solves? What do you expect? We have the following picture: time, remember, is the first variable, and for convenience it is always vertical in our pictures. So time is here, x is here; capital T is up here, and t is below. Now, what is inside the u_bar? Well, I am solving the system of characteristics starting from time t, I go into the future up to capital T, and I evaluate the trajectory at that final time.
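To see this construction concretely, here is a worked instance with data of my own choosing (they are assumptions for illustration, not the lecturer's): b(x) = x, whose characteristic flow is X(s; t, x) = x * exp(s - t), and u_bar = sin. Finite differences then confirm what the recipe u(t, x) = u_bar(X(T; t, x)) produces in this case.

```python
import math

# Assumed data: b(x) = x, u_bar = sin, final time T = 1.
T = 1.0
u_bar = math.sin

def X(s, t, x):
    return x * math.exp(s - t)   # solves X' = X = b(X), X(t) = x

def u(t, x):
    return u_bar(X(T, t, x))     # evaluate the trajectory at the final time T

h = 1e-6
for (t, x) in [(0.0, 0.5), (0.3, -1.2), (0.8, 2.0)]:
    ut = (u(t + h, x) - u(t - h, x)) / (2 * h)
    ux = (u(t, x + h) - u(t, x - h)) / (2 * h)
    assert abs(ut + x * ux) < 1e-5           # u_t + b(x) u_x = 0 with b(x) = x
for x in (-2.0, 0.1, 1.5):
    assert abs(u(T, x) - u_bar(x)) < 1e-12   # datum attained at the final time T
print("final-value transport problem verified")
```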
So I start at time t, follow my trajectory, and go up to capital T. So now, what do you expect about this u? In some sense I am really reversing time. Because last time we had the following situation: if I start at time 0, then my solution u equals u_bar on the initial slice, and I go forward; there I was going back along characteristics, in some sense, and now I am going forward. So what do you think? Well, the exercise consists in proving that this u solves the usual PDE

u_t + b(x) u_x = 0,

and what? The point is the condition: u(T, x) = u_bar(x). So this is interesting: it is a PDE not with an initial condition but with a final condition, a different kind of problem with respect to what we have said up to now. I am pretending to solve the PDE given the condition u_bar at the end of the time interval, and I am solving below, given the value there. So it is different from what we have done. The exercise consists in proving this. It is not so easy, but it doesn't matter. So this is the homework. Then I think I also left you another exercise yesterday, to prove a claim. Let me check the claim now. The claim was that, in the general form with the previous notation, u(x) = y(s(x), sigma(x)) solves the PDE. So, first of all: (1) if x is on Sigma, then s(x) = 0, so u(x) = y(0, sigma(x)), which by construction equals u_bar(phi(sigma(x))). This simply says that the function u defined in this way satisfies the initial condition. Next, (2): let us compute our PDE, which is sum_{i=1}^n b_i(x) du/dx_i. So what is this? We have to compute this sum.
If I differentiate u(x) = y(s(x), sigma(x)) with respect to x_i, I have

du/dx_i = (dy/ds)(s(x), sigma(x)) ds/dx_i + sum_{k=1}^{n-1} (dy/dsigma_k)(s(x), sigma(x)) dsigma_k/dx_i,

where of course y is evaluated at (s(x), sigma(x)) and s, sigma are evaluated at x. Multiplying by b_i(x) and summing over i, the factor dy/ds is independent of the index i, so I can put it outside; these are finite sums, so I can exchange the order of summation:

sum_{i=1}^n b_i(x) du/dx_i = (dy/ds) sum_{i=1}^n b_i(x) ds/dx_i + sum_{k=1}^{n-1} (dy/dsigma_k) sum_{i=1}^n b_i(x) dsigma_k/dx_i.

Now remember that our system of characteristics says x_dot_i = b_i(x), so in both inner sums I can replace b_i(x) by x_dot_i. Now I claim that the first inner sum is equal to 1, and that the second inner sum is equal to 0. Let us check the first claim. If it holds, the first term is dy/ds = y_dot, and therefore it is f, because y_dot = f(x, y). So I repeat: if I am able to prove that the first inner sum is 1, then this object is equal to f plus something; if this something is 0, then u solves the PDE. So let us consider the following fact: s, as a function of x, which in turn is a function of (s, sigma), satisfies

s(x(s, sigma)) = s for any s in (-delta, delta).

Therefore I can differentiate this with respect to s, which gives

sum_{i=1}^n (ds/dx_i)(x(s, sigma)) x_dot_i = 1.

Hence the first inner sum is equal to 1. OK, so for the moment the expression equals f plus the second sum, sum_{k=1}^{n-1} (dy/dsigma_k) sum_{i=1}^n x_dot_i dsigma_k/dx_i.
Now, again, we can look at the analogous equality for sigma_k. Because remember, we are composing a map with its inverse, so at the end the composition is the identity: sigma_k(x(s, sigma)) = sigma_k for any k from 1 to n - 1. And now I differentiate with respect to s and I get

sum_{i=1}^n (dsigma_k/dx_i)(x(s, sigma)) x_dot_i = 0.

So the second sum vanishes, and this gives the claim we were looking for. So up to now we have learned something about the method of characteristics, and for the moment we have confined ourselves to linear and quasi-linear first-order PDEs. We have included in our situation, for instance, this important PDE: Burgers' equation,

u_t + u u_x = 0, u = u_bar at t = 0,

in one space dimension, say in (0, T) x R. This we know, at least for short times: in principle we are able to study it because it is quasi-linear, the nonlinearity is not in the highest derivative. But what we do not know for the moment, what has not been done, is for instance this kind of PDE: |grad u|^2 = f, just in space if you want, or also u_t - |grad u|^2 = f. So keep in mind that we were able to study Burgers' equation; it is part of a general theory that we can write like this:

u_t + (F(u))_x = 0,

where now, in one space dimension, F(tau) = tau^2/2. For this choice of F we have Burgers' equation; more generally we have this object. Are the symbols OK for you? If everything is smooth, (F(u))_x = F'(u) u_x, so it is clear that this class contains Burgers' equation, as I already said. And this, more generally, is called a conservation law. It is the prototype of a first-order scalar hyperbolic equation, the so-called conservation law. And tau is just any variable used to define the function capital F in this special case.
You can choose any variable you want, but maybe we cannot use x and t, which are already taken; so use any other variable here in F. So what I am saying is this: our theory includes, for short times and smooth initial conditions, such an equation, which is part of a more general class of equations covered by our theory for the moment. Because if you write (F(u))_x as F'(u) u_x, it is really a quasi-linear equation; so this we can do for short times. What we cannot do for the moment is the other one. Why? Because it is nonlinear. It's OK, we have treated nonlinear equations, but this one is nonlinear in the highest derivative, nonlinear in u_x. Burgers' equation was nonlinear, but not in u_x alone. So this is a different kind of equation, also very important, and it is called the Hamilton-Jacobi equation. A typical Hamilton-Jacobi equation, and it is, say, fully nonlinear: a so-called fully nonlinear first-order partial differential equation. So there is something we are able to do with the method of characteristics and something we are not able to do. For the moment let us leave this open; we will see if we have time to move to the study of this equation, which requires a system of characteristics more complicated than the previous one. The transition from the quasi-linear characteristic system to the fully nonlinear one is not so easy, and therefore, for the moment, we confine ourselves to the quasi-linear first-order case. Fine. So these conservation laws are important, and this is their special character: of course the equation is first order, but you see it is in divergence form. We have already discussed this point: the x-derivative is outside, not inside. So it is a divergence of something equal to 0, and when you have a vector field whose divergence is 0 in time-space, this means that some quantity is conserved. Indeed u_t + (F(u))_x is the divergence of the vector field (u, F(u)) in time-space.
Remember that this is not true anymore even in the linear case, as we said: in the linear case, u_t + b . grad u, the equation is not immediately in divergence form. So it is true that the conservation law is more difficult in some sense, because it is nonlinear; however, fortunately, at least it is in divergence form. So now maybe I can leave you another homework. Homework: try to solve, using also the previous exercise, which was Burgers' equation with a different notation, now written more conveniently,

u_t + u u_x = 0, u(0) = u_bar,

with the method of characteristics. With this notation, which is standard common notation, what does u(0) mean? Well, u(0) is a function of x; it is simply the slice at time 0. It is the standard notation of semigroup theory: u(0)(x) = u(0, x), and more generally u(t)(x) = u(t, x). In several places this is a convenient notation, because in some sense you are thinking of u, at a given t, as a function of x: you are thinking of u as a curve taking t into u(t). And what is this? It is a curve from a one-dimensional parameter space into a functional space, because u(t) is a function of x. So at any time I have a point of a curve, but the curve lives inside a big space of infinite dimension: a curve in infinite dimensions. So when I use this notation, I am not looking at u as a function of the two variables (t, x) on the product space; instead I am thinking of a curve in the infinite-dimensional space of functions of x only. It is a different viewpoint. So the homework means this, where u_bar(x) = 1/(1 + x^2), say. Again, use the method of characteristics, for sufficiently short times. Here there is an interesting point.
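Here is a sketch of how the homework can be attacked numerically; it is my own implementation, under the assumption that the datum is u_bar(x) = 1/(1 + x^2) (my reading of it). Along characteristics, solutions of Burgers' equation are constant, and the characteristic through xi is the straight line x = xi + t * u_bar(xi); for short times this relation is monotone in xi, so it can be inverted by bisection.

```python
# Short-time solution of u_t + u u_x = 0, u(0, x) = u_bar(x), by characteristics.
def u_bar(x):
    return 1.0 / (1.0 + x * x)   # assumed initial datum

def u(t, x, lo=-50.0, hi=50.0):
    """u(t, x) = u_bar(xi), where xi solves xi + t*u_bar(xi) = x (t small)."""
    for _ in range(200):         # bisection on a monotone map
        mid = 0.5 * (lo + hi)
        if mid + t * u_bar(mid) < x:
            lo = mid
        else:
            hi = mid
    return u_bar(0.5 * (lo + hi))

h = 1e-5
for (t, x) in [(0.1, 0.3), (0.5, -1.0), (0.8, 2.0)]:   # times before any crossing
    ut = (u(t + h, x) - u(t - h, x)) / (2 * h)
    ux = (u(t, x + h) - u(t, x - h)) / (2 * h)
    assert abs(ut + u(t, x) * ux) < 1e-3               # the PDE holds
assert abs(u(0.0, 0.7) - u_bar(0.7)) < 1e-9            # the initial condition
print("short-time Burgers solution verified")
```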
It happens that if you write down the characteristics, maybe they are defined for large times, but it may happen, and in this case it does happen, that two characteristics have an intersection point. We have already discussed this very briefly. It is an interesting point, because if the trajectories of your system have an intersection at some moment, this means that certainly our method does not work after that time. So our solution is defined for short times, but certainly not at and after the intersection time. The idea is that at the intersection point we are not able to choose the value of u: if we go back along one characteristic, we find one value of u_bar; if we go back along the other, we find a different value of u_bar. So we do not know the value of u. This is a so-called formation of singularity. But at least for short times our method works, because the initial condition is very smooth. So try to understand this point: even starting from a C1 initial condition and having such a kind of PDE, it may happen that at some moment we do not know how to define the solution. So now I would like to say something in the direction of uniqueness of solutions. We have already said that the local solution we have constructed is necessarily the solution of our problem: not only a solution, but the solution. Now I would like to say something more precise about this. I will sketch a proof, maybe not completely rigorous, of the following theorem. Let u, v be C1 in time-space; just for simplicity I am working in one space dimension, and I take this setting for simplicity. And assume that u_t plus... sorry, let me change notation.
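The crossing time can be made quantitative. Two infinitesimally close characteristics x = xi + t * u_bar(xi) meet when 1 + t * u_bar'(xi) = 0, so the first singularity appears at t* = -1 / min u_bar'. The estimate below is my own sketch, again under the assumption that the datum is u_bar(x) = 1/(1 + x^2); for this datum the exact value is 8/(3*sqrt(3)).

```python
import math

# First crossing time of characteristics for Burgers with the assumed datum
# u_bar(x) = 1/(1 + x^2): t* = -1 / min u_bar'.
def u_bar_prime(x):
    return -2.0 * x / (1.0 + x * x) ** 2

# grid search for the most negative slope of the initial datum on [-5, 5]
m = min(u_bar_prime(-5.0 + 10.0 * k / 200000) for k in range(200001))
t_star = -1.0 / m

# compare with the exact value for this datum
assert abs(t_star - 8.0 / (3.0 * math.sqrt(3.0))) < 1e-3
print("first crossing time t* =", t_star)
```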
I am more used to this notation. The small f here has nothing to do with the right-hand side of our previous PDE; what I wrote before was capital F. So please let me use small f here, and do not confuse this f with the right-hand side f(t, x, u) that we wrote before; this is the common way to write conservation laws, usually. So, assume u and v are C1 and bounded in (0, +infinity) x R, say with values in [-L, L]; one is a sub-solution,

u_t + (f(u))_x <= 0,

and the other is a super-solution,

v_t + (f(v))_x >= 0.

And assume u(0) <= v(0), and f is C1. Then, what do you expect the conclusion to be? I start with one function below the other; maybe the graphs of u(0) and v(0) also touch somewhere. And then, however, one is a smooth sub-solution and the other is a smooth super-solution, starting one below the other. Well, maybe it is reasonable to expect that u <= v for every t. Now, this is not easy, but do you see that if we have possibly two solutions of the same conservation law with the same initial condition, then from this they should be equal? Right? It is clear: if u and v are two solutions, you have equality in both relations, and you know that initially they are equal, so they solve the same PDE, the same Cauchy problem. Then they must be equal, and it is immediate: if we have an equality, in particular u is a sub-solution and v is a super-solution. So if the theorem is true, one is below the other. And now we can reverse the roles: u is also a super-solution because it is a solution, and v is a sub-solution because it is a solution. So we have also the other inequality, and therefore they must be equal.
Is it clear? Is it clear? No, it's not clear. So, the consequence: if u and v are C1 and bounded in time-space (do you know this symbol, L-infinity, bounded?), if u and v are smooth and bounded solutions, with u_bar smooth enough, then u = v. This is the consequence. Why is it so? Indeed, from the theorem, let me be more precise. u is a sub-solution, v is a super-solution, and u(0) = u_bar <= v(0) = u_bar. So we have the inequality at time 0, and therefore u <= v, at least by the theorem. And conversely, now I can exchange the roles of v and u: now v is a sub-solution, u is a super-solution, v(0) = u_bar <= u(0) = u_bar, and therefore v <= u. So this is the consequence: a comparison principle between sub-solutions and super-solutions, which has uniqueness as a consequence. There are hypotheses here, of course, in particular the smoothness and boundedness assumptions. OK, so now let us try to sketch the proof, at least. Let us consider the function

w(t) = u(t) - v(t).

This is the definition; u and v are occupied as symbols, so let me call it w. What we know is that w is C1 and bounded as well. Now we try to find an equation satisfied by w. Of course the equations are nonlinear, so it is not clear: the idea would be to take the difference of the two left-hand sides, but this is nonlinear, so it is not so easy to arrange things so that w satisfies something. We also know that initially, at time 0, by the assumption, w is non-positive. And the thesis of the theorem is to show that w stays non-positive.
And now we take the difference of the two inequalities:

u_t - v_t + (f(u))_x - (f(v))_x <= 0,

because the first left-hand side is <= 0 and minus the second is also <= 0. So this is, of course,

w_t + (f(u) - f(v))_x <= 0.

Oh, thank you. Sorry, the x-derivative here is very important, thanks; otherwise this would be a Hamilton-Jacobi equation. So thank you. Now we have to express this difference f(u) - f(v), and we want, in some way, to make w appear, because w is not present anymore. So what do we do? Say, with y and eta fixed, introduce for any s in [0, 1]

phi(s) = f(s y + (1 - s) eta).

So you think of y and eta as fixed, and s is moving. And now what is f(y) - f(eta)? It is phi(1) - phi(0), which is the integral of the derivative of phi:

f(y) - f(eta) = (y - eta) * integral_0^1 f'(s y + (1 - s) eta) ds,

with the factor y - eta outside the integral. Sorry, could you please... The question was whether the argument is in the domain of f: f is defined on all of R, for simplicity, say the domain is everything; and it is C1. Thank you. So at the end I have this identity. Now I apply it to our situation, with the choice y = u(t, x) and eta = v(t, x), where (t, x) is fixed, because the meaning of our inequality is exactly

w_t(t, x) + (f(u) - f(v))_x (t, x) <= 0 at any (t, x).
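The interpolation identity above can be checked on a sample nonlinearity; f = exp is my own choice for illustration, not from the lecture. A midpoint-rule quadrature of the right-hand side reproduces f(y) - f(eta).

```python
import math

# Check the identity f(y) - f(eta) = (y - eta) * Integral_0^1 f'(s y + (1-s) eta) ds,
# obtained by integrating phi'(s) for phi(s) = f(s*y + (1-s)*eta). Sample f = exp.
f, f_prime = math.exp, math.exp

def interpolation_side(y, eta, n=100000):
    """Midpoint-rule approximation of (y - eta) * Integral_0^1 f'(s y + (1-s) eta) ds."""
    total = sum(f_prime(((k + 0.5) / n) * y + (1 - (k + 0.5) / n) * eta)
                for k in range(n))
    return (y - eta) * total / n

for (y, eta) in [(1.3, -0.4), (0.0, 2.0), (-1.0, -1.0)]:
    assert abs(interpolation_side(y, eta) - (f(y) - f(eta))) < 1e-7
print("interpolation identity verified")
```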
So now what I do is write u(t, x) in place of y and v(t, x) in place of eta, so that, since u - v = w, I have

w_t(t, x) + ( w(t, x) * integral_0^1 f'(s u(t, x) + (1 - s) v(t, x)) ds )_x <= 0.

So let me call, just for simplicity,

F(u, v)(t, x) = integral_0^1 f'(s u(t, x) + (1 - s) v(t, x)) ds,

so that I can rewrite my partial conclusion as

w_t + (w F(u, v))_x <= 0.

Now, let us do the following trick. Consider the Heaviside function H, whose graph is this: H(tau) = 1 if tau is positive, and H(tau) = 0 otherwise. H is non-negative, so we can multiply the inequality by H(w) and it is preserved. Now I would like to write the result as a derivative of something. So what is H(w) w_t? It is w_t where w is positive and 0 where w is negative. Here I am not so rigorous, because I should say what happens where w = 0, but it doesn't matter; this is just a sketch. So, almost everywhere, this is equal to the time derivative of the positive part of w, (w^+)_t. The positive part, you know what it is: w^+ = max(w, 0). Indeed, if w is positive, the positive part equals w, and therefore I have w_t; if w is negative, the positive part is 0, and therefore the derivative is 0. It is an equality almost everywhere; you should accept such a kind of equality, almost everywhere, say, more or less. So, almost everywhere,

(w^+)_t + (w F(u, v))_x H(w) <= 0.

Now I claim that the second term can also be rewritten, again almost everywhere, as (w^+ F(u, v))_x. So let me check this. What is it?
If w is positive, H(w) is equal to 1, so H(w) (w F(u, v))_x is equal to (w F(u, v))_x; if w is negative, it is equal to 0. Now let us look at (w^+ F(u, v))_x: if w is positive, then w^+ equals w, and therefore this equals (w F(u, v))_x; if w is negative, w^+ is 0, and therefore everything is 0. So the two agree, more or less; there would be various details here, but anyway, let us accept it this way. So we have this conclusion: almost everywhere in time-space,

(w^+)_t + (w^+ F(u, v))_x <= 0.

Now fix any positive time t_bar, and fix two points a and b. Let us consider the following set, which I call E: the trapezoid whose top side is [a, b] at time t_bar and whose base at time 0 is [a - c t_bar, b + c t_bar], where c is a constant that, for me, is

c = max { |f'(tau)| : |tau| <= L }.

We know that u and v were bounded: they are in L-infinity, in between -L and L, and s u + (1 - s) v is a convex combination of their values, so the argument of f' is always in between -L and L. So what really matters is f' on [-L, L], and I take this c as a definition: u and v are bounded by assumption, f is C1 everywhere, so in particular f' is bounded on the box [-L, L], and the bound is this c. Now, given that c, I can take the lines x - c t = constant and x + c t = constant as the lateral sides, and I have the set E. And so I can integrate this quantity over E: it is non-positive, I integrate over E a non-positive object, and therefore the integral is non-positive. Now, sorry, I don't have to erase this: I can take advantage of the fact that this is a conservation law, so it is actually a divergence of something. Do you know what is the divergence of a vector field? Here I have a vector field with a time component and a space component.
Careful, here the subscripts are not derivatives: they are just the first component and the second component of the vector field. The divergence of eta = (eta_1, eta_2) is d eta_1/dt + d eta_2/dx. OK? So now you see, our expression is in divergence form, a time derivative plus an x-derivative: it is the divergence of (w^+, w^+ F(u, v)). And therefore we can use a result which allows us to transform the integral over a two-dimensional region into an integral over the boundary of the region. Do you know this result? It is the so-called Gauss-Green theorem, or divergence theorem. Maybe it is better that I recall this result; it is a clearly delicate result, of course, and it will be very useful also in the study of elliptic equations, the Laplace equation and related equations. So let me write it once and for all. Theorem; this is one of the most important theorems in analysis, one of the most basic stones, so maybe it requires some comment; let me open a quick parenthesis. Let Omega in R^n, with n >= 1, be bounded and open; and, more interesting, assume for the moment that the boundary of Omega is C1. Second parenthesis: the question is, what is the meaning of "the boundary of Omega is C1"? So maybe I can tell you this. Here is Omega, here is a point p on the boundary; maybe there are some holes somewhere, I don't know, it does not matter. I say that the boundary is C1 if for any point p on the boundary there exists a neighborhood of p in the ambient space, a solid neighborhood, such that its intersection with the boundary is a C1 hypersurface: locally around p, this piece can be written as the graph of a function f from some open set of R^{n-1} into R, with f of class C1, with respect to a suitable orthogonal coordinate system.
And if you want, we can add that what lies below the graph of f, the subgraph, is locally Omega itself. So locally the boundary is made of pieces of graphs of functions of class C1, with respect to suitable orthogonal coordinate systems. Even if at some point the boundary cannot be seen as a graph in one direction, it can be seen as a graph in another direction; this is what it means for the boundary of Omega to be C1.

(That is a good comment, and an important question: we are stating a theorem which, as it stands, cannot be applied to our set, because its boundary is not C1 but only Lipschitz. Thank you; this is exactly the next comment after the theorem.)

So: let Omega be bounded and open with C1 boundary, and let eta from Omega into Rn be a C1 vector field, say eta in C1 of Omega intersected with C0 of the closure of Omega, so that in particular eta is continuous up to the boundary. Then, the theorem says, the integral over Omega of div eta dx is equal to the integral over the boundary of Omega of eta dot nu dH^(n-1). Let me explain the meaning of the symbols (I use a dot just for the scalar product). The left-hand side is a volume integral of the divergence of a vector field, so there is nothing to say. The right-hand side is more delicate: it is an integral over the boundary of Omega of the scalar product between eta, which is defined up to the boundary, and the exterior unit normal nu to the boundary. So nu is the exterior unit normal vector field: at any point of the boundary I have two unit normals, the interior one and the exterior one, both of length 1, and I choose the exterior one.
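Even before worrying about the proof, you can convince yourself of the identity numerically on a simple case. Here is a small sanity check on the unit disk, with a hypothetical test field eta(x, y) = (x + y^2, x y) of my own choosing (not from the lecture): div eta = 1 + x, and both sides of Gauss-Green come out equal to pi.

```python
import math

# Hypothetical test field: eta(x, y) = (x + y^2, x*y), so div eta = 1 + x.
def div_eta(x, y):
    return 1.0 + x

# Left-hand side: volume integral of div eta over the unit disk.
# Midpoint rule on a uniform grid; cells whose midpoint falls outside the
# disk are dropped, so the boundary is resolved only up to O(h).
N = 800
h = 2.0 / N
lhs = 0.0
for i in range(N):
    x = -1.0 + (i + 0.5) * h
    for j in range(N):
        y = -1.0 + (j + 0.5) * h
        if x * x + y * y <= 1.0:
            lhs += div_eta(x, y) * h * h

# Right-hand side: boundary integral of eta . nu over the unit circle,
# where the exterior unit normal is nu = (cos t, sin t).
M = 4000
rhs = 0.0
for k in range(M):
    t = 2.0 * math.pi * (k + 0.5) / M
    x, y = math.cos(t), math.sin(t)
    ex, ey = x + y * y, x * y
    rhs += (ex * x + ey * y) * (2.0 * math.pi / M)

print(lhs, rhs)  # both close to pi
```

The boundary integral converges much faster than the volume one here, because the integrand is smooth and periodic in the angle.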
The dot is the scalar product in Rn between two vectors, and dH^(n-1) is the surface measure. This, you should know, is not so easy to define: it is a surface integral, you have to integrate on the boundary. If you do not know what this is, I will tell you tomorrow how to compute it; if you know, I can continue. The symbol is H because of Hausdorff: more precisely, this is the (n-1)-dimensional Hausdorff measure. In physics one usually writes d sigma for the area element; I do not know which symbol you are used to, and I do not really want to insist on this. Does somebody know this theorem, or is it a completely new fact for you? It has several names: Green's theorem, Gauss-Green theorem, divergence theorem, for instance. Maybe only the notation is strange for you, in particular this dH^(n-1); it does not matter, and do not pay too much attention to the symbol. You know what it means: it is the surface integral. If we know what it is and how to compute it, that is the important thing.

Now, the point is that we cannot apply this theorem to our set, because its boundary is not C1. But this happens in physics all the time: when you apply this theorem in physics, you often apply it, for instance, to a portion of a cylinder, whose boundary is of course not C1, and still you apply it. Well, there are generalizations of this theorem which allow one to weaken this regularity assumption. In particular (and what I will write now is not the most general version), the theorem holds in this form for a Lipschitz open set. What does Lipschitz mean?
Well, it means that locally the boundary is the graph of a Lipschitz function. What is a Lipschitz function? It is something which may fail to be differentiable, but essentially the only singularities it can have are corners in the graph, like this; so this, by the way, is a Lipschitz curve. Now the only point that matters is this: away from the corner everything is fine, the exterior normal here is this one and there it is that one, but at the corner itself the exterior normal is not defined, because the tangent space is not defined. So what do we do about this in the boundary integral? Well, there is a result which says that the set of points where the normal is not defined is not seen by this measure: it has measure zero in the sense of H^(n-1). This is a non-trivial result. For instance, in this picture, what does it mean to compute the line integral? You simply integrate on this open segment, then on this one, this one, and this one, and you do not take care of the four corner points: four points have measure zero with respect to the one-dimensional measure, so I really do not need to take them into account. So in the end we can apply the Gauss-Green theorem. And now the lecture is over; we will continue from here tomorrow.
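The four-corner picture can be tested concretely too. Here is a minimal numerical check of Gauss-Green on the Lipschitz domain [0,1] x [0,1], with another hypothetical field eta(x, y) = (x^2, y^2) of my own choosing: the boundary integral is computed edge by open edge, simply skipping the four corners where nu is undefined (a set of zero one-dimensional measure), and both sides come out equal to 2.

```python
# Hypothetical test field: eta(x, y) = (x^2, y^2), so div eta = 2x + 2y.
# Domain: the open unit square, a Lipschitz (not C^1) open set.

N = 1000
h = 1.0 / N
mid = [(i + 0.5) * h for i in range(N)]  # midpoints of a 1-D grid on [0, 1]

# Volume integral of div eta (midpoint rule; exact here, integrand is linear).
lhs = sum((2.0 * x + 2.0 * y) * h * h for x in mid for y in mid)

# Boundary integral, edge by open edge; the four corners are ignored.
right  = sum(1.0 ** 2 * h for y in mid)     # x = 1, nu = ( 1, 0): eta.nu =  x^2 = 1
left   = sum(-(0.0 ** 2) * h for y in mid)  # x = 0, nu = (-1, 0): eta.nu = -x^2 = 0
top    = sum(1.0 ** 2 * h for x in mid)     # y = 1, nu = ( 0, 1): eta.nu =  y^2 = 1
bottom = sum(-(0.0 ** 2) * h for x in mid)  # y = 0, nu = ( 0,-1): eta.nu = -y^2 = 0
rhs = right + left + top + bottom

print(lhs, rhs)  # both equal 2, up to rounding
```

Dropping the corners changes nothing, exactly as the measure-zero result predicts: the quadrature never evaluates the normal at a corner point.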