It's time to start. So let me recall our notation. We have a point x in Rⁿ; for the moment we do not distinguish a special time variable from the others. And we are considering the following PDE: ∑ᵢ bᵢ(x) ∂u/∂xᵢ(x) + c(x) u(x) = f(x), where the coefficients bᵢ and c are sufficiently smooth, and f is given and sufficiently smooth. We are also given a smooth hypersurface Σ, contained in Ω, Ω an open subset of Rⁿ, and we have found a way to solve this problem coupled with u = ū on Σ: the PDE holds in Ω, the condition holds on Σ, and, at least around Σ (depending on the PDE we are considering), we were able to find a solution of class C¹. So the exercise that I gave you as homework was this: x₁ ∂u/∂x₁ + 2x₂ ∂u/∂x₂ + ∂u/∂x₃ = 3u, in R³, say, with u = ū on Σ = {x₃ = 0}, so ū is a function of x₁ and x₂. Could you please tell me the solution that you found? [A student answers: u(x) = e^{3x₃} ū(x₁ e^{−x₃}, x₂ e^{−2x₃}).] Let me check... yes, it is correct. But was everybody able to do this? Maybe it is better that I redo it here; the answer, by the way, is correct. So we are in the following situation: n = 3, Σ is this hyperplane. By the way, in a more standard notation this would have been written with a time variable, as ∂u/∂t + b(x)·∇u(x) = 3u.
Where now the notation means that the gradient is with respect to x₁ and x₂, and b = (b₁, b₂). This is of course a different notation, but usually such PDEs are written like that. Anyway, let me come back to our notation, maybe more similar to the first lecture. So n = 3, b₁(x) = x₁, b₂(x) = 2x₂, b₃(x) = 1, c(x) = −3, a constant, and f = 0. This is the situation of the PDE. Of course, in the time-dependent form I wrote a moment ago (now erased), that b was a different b: there it had only two components, while here b has three; I am using the same symbol, but it is clear what I mean. So: b has three components, c is constant, f = 0. Let me write down the system of characteristics: Ẋ = b(X) and Ẏ = −c(X) Y (and c is a constant here), where the dot means d/ds, with initial data X(0) and Y(0). Here X is a vector (X₁, X₂, X₃) and Y is a scalar object. For X(0) I take a point x̄ on Σ, and Y(0) = ū(x̄). This is the system of characteristics, and it can be written more explicitly: Ẋ₁ = X₁, Ẋ₂ = 2X₂, Ẋ₃ = 1. Before solving it, it is better to note that the important transversality condition is satisfied: b(x) = (x₁, 2x₂, 1) is of course transverse to the horizontal hyperplane, because Σ is exactly this horizontal plane, and the third component of b is always 1, so b is never parallel to Σ. So we have the important transversality condition between the vector field b and the given hypersurface Σ.
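The homework answer, u(x) = e^{3x₃} ū(x₁ e^{−x₃}, x₂ e^{−2x₃}), can also be checked symbolically. This is just an editor's sanity-check sketch (sympy assumed; ū is kept as an arbitrary smooth datum):

```python
# Check that  u(x) = e^{3 x3} * ubar(x1 e^{-x3}, x2 e^{-2 x3})  satisfies
#   x1 u_x1 + 2 x2 u_x2 + u_x3 = 3 u   with  u = ubar  on  sigma = {x3 = 0}.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
ubar = sp.Function('ubar')  # arbitrary smooth initial datum on sigma

u = sp.exp(3*x3) * ubar(x1*sp.exp(-x3), x2*sp.exp(-2*x3))

residual = x1*sp.diff(u, x1) + 2*x2*sp.diff(u, x2) + sp.diff(u, x3) - 3*u
print(sp.simplify(residual))       # 0: the PDE holds
print(u.subs(x3, 0))               # ubar(x1, x2): the datum on sigma holds
```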
So b is transverse to Σ. [A student asks for the difference between a plane, a hyperplane, and a hypersurface.] Well, first of all, a plane is a special case of a hyperplane: in n dimensions, a hyperplane is a flat object of dimension n − 1, the generalization of a plane in space. In this case it is literally a plane, because we are exactly in three dimensions. A hypersurface, instead, is an object, a surface, say, which locally, around any point, looks like a plane via an embedding map. The definition is roughly this: given a point p ∈ Σ, there is a map φ from an open set U (in R^{n−1}, for instance two-dimensional) into Rⁿ, with p in the image, such that φ(U) equals N ∩ Σ for some neighborhood N of p. So the image is just a local piece of your object. Moreover φ is, say, C¹, and its differential has maximal rank; such a map is called an embedding. You take a small disk and deform it into a higher-dimensional space; and if you do this with another φ, you require a compatibility condition between the two local parametrizations. You should think of Σ as an object which locally is very, very similar to a plane, just a little bit deformed. This is not a problem here, because our Σ really is a plane in this exercise. Therefore: X₁(s) = X₁(0) e^s, and X₁(0) is a point I call x̄₁, so X₁(s) = x̄₁ e^s. Likewise X₂(s) = x̄₂ e^{2s}. And X₃(s) = s, because Ẋ₃ = 1 and X₃(0) = 0 on Σ. So this is the solution to the first part of the characteristic system.
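As a numerical cross-check (my own sketch, assuming the usual scientific Python stack), one can integrate the characteristic ODEs from a point on Σ and compare with these closed forms:

```python
# Integrate  x1' = x1, x2' = 2 x2, x3' = 1  from (x1bar, x2bar, 0) on sigma
# and compare with x1(s) = x1bar e^s, x2(s) = x2bar e^{2s}, x3(s) = s.
import numpy as np
from scipy.integrate import solve_ivp

x1bar, x2bar = 0.7, -1.3            # arbitrary starting point on sigma = {x3 = 0}

def b(s, x):
    x1, x2, x3 = x
    return [x1, 2*x2, 1.0]

sol = solve_ivp(b, (0.0, 1.0), [x1bar, x2bar, 0.0], rtol=1e-10, atol=1e-12)
s = sol.t[-1]
x1, x2, x3 = sol.y[:, -1]
print(abs(x1 - x1bar*np.exp(s)), abs(x2 - x2bar*np.exp(2*s)), abs(x3 - s))
```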
So X is a function of s and of the starting point. Next, we have to solve for Y. Since c = −3, we get Ẏ = 3Y, so Y(s) = e^{3s} ū(x̄). This is the solution to the second part; remember, the system of characteristics consists of a vector equation (three scalar equations) plus one scalar equation. Now comes the delicate point: inverting the map. We have found the solution X explicitly, depending on what? On s, of course, and on x̄; and also Y, depending on s and on x̄. Again, the variable s can be identified with time in the previous language. So we have the map (s, x̄) ↦ X(s, x̄), which is a local diffeomorphism between a sort of rectangle, (−δ, δ) times a patch of Σ, and a tubular neighborhood of Σ, the image of this rectangle through the map; and it can be inverted. So any x can be obtained as s(x) and x̄(x), if you want, in a unique way. And then, remember, the characteristics teach us that the solution is u(x) = Y(s(x), x̄(x)), so we have to invert explicitly. We have to solve: x₁ = x̄₁ e^s, x₂ = x̄₂ e^{2s}, x₃ = s, finding s, x̄₁, x̄₂ in terms of x. You see: x̄₁ = x₁ e^{−s}, and s = x₃, so x̄₁ = x₁ e^{−x₃}; x̄₂ = x₂ e^{−2s} = x₂ e^{−2x₃}; and s = x₃. So given x, we find s(x) = x₃ and x̄(x) = (x₁ e^{−x₃}, x₂ e^{−2x₃}). And so our solution, u(x) = Y(s(x), x̄(x)), can now be written out with ū.
So first of all there is the factor e^{3s}, and s = x₃. Then there is ū, evaluated at x̄₁ = x₁ e^{−x₃} and x̄₂ = x₂ e^{−2x₃}. So the solution of our problem is u(x) = e^{3x₃} ū(x₁ e^{−x₃}, x₂ e^{−2x₃}), which is correct: exactly the same as was said before. So you see the importance of the method of characteristics. Remember that all the PDEs we have studied up to now are essentially of the form (let me change the notation a little, but we understand what it means) u_t + b(t, x)·∇u + c(t, x) u = f. For such linear first-order PDEs, we are able to solve using the method of characteristics. I hope that this, for the moment, is OK. Now let me make some remarks about what would be interesting to do in connection with this PDE and that we will not do: just comments to give you a larger view on the interesting problems related to it. Even though this is a linear first-order PDE, maybe the simplest you can imagine, there are still difficult problems. One of them is this: take f = 0 if you wish, and c = 0 as well. Even for this very simple first-order homogeneous equation, it is rather interesting to see what happens when b is non-smooth; say b depending only on x, the autonomous case, just for simplicity. When b is not smooth, what we have set up so far does not work anymore, because it was based on the system of ODEs, which we know how to solve at least when b is continuous; even better, when b is locally Lipschitz we have uniqueness and so on. Our case was b ∈ C¹, actually.
So with b ∈ C¹ there are no problems, locally. But if b is not smooth, for instance discontinuous or less than continuous, then it is really not clear even how to solve the system of ODEs, and therefore how to implement the method of characteristics. So this is an interesting further problem related to this first-order PDE. That is one remark. Why could it be interesting to take b not continuous? For the following reason, for instance: we can generalize the problem to a quasi-linear one of this form, u_t + u u_x = 0. This is not linear anymore, because the coefficient of u_x does not depend on (t, x) but on the solution itself; so it is not contained in the previous class. But assume for a moment that you want to view this u as a coefficient. It turns out that this equation in general has solutions which are not C¹, possibly even discontinuous. So if u is discontinuous, and you look at it as a coefficient, you have u_x multiplied by a discontinuous function, which you could call b just for simplicity, even though it depends on u. So you have the problem of multiplying a discontinuous function against u_x, in some sense. Understanding the linear equation with rough b is therefore somehow related to this nonlinear PDE; that is one of the reasons it is interesting. So I am trying to convince you that there are motivations for a further study of the case where b is not too smooth, just motivations; we will not do it here, but there are. Then another remark, which will remain just a remark for the moment. Assume we are in one space dimension, say. The point is that when b is not smooth, what does it even mean to be a solution? Why? Well, because if b is not smooth, we expect u to be not smooth.
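The loss of smoothness for u_t + u u_x = 0 can be made concrete: its characteristic system is ẋ = y, ẏ = 0, so characteristics are the straight lines x(t) = x̄ + ū(x̄) t, and with decreasing data they cross in finite time, after which no C¹ solution can exist. A small numerical sketch of my own (the datum −tanh is my choice, not the lecture's):

```python
# Two characteristics of u_t + u u_x = 0 with a decreasing datum: they cross.
import numpy as np

ubar = lambda x: -np.tanh(x)        # assumed decreasing initial datum
xbar = np.array([-0.5, 0.5])        # two starting points on {t = 0}
t = 2.0
x_t = xbar + ubar(xbar)*t           # positions at time t
print(x_t)                          # the left characteristic overtakes the right one
```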
And if u is not smooth, what is u_t? In which sense do I have to understand the equality in the PDE? Up to now there were no problems, because u was C¹: u_t is continuous and therefore defined at each point, and the same for the gradient. So the equation must hold at each point (t, x), with no ambiguity, when u is C¹. But when u is not C¹, which is maybe the more interesting case, what is u_t? Of course, we cannot talk about u_t at every point; at almost every point, maybe, if we are lucky, but not at every point. And the gradient of u has the same difficulty. So when things are not smooth, even the definition of solution is not clear, the very initial definition; you have to declare what a solution is, otherwise you cannot continue. So, one possibility to declare what a solution is. Here I am anticipating a little, but I think it is good for you, because it sheds some light on the theory of PDEs; it is just an anticipation. One idea could be this: take the equation and multiply it by a test function φ, say C^∞ with compact support in space and time, whatever. I do not couple with any initial condition; this is just a qualitative discussion. Let me do this computation formally: I multiply, and then I integrate, say, in spacetime. Now, integrating in spacetime, I can use integration by parts in time on the first term: the double integral of φ u_t becomes minus the double integral of φ_t u; first I integrate in time, integrating by parts, and then in space. And this, by the way, is well defined, because φ is very smooth.
And φ has compact support; so this is defined at least when u is in L¹_loc, that is, u integrable on compact sets, the integral being in the Lebesgue sense. So for the moment this is clear. The point is that I cannot do the same trick on the other term. Assume we have one space and one time dimension, and I want to integrate by parts in space: if b is not smooth, I have the same problem, because I do not know what b_x is. So this is another difficulty related to this PDE, and the point is that this PDE is not written in the so-called divergence form. The trick of multiplying by φ and integrating by parts is not clear here, because I get b_x φ plus b φ_x: the term b φ_x is OK, but b_x φ is not so easy. So what can we do? Consider a slightly modified equation: another first-order PDE, where now I call the unknown ρ, and assume v is a given function, possibly not smooth, but the PDE has the form ρ_t + (v ρ)_x = 0, which is slightly different from the previous one. This is called the continuity equation: v is given, and the unknown is ρ. For instance, if I multiply by φ as before and then integrate, now the situation is slightly better: the first term becomes minus the integral of φ_t ρ, and the second becomes minus the integral of φ_x v ρ; φ has compact support, so there are no boundary terms. But now these are well defined, because φ_t is smooth with compact support, and φ_x is smooth with compact support. For this to have a meaning, it is just enough to require that ρ is in L¹_loc.
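To make this weak formulation concrete: a traveling jump ρ(t, x) = 1 for x > ct (and 0 otherwise) is nowhere differentiable across the jump, yet it satisfies ρ_t + (cρ)_x = 0 in exactly this integrated sense. A numerical sketch of my own, using a rapidly decaying Gaussian as a stand-in for a compactly supported test function (scipy assumed):

```python
# Weak form of the continuity equation with v = c constant and rho a step:
# the integral of (-phi_t - c phi_x) over {x > c t} should vanish.
import numpy as np
from scipy.integrate import dblquad

c = 0.5
phi   = lambda t, x: np.exp(-(t**2 + x**2))    # stand-in test function
phi_t = lambda t, x: -2*t*phi(t, x)
phi_x = lambda t, x: -2*x*phi(t, x)

val, err = dblquad(lambda x, t: -phi_t(t, x) - c*phi_x(t, x),
                   -8, 8,                       # t range (tails are negligible)
                   lambda t: c*t, lambda t: 8)  # x range: the region where rho = 1
print(abs(val))                                 # numerically ~ 0
```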
Or rather, ρ in L¹_loc and the product vρ in L¹_loc; or, say, both in L²: ρ ∈ L², v ∈ L². Anyway, for the continuity equation this has a meaning, and therefore it seems rather natural to define a solution of it as a function ρ such that the first integral minus the second equals 0 for every φ in that class. I am anticipating quite a bit the problem of what a solution is and one way to define it. These are called generalized solutions; more precisely, solutions in the distributional sense. I am not being very rigorous, but I am telling you that a solution in the distributional sense can be defined as follows: a function ρ, say in L², such that this integral identity equals 0 for every φ smooth with compact support. You see, in this expression there are no more derivatives of ρ: all derivatives are on φ, none on ρ. That is the trick. By the way, this trick does not work so easily for the earlier equation, because the continuity equation is an equation in divergence form: the derivative with respect to space is outside, so I can put the ∂/∂x onto the test function φ (φ is called the test function). But if the derivative is not outside, if I have the term b u_x, the trick is not so immediate. Anyway, this is just to tell you that the theory of the linear transport equation is rich and active, for instance in the direction of non-smooth vector fields b. Let us now continue toward generalizing these first-order PDEs. So let me again insist on the previous problem, written in the old form: b·∇u + cu = f. I want to make some geometrical comments on this PDE. We are in the previous notation in Rⁿ: n dimensions, b = b(x), c = c(x); there is no special time variable in this way of writing things. So we have the following: this is Σ, and Σ is in Ω.
This is a hypersurface. We have always considered a hyperplane, but now assume Σ is not a hyperplane: it is genuinely curved, a hypersurface. So I need an open set U here, and I am writing that, at least locally, my notation is the following: I have U (the yellow set), whose image is this piece of Σ, and I have the interval (−δ, δ); I call a point of U σ, and s is a point of (−δ, δ). The important map is (s, σ) ↦ X(s, φ(σ)). Which is the difference from the previous discussion? Previously φ was the identity, because Σ was a plane: φ identified the yellow parameter set with its image. But now that Σ is curved, I need to describe it locally in some way, for instance using a parametrization φ; previously φ was the identity. So φ is the embedding map, and we have a diffeomorphism, call it D, taking this big rectangle, or higher-dimensional rectangle, (−δ, δ) × U, into a neighborhood of Σ, call it N. D is invertible, and its inverse gives (s(x), σ(x)); capital X was the solution of the system of characteristics, can you remember? So this can be inverted, and so on. OK, this is the picture at the level of Rⁿ. Is everything clear? It is very important to write explicitly the solution of the system of characteristics: Ẋ = b(X), with X(0) a point of Σ, parametrized as φ(σ). So X(0) is here, and this point comes from a parameter in U. Before, this point was called x̄, but now x̄ is the image through φ of a parameter σ. So now everything is considered as a function of s and σ. This is very convenient.
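As a toy illustration of these (s, σ) coordinates with a genuinely curved Σ (my own example, not the lecture's): in R² take Σ the unit circle, φ(σ) = (cos σ, sin σ), and b(x) = x, so the flow is X(s, σ) = e^s φ(σ) and the inverse diffeomorphism is explicit:

```python
# Round trip through the diffeomorphism (s, sigma) -> x for a curved sigma:
# X(s, sigma) = e^s (cos sigma, sin sigma), inverted by s = log|x|, sigma = atan2.
import numpy as np

def X(s, sig):
    return np.exp(s)*np.array([np.cos(sig), np.sin(sig)])

def inv(x):
    return np.log(np.hypot(x[0], x[1])), np.arctan2(x[1], x[0])

s0, sig0 = 0.2, 0.8
x = X(s0, sig0)
s, sig = inv(x)
print(abs(s - s0), abs(sig - sig0))   # ~0: the map inverts uniquely near sigma
```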
It is convenient to have a fixed, flat parameter space in R^{n−1}, and then an orthogonal component s, which is the time parametrizing each orbit. In the physical space I start from a point x̄ on Σ, coming from the parameter space; my vector field, by the way, was transverse there; and then the curves X are just the integral curves of the vector field. This was the situation at the level of Rⁿ. Now let me rewrite this PDE slightly differently, in an obviously equivalent way: b·∇u + cu − f = 0, or b·∇u − (f − cu) = 0. What is written here involves a scalar product; we have to be a little careful about the role of the symbol: this dot is the scalar product in Rⁿ. Now let me write the equation as B·(∇u, −1) = 0, where B = (b, f − cu). More precisely, B(x, y) is a new vector field with components (b(x), f(x) − c(x) y): the first part takes values in Rⁿ, the last component is scalar. So given b, f and c, I can introduce a new vector field in a higher-dimensional space: capital B consists of a horizontal component, which is n-dimensional, and a vertical component, which is a number. I am increasing the dimension because of u, essentially, and then it is somehow natural to define this capital B. The dot here is now the scalar product in R^{n+1}; sorry for using the same notation, but it is clear. So you see the equation can be written as capital B, which is (b, f − cu), against the new object (∇u, −1), equal to zero; and this is clearly equivalent to the original equation. Do you agree? But now this is a geometric condition on the graph of u. So maybe it is better that I try to make a sort of picture. Remember, the previous picture was at the level of Rⁿ; now I have to make a picture in R^{n+1}.
So I erase this, but you have to keep in mind what I am doing: now I need the picture in R^{n+1}. I will try to do it in an understandable way; if you don't understand, please let me know and we adjust the picture. So I try. Here we have the vertical variable y, and I am taking, say, n = 2. This is Σ (Σ was yellow, so this is Σ). For simplicity Σ is just a piece of hyperplane; in this dimension, n − 1 = 1, so it is a piece of line, not curved anymore, for simplicity. You should think of a curve here, but for making the picture I take just a segment. Now remember the vector field b: it is horizontal, a vector field in the horizontal plane; Ω is somewhere here in this horizontal plane. The previous picture, which was on the blackboard, must now be imagined inside this horizontal plane; this is the important thing. The previous picture, seen from above, now becomes a picture in the horizontal plane: this is b in the horizontal plane, transverse to Σ. And over this, we have the graph of ū; ū was the initial condition. Now let us try to depict capital B. What is the vector field capital B? We are now in one dimension more: B(x, y) = (b(x), f(x) − c(x) y). The horizontal part of capital B is the same as small b, but then there is also some sort of vertical part, so capital B points somewhat up or down, say, something like this. By the way, capital B is never vertical, because b is never 0 there; so it is never purely vertical like this. Moreover, capital B restricted to the graph is non-tangent to the graph. I repeat: capital B is defined everywhere in this big space, but I am interested in it locally around the graph.
But if I restrict capital B to this graph, the graph of the initial condition, it is non-tangent, because small b was transverse to Σ: it is non-tangent to the graph of ū. OK. What is this equation saying? Assume we have a solution u; we have constructed it, by the way. What does the condition say in terms of the graph of the solution u? Can you see it? [A student: (∇u, −1) is a normal vector.] Exactly: maybe not of length 1, because it is not normalized, but (∇u, −1) is parallel to the normal to the graph of u; so remark, (∇u, −1) is parallel to a normal vector to the graph of u. So this equation imposes something: it imposes that capital B is orthogonal to the normal, equivalently that capital B is tangent to the graph of u, on the graph of u. So our solution u must be equal to ū on Σ, and then must have a graph such that capital B is tangent to this graph. Let me try to draw it: this is a local part of the graph of the solution u, and this is capital B. This graph is obtained as a union of curves, which are essentially integral curves. So let me rewrite the system: Ẋ = b(X), Ẏ = f(X) − c(X) Y. The solutions of this system are curves in R^{n+1}, and you see that the tangent to these curves is exactly the vector field capital B, because the right-hand side is precisely capital B. So these are the integral curves of the vector field capital B: the tangent vector to each curve equals capital B, so the curve has tangent direction capital B, and therefore the graph of the solution is just the union of these curves. So there is a geometric interpretation: you are given this initial graph.
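This tangency can be tested numerically on the earlier linear exercise, where B = (x₁, 2x₂, 1, 3y): an integral curve of B started on the graph of ū should stay on the graph of the solution u. A sketch of mine; the particular datum ū(a, b) = ab is my own choice, not the lecture's:

```python
# Integrate the lifted field B = (b(x), f - c y) = (x1, 2 x2, 1, 3 y) from a
# point on the graph of ubar, and check the curve stays on the graph of
#   u(x) = e^{3 x3} ubar(x1 e^{-x3}, x2 e^{-2 x3}),  here with ubar(a, b) = a*b.
import numpy as np
from scipy.integrate import solve_ivp

ubar = lambda a, b: a*b
u = lambda x1, x2, x3: np.exp(3*x3)*ubar(x1*np.exp(-x3), x2*np.exp(-2*x3))

def B(s, z):
    x1, x2, x3, y = z
    return [x1, 2*x2, 1.0, 3*y]

x0 = (0.4, 1.1, 0.0)                 # a point on sigma = {x3 = 0}
z0 = [*x0, ubar(x0[0], x0[1])]       # lifted onto the graph of ubar
sol = solve_ivp(B, (0, 1), z0, rtol=1e-10, atol=1e-12)
x1, x2, x3, y = sol.y[:, -1]
print(abs(y - u(x1, x2, x3)))        # ~0: the curve stays on the graph of u
```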
And you want to find another object. The graph of ū has codimension 2: in the picture it is a curve in R³. And you look for a surface in R³ which contains this curve, because you have to start from it: the solution's graph must touch the graph of ū over Σ, since initially you have the condition u = ū. And then it must have the property that the vector field capital B is tangent to the manifold, to the red manifold, which will be the graph of the solution we have defined. So: this smaller one is the graph of the initial condition ū, and this larger one is the graph of u. OK, this is the geometric interpretation of our first-order PDE. Is it clear? I know it is not easy. The important point is the transversality of B here, so that starting from this curve we can really continue it into a surface with the suitable properties. You can ignore this geometric picture and just solve the characteristic systems, but maybe it is better to have both in mind at the same time. Is it OK for the moment? OK. This is important because it is the starting point of the next, nonlinear, PDE. So we pass now to another, more general case, written as follows: ∑ from i = 1 to n of bᵢ(x, u(x)) ∂u/∂xᵢ(x) = f(x, u(x)). This is a non-trivial generalization of the previous case, because our coefficients now depend on u and not only on x; and therefore the equation is not linear anymore.
This is called quasi-linear, because in the end the coefficients depend on u, but the equation is still linear in the highest-order derivatives: it is not the case that you have, for instance, the square of the highest derivative. So quasi-linear means not really linear, but not nonlinear in the highest derivatives, let me say like this. If you freeze u in the coefficients, the equation is still linear in the derivatives; however, globally it is nonlinear in u. The typical case, as I told you, written the other way with time and space, is u_t + u u_x = 0. This has a name. Another way to rewrite it, still nonlinear but with a sort of divergence structure, very convenient, is u_t + (u²/2)_x = 0. It is called Burgers' equation, and we will study it a little; of course, this equation is a particular case of the quasi-linear class above. So let us try to say something about the general case. We have the previous notation: the diffeomorphism, the embedding map φ, and so on. What is now the transversality assumption? Because if you look at the system of characteristics, it is now Ẋ = b(X, Y), Ẏ = f(X, Y), with the initial conditions X(0) = φ(σ) and Y(0) = ū(φ(σ)). This is now the system of characteristics; if we are able to solve it, it gives us solutions X(s, σ) and Y(s, σ). Now, what is the difference between this system of characteristics and the previous, linear case? We can see it: these equations are now genuinely coupled. In the previous case, first we solved for capital X, and then we found Y once we knew capital X, because Y was not present in the equation for X: in the linear case there was no Y there, it depended only on X, and we solved it.
And then we substituted X and solved for Y: that was the previous idea. But now you see what happens: in the nonlinear case, unfortunately, we cannot really do this, because b depends also on Y. The equations are really entangled, and you cannot decouple them, at least not so easily. Can you see? Since both X and Y appear in both equations, this is a genuine system; you have to see each time what you can do. So now we assume on b, f, φ enough regularity to have a local smooth solution of this system: say b is C¹, f is C¹, φ is C¹, so we have a local solution; we are not interested here in the difficult case of a discontinuous right-hand side. So we have a local solution. But anyway, the point is: how can we state the transversality condition, which in the previous case was just at the horizontal level, b transverse to Σ? Now b depends on x and y, so what do we do? Well, it is natural, in my opinion, to revisit the previous discussion and to require that on the graph of ū, capital B is non-tangent. Can you remember the previous discussion? We observed that the condition in the linear case was also saying that capital B was not tangent to the graph of ū. So we take as transversality assumption: B(x, y) is non-tangent to the graph of ū, on the graph of ū; meaning that when you put y = ū(φ(σ)), this vector field, read on the graph, is non-tangent. Fine. Now, again, I claim that our solution is given by the usual formula. Claim 1: I can construct one local C¹ solution by the formula u(x) = Y(s(x), σ(x)). The usual formula, even in this more general, nonlinear case; and still we have the change of variables and so on, et cetera: given x, we have s(x) and σ(x).
And the claim is that this u solves the PDE. Before trying that computation, let me write more explicitly what imposing this transversality condition means. I'm sorry, now I have to erase the picture, so let me erase it. The condition can be checked as follows. What is a basis for the tangent space to Σ at a point? You have Σ; I am coming back to the previous picture, picture number one, not number two. So which vectors form a basis for the tangent space to Σ? You have the parametrizing embedding map φ; φ is an embedding, φ is given. Remember, φ takes some open set U in R^{n−1} into Rⁿ and is a local parametrization of Σ. [A student: the derivatives of φ.] Yes, exactly: ∂φ/∂σ₁, let us write it like this, up to ∂φ/∂σ_{n−1}; each of these is a tangent vector. Remember, φ = (φ₁, ..., φₙ): φ goes into the big space Rⁿ, so it has n components and n − 1 variables. Take, say, a piece of R² mapped into R³: it becomes a piece of surface. This is φ. Then at a point of this surface in the physical space, I want to find the tangent space: it is generated by two vectors, because it is two-dimensional now, and each of them is the derivative of the embedding map with respect to one of its variables. The point is that, in the language of parametrization, I have a parameter in U, and I speak of the tangent space at the image point; this is an embedding, so I can identify U with its image. Anyway: ∂φ/∂σⱼ is a tangent vector at the image point. Its entries are just numbers, of course.
But this physically has the meaning of a tangent vector at the image point. This is just a first elementary fact in differential geometry. So this is a tangent vector, this is another tangent vector, and altogether these are n minus 1 tangent vectors. The assumption that phi is an embedding — in particular an immersion — says that these are linearly independent. When you see what it means to be a hypersurface, it means exactly that these objects are linearly independent in a neighborhood of your point. In other words, they generate the tangent space. So now I want that b is non-tangent on the graph of u bar. So I take the matrix whose columns are d phi over d sigma_1 up to d phi over d sigma_(n-1). These are n minus 1 columns, but each column has n components, so to get a square matrix I add one more column: b of (phi of sigma, u bar of phi of sigma). And now this big matrix is n times n, because this last column is another vector with n components: n minus 1 plus 1 gives n columns. So this is a square matrix, and saying that b is non-tangent means that the determinant of this matrix must be different from 0, for any sigma in U. This is our new transversality condition — a little bit more difficult than the previous one. So this is the new form of the transversality condition. Maybe it is better to try an exercise, because maybe it's not so easy to understand this. So before proving our claim, let us do an exercise together. I leave the proof for after the exercise — or, if we have no time because of the exercise, I leave it as homework and we will do it together tomorrow at the beginning. OK. So let us do the exercise.
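To see the determinant condition concretely, here is a check on the linear example from the beginning of the lecture: sigma is the horizontal plane in R^3, parametrized by phi(sigma1, sigma2) = (sigma1, sigma2, 3), and b(x) = (x1, 2*x2, 1) does not depend on y there. The columns are the two tangent vectors and b evaluated on the surface.

```python
import sympy as sp

s1, s2 = sp.symbols("sigma1 sigma2")

# Horizontal plane {x3 = 3} in R^3, parametrized by phi(sigma1, sigma2)
# = (sigma1, sigma2, 3) (the example from the beginning of the lecture).
phi = sp.Matrix([s1, s2, 3])

# Vector field b(x) = (x1, 2*x2, 1), evaluated on the surface.
b_on_sigma = sp.Matrix([s1, 2 * s2, 1])

# Square matrix: columns d(phi)/d(sigma1), d(phi)/d(sigma2), b(phi(sigma)).
M = sp.Matrix.hstack(phi.diff(s1), phi.diff(s2), b_on_sigma)

print(M.det())  # 1 — never zero, so transversality holds everywhere
```

The determinant is identically 1 because the third component of b is always 1 while both tangent vectors are horizontal — exactly the "b is never parallel to sigma" picture from the lecture.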
Just to understand what we are doing, maybe it is slightly better. So the exercise is the following. Solve u du/dx1 plus du/dx2 equal to 1. So this is x1, this is x2, and on this sigma — the segment — we impose u equal u bar, where u bar is also given: u bar at the point of parameter sigma is one half sigma. So which is the situation? Let us try to understand. First of all, if you change the names of the variables, this is du/dt plus u du/dx equal to 1: this is Burgers' equation, u_t plus u u_x equal to 1, if you call x2 equal to t. Remember that Burgers' equation is u_t plus u u_x equal to something, when x is one-dimensional. Usually, though, u_t plus u u_x is coupled with a condition on the set where t equals 0. Now we have a slightly different situation, because sigma is not there, but here, on this segment. So let us try to slowly understand this example, so that you can understand better the role of the symbols. First of all, n is equal to 2 — this is clear. Sigma is a hypersurface in R2, of course, because it's a segment, so it's one-dimensional in R2. And of course it's a smooth hypersurface — it's a segment, and smoother than a segment is impossible. Now, who is phi? We have to find the embedding map. This is our manifold, sigma, in the physical space, but we need the embedding map phi. So we need an open set contained in R, because n minus 1 equals 1, and we need the map phi. Who is capital U, and who is phi? Omega is an open set in R2, something around the segment, like this. And U is, say, the interval (0, 1); this is the parameter space. So phi must go from (0, 1) into R2, and the image of phi must be our sigma. Calling the parameter sigma, we take phi of sigma equal to (sigma, sigma).
Do you agree? Obviously it takes any point of this interval into a point on sigma. Is it a regular parametrization? Of course: if I differentiate this with respect to sigma, I get (1, 1), which is never 0. Therefore the Jacobian has maximal rank — in this case the matrix is just a number, a 1 times 1 nonzero matrix. So the parametrization is regular, and the vector (1, 1) generates the tangent space. Obviously. So this is a very simple case, just one dimension, OK? But just to understand — is it clear? That's why I'm trying to do the exercise first, so that if you understand the exercise, then you can translate the claim, or the theory, looking at the exercise step by step. Because in the theory, sigma is an (n minus 1)-dimensional object in R^n, so it's much more difficult, and you have an embedding map with several components and so on and so forth. Anyway, let us try to understand at least this example. Now, let us check our transversality condition: the determinant of what? We have already computed the tangent vector; there is just one, because n minus 1 equals 1. And then b — I have not written who is b, maybe. Who is b of (x, y)? It is clear, because the PDE reads b1 of (x, u of x) du/dx1 plus b2 of (x, u of x) du/dx2 equal to 1. So if b1 of (x, y) equals y, I get exactly the product u du/dx1, and if b2 of (x, y) equals 1, I get exactly the term du/dx2. Is it clear? So b of (x, y) is (y, 1). I can write also capital B — we don't need it for the moment, but capital B is (y, 1, 1). So let us write down the transversality condition. What is this matrix? It's a 2 by 2 matrix, because n equals 2, and we know that in general it is an n times n matrix; therefore here we need a 2 by 2 matrix. And what is this 2 by 2 matrix?
Tell me, what is this 2 by 2 matrix? The first column is (1, 1): that is the vector spanning the tangent space to sigma, which is just one-dimensional. And then I have to write down b of (phi of sigma, u bar of phi of sigma). U bar of phi of sigma — yes, because u bar is defined at a point of the physical space, x bar, but x bar comes from the parameter sigma through phi. So let me write u bar of phi of sigma; one has to be extremely precise here. Very often one identifies capital U with its image, but if I want to be really very precise, I have to keep the embedding map phi and so on, OK? So which is the second column? Exactly: since b of (x, y) is (y, 1), the second column is (one half sigma, 1). Let me check — yes, correct. So the determinant of this matrix is 1 minus one half sigma, and this is nonzero for any sigma between 0 and 1, because our U was (0, 1). So 1 minus one half sigma is different from 0 in our interval (0, 1). Therefore we can start the theory: we have transversality, and we can at least write down the system of characteristics. So x dot equals b of (x, y), x of 0 equals phi of sigma; y dot equals 1, and y of 0 equals u bar of phi of sigma. Explicitly: x1 dot must be equal to y, x2 dot must be equal to 1, y dot must be equal to 1, with the initial conditions phi of sigma equal to (sigma, sigma) and y of 0 equal to one half sigma, right? OK. So first of all, x2 of (s, sigma): x2 dot equals 1 and x2 of 0 equals sigma, so x2 equals s plus sigma. Next, y of (s, sigma) must be, again, s, but now plus one half sigma: y equals s plus one half sigma. And x1 dot is y. You see, now they are coupled.
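The 2 by 2 transversality check above can be reproduced in a few lines of SymPy — first column the tangent vector phi'(sigma) = (1, 1) of the segment phi(sigma) = (sigma, sigma), second column b evaluated on the data, b(phi(sigma), u_bar(phi(sigma))) = (sigma/2, 1) since b(x, y) = (y, 1):

```python
import sympy as sp

sigma = sp.symbols("sigma")

# Columns: tangent vector phi'(sigma) = (1, 1), and
# b(phi(sigma), u_bar(phi(sigma))) = (sigma/2, 1).
M = sp.Matrix([[1, sigma / 2],
               [1, 1]])

d = M.det()
print(d)  # 1 - sigma/2, nonzero for every sigma in (0, 1)
```

Since 1 - sigma/2 only vanishes at sigma = 2, which lies outside the parameter interval (0, 1), transversality holds on all of U and the theory can start.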
This is an example in which we see exactly the coupling, because to solve for x1 we need to know y. Fortunately, y was independent, so we can put y here: x1 dot equals s plus one half sigma. Integrating, and using that for s equal to 0 it must be sigma, this gives x1 of (s, sigma) equal to s squared over 2 plus one half s sigma plus sigma. So we are in good shape, because we have the explicit map x of (s, sigma), which is (s squared over 2 plus one half s sigma plus sigma, s plus sigma), OK? The point now is that we have to invert it, and there is no time anymore. So please, as homework — we will do it at the beginning of tomorrow's lecture, so even if you are not able to do it, we do it tomorrow: invert this map, write u of x, and then prove the claim. The claim was that, in general, the solution is u of x equal to y of (s of x, sigma of x), OK? We'll do this tomorrow.
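Before inverting the map (the homework), it is worth checking that the explicit characteristics really solve the coupled system with the right initial data — a quick symbolic verification:

```python
import sympy as sp

s, sigma = sp.symbols("s sigma")

# Explicit characteristics found in the lecture.
x1 = s**2 / 2 + s * sigma / 2 + sigma
x2 = s + sigma
y = s + sigma / 2

# ODE system (derivatives in s): x1' = y, x2' = 1, y' = 1.
assert sp.simplify(sp.diff(x1, s) - y) == 0
assert sp.diff(x2, s) == 1
assert sp.diff(y, s) == 1

# Initial conditions at s = 0: x(0) = phi(sigma) = (sigma, sigma),
# y(0) = u_bar(phi(sigma)) = sigma/2.
assert x1.subs(s, 0) == sigma
assert x2.subs(s, 0) == sigma
assert y.subs(s, 0) == sigma / 2
print("characteristics check out")
```

Note how the coupling appears exactly as in the lecture: x1' depends on y, but y was solvable on its own, so substituting y = s + sigma/2 into x1' = y and integrating closes the system.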