So last time we talked about linear algebra, and the time before that we talked about linear algebra. I'm not going to show you any more linear algebra now — at least not beyond what we did, which is: you know what a matrix is, you know what a vector is, you know how to combine them and work with them to some extent, you have at least a couple of ways to invert a matrix, and you know that not all matrices are invertible. There's more, a lot more, that you could know, but we don't have time to talk about it all in full detail. If you want, you can read chapter three, which is covered in Math 308. It's about vector spaces in general. We're only focusing on a specific vector space, R^n, and we're not covering all of the further topics on vectors and vector spaces, like eigenvalues and eigenvectors. We will come back to some of the ideas of basis and change of basis in a bit, but I think we've covered enough linear algebra that we can go back to doing some calculus. So the real goal is to "understand," at some level — let's just say smooth, and we'll define what smooth means — functions from R^n to R^m. I put "understand" in quotes because I mean it in the same sense in which you understand functions from R to R: by virtue of taking single-variable calculus, the level at which I want you to understand these functions is about the same level at which you currently understand a function from the reals to the reals. Although, by looking at these more general functions, hopefully that will also enlighten you a little bit about some of the things that were going on in the single-variable case. The main goal of multivariable calculus is to take all of the ideas from your single-variable calculus course and translate them to the more general setting of functions from many variables to many other variables.
So as an example of a function from, say, R^2 to R^3: we put in two numbers and out come three numbers. For each pair of numbers (x, y), we get a triple of numbers. So this might be something like f(x, y) = (x^2 + y, x^3 + y^3, xy). We could also write this as f(x_1, x_2) = (x_1^2 + x_2, x_1^3 + x_2^3, x_1 x_2) — exactly the same thing, just with a substitution of variable names. And we could also write it as y_1 = x_1^2 + x_2, y_2 = x_1^3 + x_2^3, y_3 = x_1 x_2. Those are all different ways of writing the same thing. So there's the multivariable part. We don't have the calculus yet — so far I've only covered one word in that phrase. This is just an example. You're in the right class. Okay. And then, just as in the single-variable case, where you might have f(x) = x^3, which is fine, defined for all real numbers, we can have another case, say g(x) = √x, which is only defined for non-negative numbers — if we're restricting our attention to maps into the reals. So we might have to restrict the domain. So maybe we want to consider f: D → R^m, where D is some subset of R^n, the domain. So you might have a function like f(x, y) = √(xy). In this case I don't have to write the domain: if I don't write the domain, then we can figure it out, or I can tell you the domain. If I don't write it, then here we know that xy has to be non-negative. That means the function is defined in the first and third quadrants, and on the axes, but not in the other two quadrants — because if exactly one of x or y is negative, then we have a problem.
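If it helps to see the "two numbers in, three numbers out" idea concretely, here is a quick Python sketch. The exact components on the board were hard to make out, so take the reconstruction f(x_1, x_2) = (x_1^2 + x_2, x_1^3 + x_2^3, x_1 x_2) as illustrative, not authoritative.

```python
# A sketch of the example map from R^2 to R^3: two numbers in, three out.
# The component formulas are a reconstruction of the board example.
def f(x1, x2):
    return (x1**2 + x2, x1**3 + x2**3, x1 * x2)

print(f(2.0, 1.0))  # (5.0, 9.0, 2.0)
```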
So just as in single-variable calculus, sometimes the domain is important and relevant, and so too in multivariable calculus. We're getting closer to something. And then of course we've already talked about a special class of functions: the linear functions. Let me do an example here, say from R^3 to R^2, using actual numbers: f(x, y, z) = (3x + 2y + z, x + y). Following that same notation as before, let me write it with x_1, x_2, x_3 rather than x, y, z, and think of the outputs as y_1, y_2. So here I'm saying that y_1 = 3x_1 + 2x_2 + x_3 and y_2 = x_1 + x_2, with no x_3. It's just another way of writing the same thing. But we've seen this in terms of matrices: we can rewrite this as the 2-by-3 matrix with rows (3, 2, 1) and (1, 1, 0), times the column vector (x_1, x_2, x_3), equals (y_1, y_2). So these are all different ways of writing the same thing, but here the connection to linear algebra is maybe a little more apparent. And so we see that an n-by-m matrix corresponds to a linear function from R^m to R^n — which is why we looked at n-by-m matrices: these are the things we need in order to talk about linear maps. And just as in single-variable calculus, where we have the graph of some function y = f(x) and, at some point a, we want to approximate it by its tangent line: that tangent line is a linear function — actually affine — from R^1 to R^1, a linear function plus a translation. I have to sort of lift it up so that it's based at the point of tangency, which I'm thinking of as my origin.
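The three ways of writing the linear example — component formulas, the y_1, y_2 equations, and the matrix — can be checked against each other in a few lines. A minimal sketch, with helper names of my own choosing:

```python
# The linear map f(x1,x2,x3) = (3x1 + 2x2 + x3, x1 + x2), written once as
# component formulas and once as a 2x3 matrix, to check the two forms agree.
A = [[3, 2, 1],
     [1, 1, 0]]

def matvec(M, x):
    # matrix-vector product, row by row
    return tuple(sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M)))

def f(x1, x2, x3):
    return (3*x1 + 2*x2 + x3, x1 + x2)

x = (1.0, 2.0, 3.0)
assert matvec(A, x) == f(*x)  # both give (10.0, 3.0)
```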
This is not news to anybody, right? (These are supposed to be axes — I don't know, they're a little lopsided, but they're supposed to be axes.) We need to adapt this kind of idea — tangent-line approximation — to this kind of situation, functions from R^n to R^m. And that's really the goal of maybe the first half of the course; that's what we're going to do for the next couple of weeks, expand upon it. It's hard to just jump right in with functions from R^n to R^m. I mean, I could do it, but nobody would know what the heck is going on. So let's not. Let's specialize to the easy cases. We already know the case from R^1 to R^1, so I'm not going to say anything about it. The next easiest case, or an obvious easy case, is to now consider functions from, say, R^1 into R^n. (You may notice that I keep transposing n's and m's: n was in the domain over there, and now it's in the range. It's just easier to write n's than m's, so you have to figure out which one is which.) So let's think about functions from R^1 to R^n. That would just be rewriting this stuff where I only have one variable going in. And, just to not confuse you, we often call it t — which is also to remind you that you're already familiar with these kinds of things, if we think of t as time. So we might have something like f(t) = (t^2, t), which we can also write, again using that same sort of notation, as x = t^2 and y = t. You've seen this kind of thing before — maybe not with this particular function, but you might have seen it with polar coordinates, or in some other variables. These are parametric equations. I'm thinking of t as a parameter, which you should think of as time.
And so this is describing — in this case, where n is 2 — a curve in the plane; but not just a curve, a way of traversing the curve. So in this particular case, what's this curve going to be? A sideways parabola. Right. How can we see that? Here I can just plug in y for t, because y is t, so x = y^2. (In general I want to pretend I can't just substitute like that, but here it works.) So anyway, it's a sideways parabola. But there's a little more than just a parabola here, because the curve has not only an orientation but a parameterization: a preferred way of traversing the curve. So at t = 0, the point is at the origin. At t = 1 it's at the point (1, 1). At t = 2 it's at the point (4, 2). And over here is t = −1, and so on. So this says: you come in quickly and then go out. The y-value moves at a constant rate, but the x-value decreases very quickly, slows down, and then starts increasing. This curve is the same as, but different from, the curve g(t) = (t^4, t^2), which is the same thing as saying x = t^4, y = t^2. And here, if I just plug in and try to get rid of t, I actually get extra stuff that I don't want. Because if you think about this: first notice that t^4 is always non-negative and t^2 is always non-negative, so I only get the upper half of the parabola; I don't get the lower part. And furthermore, if I think about the negative values of t and the positive values, they trace the same points.
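The difference between the two parameterizations is easy to check numerically. A quick sketch (not from the lecture, but using the board's two functions):

```python
# f(t) = (t^2, t) traces the whole sideways parabola x = y^2, while
# g(t) = (t^4, t^2) traces only the upper half, coming in for t < 0
# and retracing its path for t > 0.
def f(t):
    return (t**2, t)

def g(t):
    return (t**4, t**2)

# g hits the same point at t and -t, so it retraces its path:
assert g(-2.0) == g(2.0)

# every point of g's trace satisfies x = y^2, the same curve as f:
for t in [-2, -1, -0.5, 0, 0.5, 1, 2]:
    x, y = g(t)
    assert abs(x - y**2) < 1e-12
```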
So really what this is describing is: in negative time, the point comes in along the curve, reaches the origin, and then goes back out. For t < 0 the point comes in — think of a point, which is my pen, which comes in quickly, stops at the origin, and then goes back out. So this parameterization is adding extra information. You should think of driving a little car on this path, and this is how it drives: it drives really fast, then slows down and stops, turns around, and goes back. And you could drive this path in lots of other ways, and so on. But it's important to realize these two functions have the same graph — the same half-parabola as their trace — even though they're parameterized in different, perverse ways. So this gives us maybe another example: the graph of any function f(x) from R to R is just the trace of the parameterization g(t) = (t, f(t)). I can get any graph by that parameterization. But writing it this way implies that x is moving with uniform speed. And I might want to write it a different way: if you think of this as something you're driving a car on, then the car is going really fast where the graph is steep, but not so fast where it's flat — in some sense this parameterization is the shadow of an airplane flying over the top, rather than a car on the road. And we could re-parameterize this in some way, and this is useful in many cases. We could parameterize by the distance traveled, and we'd get some other parameterization, say (x(s), y(s)), where the parameter s is always equal to the distance traveled along the curve. We'll come back to that. Is the difference clear? I mean, I didn't actually compute the re-parameterization — I have a general function here; I could write the formula, but it wouldn't be enlightening. Is it clear what I'm saying?
So here, this parameterization g is the shadow of a point moving with uniform horizontal speed, whereas this other parameterization, let's call it h(s), is something that moves along the graph so that points equally spaced in s are equally spaced along the curve. Usually we use s for arc length. Okay? So this is a useful thing to be able to do in many situations. You might imagine, if you're trying to draw a nice curve and you want to sample this graph, you want your sample points pretty much equally spaced along the curve — so you want to put parameter points closer together where f is becoming steep. There's certainly a relationship between this and the derivative of the function f. How many people have worked with parametric functions before? And let me ask the inverse question: how many people have not worked with parametric functions before? Okay, I'll catch you up as we go. So, because of this extra parameter t, we can have a trace that doesn't pass the vertical line test. This is certainly still a function — but it's a function of t. We have to keep track of time: it's not "y is a function of x"; it's "(x, y) is a function of t." So the vertical line test that you learned in, I don't know, ninth grade, or whenever you learned it, is still there, but you can't see it in the picture anymore. And we can also have paths which cross themselves — just as I can cross where you were, as long as I'm there at a different time. You and I can occupy the same space as long as, by the time I get there, you've already left. If you get up, I can sit down in your chair, and in terms of the parameterization we're in the same place, but I'm there later than you. So that's something to keep in mind: a curve that crosses itself is a perfectly good parametric function, because I can still draw it.
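The arc-length sampling idea — put sample points closer together in x where the graph is steep, so they come out equally spaced along the curve — can be sketched numerically. This is my own illustration, not a formula from the lecture, using f(x) = x^2 as a stand-in:

```python
import math

def arc_length_samples(f, a, b, n, steps=20000):
    """Sample the graph (x, f(x)) at n+1 points equally spaced in arc length."""
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    pts = [(x, f(x)) for x in xs]
    # cumulative arc length, approximated by small secant segments
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    # pick the points whose cumulative arc length is k/n of the total
    out, i = [], 0
    for target in [total * k / n for k in range(n + 1)]:
        while cum[i] < target:
            i += 1
        out.append(pts[i])
    return out

samples = arc_length_samples(lambda x: x**2, 0.0, 2.0, 8)
# consecutive samples are approximately equally spaced along the curve:
gaps = [math.hypot(x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in zip(samples, samples[1:])]
```

Notice that the x-coordinates of the samples bunch up near x = 2, where the parabola is steep, which is exactly the "go slower in x where it's steep" behavior described above.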
So every point here is well defined, as long as I remember where my pen was at any given time. It's a perfectly good parametric function; it's just a little hard to follow exactly what's going on. I'm mostly focusing on functions into the plane just because they're a lot easier to draw, but I can certainly think of functions into space, or even functions into the line. In some sense, what you did in single-variable calculus was study parametric functions from R^1 to R^1, which you didn't usually think of in that way. You don't think of the graph of y = x^2 as a point moving on the line that comes in, momentarily stops at zero, and takes off again — that motion is a little hard to see in the graph. So, what do I need to say next? I want to make sure I didn't skip anything. Okay. Usually, in order to do calculus, you do something with limits. So suppose I have — and again I'm currently focusing on a function into R^n — something that looks like f(t) = (f_1(t), f_2(t), …, f_n(t)). In those examples that I gave over there, n was 2, but certainly you should be able to think of n being 5, or something else. And what should the limit as t goes to t_0 of f(t) be? This is the same as saying — let's just write it out — the limit as t goes to t_0 of each of the component functions; certainly I didn't do anything except write it again. And since these components don't depend on one another, this is just taking the limit of each individual piece. If I want to take a limit of a parametric function of one variable, I can just take the limit of the individual components. To see why, let's draw it in two dimensions. What does it mean when I take, say, the limit as t goes to zero of f(t) = (t^2 + 3, sin(t)/t)? This is a perfectly good function.
So this is going to describe some curve. There's a little problem when t is zero here. Why? Because of sin(t)/t: we know it has a limit, but I can't plug in t = 0. But I would like the limit to be 1; I would like to claim that the curve approaches the point (3, 1). At that point the function is actually not defined, because I can't plug in, but I would like to fill that point in. And this makes perfect sense, because if you think about what we're really trying to say here: the trace of the function is going to look something like a curve coming in toward (3, 1), at some scale. What it's really saying, when we take the limit as t goes to zero of this, is that as t gets closer and closer to zero, the point on this curve comes toward (3, 1). But that's the same as saying: if I look at the shadow on the x-axis, as t comes into zero, the x-values get close to 3; and if I look at the shadow on the y-axis, the y-values get close to 1. If you want, I can turn this into a proof with epsilons and deltas, but I'm not going to. Let me instead draw a geometric version of that proof. It says: if I take a box around the point, where the width here is ε_1 and the height here is ε_2, then — provided the limit exists — I can find a range of t, a distance along the curve from this point, that traps the moving point inside the box: the box in the x-values traps the shadow there, and I have to take a corresponding box in t.
And as I shrink that range in t down to zero, this box shrinks down to the point. So the limit of the curve exists exactly when this width shrinks to zero and this height shrinks to zero, which means the limits are the same. So we can take the limit by just looking at the limits of each component, provided those limits exist. If a component limit doesn't exist — say we change this to something like (t^2 + 3, 1/t) — then the x-values come into 3 in a nice way, but the y-values shoot off: for t < 0 they go off one way and for t > 0 they shoot off to infinity the other way, so we don't have a limit as we come into t = 0. Or, you know, of course I can make this even nastier, where it wiggles all around, whatever. So we have the same issues that we have in one variable, and we can just look at them one component at a time, okay? So for functions from R^1 to R^n, it's easy, because the limit just breaks up into n different limits. There's no interrelationship among them, so we can just treat them individually. Which means that now we can think about what a limit like the derivative would mean: the limit as h goes to zero of (f(t + h) − f(t))/h. So again I have this same function from R^1 to R^n, and you have to remember: f(t + h) is a vector, and f(t) is a vector, because f takes R^1 to R^n, so I have a vector with n components coming out. So this is the same thing as saying: in each component, the limit as h goes to zero of (1/h)(f_i(t + h) − f_i(t)). Well, we know how to do this. This is just (f_1′(t), f_2′(t), …, f_n′(t)). But this is a vector.
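The componentwise limit is easy to watch numerically. A quick sketch of the board's example, where neither component depends on the other:

```python
import math

# f(t) = (t^2 + 3, sin(t)/t) as t -> 0 should approach (3, 1),
# even though the second component can't be evaluated at t = 0.
def f(t):
    return (t**2 + 3, math.sin(t) / t)

for t in [0.1, 0.01, 0.001]:
    print(t, f(t))  # each component closes in on its own limit

x, y = f(1e-6)
assert abs(x - 3) < 1e-9
assert abs(y - 1) < 1e-9
```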
So in that example over there, g(t) = (t^2 + 3, sin(t)/t), I can define — let's call it the derivative; I still want to call it that — the derivative, which is going to be (2t, cos(t)/t − sin(t)/t^2); something like that. Now, notation. I'm going to call that g-dot. I could use the prime notation, but I don't like it here — I can never remember who's the vector and what the prime means and who comes first — so I'm going to reserve primes for single-variable derivatives. I'm also okay with writing d/dt of g. But the dot notation is quite common: putting a dot over it means a vector derivative, and two dots is the vector second derivative. And when do you stop doing dots? When it's too hard to count them; nobody writes five dots. This notation is common in physics: the single dot means the velocity vector, and the two dots the acceleration vector, and hardly anybody in physics looks at the third derivative — or if they do, it's called the jerk vector. It's not nice to call it a jerk. So anyway, I'm going to use that notation. What does this say? Well, it's saying exactly what you're thinking it's saying. I'm not going to draw the graph of that function, but if I have some parametric equation, a vector function f(t), describing a curve, then the derivative vector is telling me, infinitesimally, how fast I'm going — that arrow is supposed to be tangent. It's telling me not only in which direction the point is moving, but how rapidly it's moving. [Student:] Why do you have a tangent vector there? Is that plotted on the x-y axes? Because I feel like if you take the derivative of a parametric equation, you could just draw a tangent line to the curve.
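Since the derivative is just taken component by component, it's easy to sanity-check the claimed formula against a finite difference. A numeric sketch (the helper names are mine):

```python
import math

# g(t) = (t^2 + 3, sin(t)/t), with claimed derivative
# gdot(t) = (2t, cos(t)/t - sin(t)/t^2), checked by a central difference.
def g(t):
    return (t**2 + 3, math.sin(t) / t)

def gdot(t):
    return (2*t, math.cos(t)/t - math.sin(t)/t**2)

def numeric_derivative(f, t, h=1e-6):
    (x1, y1), (x0, y0) = f(t + h), f(t - h)
    return ((x1 - x0) / (2*h), (y1 - y0) / (2*h))

t = 1.3
assert all(abs(a - b) < 1e-5
           for a, b in zip(gdot(t), numeric_derivative(g, t)))
```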
I could, but okay — so what's the point? The curve lives in the plane; the parameterization looks like (f_1(t), f_2(t)). The tangent line just gives me the ratio, the slope, whereas the derivative vector is giving me something more: it's telling me not only the slope, it's also telling me the speed. So let's use an example here — instead of doing that one, let's look at this example, wherever it went; it's gone already, but I'll draw it again. Suppose I have g(t) = (t^4, t^2). We know this one: it's the upper half of the sideways parabola, parameterized forward for positive t and backward for negative t. That's my parametric equation. And now, what is the derivative at t = 1? We know that this curve is, in some sense, the same as x = y^2, which is the same as y = +√x. So those are all the same graph, the same curve, but I get slightly different information from each. Here, let's call this function f(x) = √x. And so f′(x) = 1/(2√x), and f′(1) is certainly one half — which tells me that, if I'm thinking of it as a graph, at x = 1 the slope is one half. [Student:] So the slope has nothing to do with the speed, but it does have to do with the direction of the tangent vector? Right — in fact, the slope will be dy/dt over dx/dt, which is dy/dx.
And here, if I take the derivative vector with respect to t, that will be ġ(t) = (4t^3, 2t), so ġ(1) = (4, 2) — which is telling me that at the point (1, 1) I should attach a vector which goes 4 in the x-direction and 2 in the y-direction: this distance is 4 and this distance is 2. [Student:] Oh, so that's where it is. I thought it was just a tangent vector. Well, it is a tangent vector — but it's not only a tangent line, it's a tangent vector. It has magnitude, whereas the slope only tells me the ratio. And in fact, since ġ(t) is just (dx/dt, dy/dt), we know from the chain rule that dy/dx is just their ratio. (I always write it backwards.) [Student:] Can you say that the magnitude of (dx/dt, dy/dt) is the rate of change of the distance you travel along the curve? Yes — the derivative vector carries all the extra information, but if you just want the speed at which you travel along the curve, that's the magnitude of the vector. So this gives me not only the slope, but also the speed. This vector ġ(t) is the velocity vector. And velocity is a vector, because I have not only a speed but a direction, and that's certainly important. And we can do this again: we can take the second derivative, which is the derivative of the first derivative — which is a vector, remember — and this is the acceleration vector. So it tells me both the direction in which I'm accelerating and how fast.
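The velocity-vector example from the board can be checked in a couple of lines: the ratio of the components recovers the slope of y = √x at x = 1, and the magnitude gives the speed, which the slope alone doesn't know.

```python
import math

# g(t) = (t^4, t^2), so gdot(t) = (4t^3, 2t); at t = 1 the velocity is (4, 2).
def gdot(t):
    return (4*t**3, 2*t)

vx, vy = gdot(1.0)
assert (vx, vy) == (4.0, 2.0)

slope = vy / vx             # dy/dx = (dy/dt)/(dx/dt)
speed = math.hypot(vx, vy)  # magnitude of the velocity vector

assert slope == 0.5                         # matches f'(1) for f(x) = sqrt(x)
assert abs(speed - math.sqrt(20)) < 1e-12   # extra information the slope lacks
```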
So in some sense there's not a lot new here, except that we think of everything in terms of vectors. So maybe just to do another example, let's do something in R^3, where everything is the same except we have three dimensions. Say I have a function — let's call it h(t), h for helix — which looks like (cos t, sin t, t). If you think about this: x^2 + y^2 is always 1, which means the point goes around a circle, but the z-coordinate rises at a constant rate. (Of course I always forget which way it's going to go around: it starts, when t = 0, at (1, 0, 0), which is here.) So it spirals around the cylinder, rising at a constant rate. I don't know if you can see the picture — it's a little lopsided, but it's good enough. You should think of a cylinder, and then a curve winding around it like a barber pole. I don't know if you guys still know about barber poles; you still see a few. Anyway, it winds around like a barber pole. Here we can calculate the derivative vector, which from the picture is just going to be a vector shooting off tangent to the helix: ḣ(t) = (−sin t, cos t, 1), just taking the derivative of each piece. And notice that this has constant speed: the magnitude is the square root of sin²t + cos²t + 1, which is the square root of 2. But of course the velocity vector twists around — it rotates around the circle and rises — yet its length is always constant. Now, I've gotten so excited about being able to do derivatives that I forgot to mention continuity. So suppose I have some parametric function f(t).
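The constant-speed claim for the helix is a one-line identity (sin²t + cos²t + 1 = 2), and it's easy to confirm numerically:

```python
import math

# The helix h(t) = (cos t, sin t, t): its velocity hdot(t) = (-sin t, cos t, 1)
# rotates as t changes, but its length is always sqrt(2).
def hdot(t):
    return (-math.sin(t), math.cos(t), 1.0)

for t in [0.0, 1.0, 2.5, 10.0]:
    speed = math.sqrt(sum(c * c for c in hdot(t)))
    assert abs(speed - math.sqrt(2)) < 1e-12
```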
What would it mean to say that f is continuous? [Student suggestion.] No, that would be differentiable. What would it mean? [Student:] No jumps? Right — no jumps. Informally, this curve is continuous because I just drew it without lifting up my pen. But you can't really just look at the picture. Here, let me do an example that behaves badly. How about this one: f(t) = (1/t, 1/t) for t ≠ 0, and f(0) = (0, 0). That's a perfectly well defined function, but its trace is just a line: these values out this way are t > 0, these values out that way are t < 0, and this point at the origin is t = 0. So this function is not continuous, but its trace is just a line — a line traversed in a funny way. The picture you look at is continuous. But notice that we can take derivatives away from zero: the derivative vectors point this way here and that way there, and there is no derivative vector at zero. This is not continuous, because I can't drive along the path in the way described by t without lifting up my pen. So what property would I want, in order to know when something is continuous? [Student:] The limit is the value. Yes — it's the same; it's exactly the same definition as you get in single-variable calculus. So: f is continuous, let's say on an interval — t in some interval, let's make it open, (a, b) — if for every t_0 in the interval, the limit as t goes to t_0 of f(t) is the same as f(t_0). Taking the limit is the same as plugging in. So here, I can't get to this point at the origin by a limit.
Plugging in at t = 0, I have a problem. The function is continuous for positive t, and it's continuous for negative t, but it's not continuous at zero, because there's no limit there to match this point: I would have to take the limit as t goes to infinity in order to approach the zero value. So this function is not continuous. The trace in the plane looks continuous, but it's not a continuous function, because of this thing — I have to jump back and fill in the dot. So that's what I forgot to say. Continuity is pretty easy for parametric equations, because it just means each component function f_i is continuous: the coordinates that describe it are continuous, and all is well and fine. Okay. And differentiable means we can take the derivative: certainly the function has to be continuous to be differentiable, and that limit defining the derivative has to exist. But there's something subtle going on here: we have a difference between differentiable and smooth. Consider a curve with a corner. It is not smooth, but I can draw it in a differentiable way, because I can come in, slow down and stop at the corner, and then turn around — and my tangent vector varies in a continuous way, because I shrink my tangent vector to zero as I come into this point; the tangent vector is zero there, and when it's zero it can turn in any direction. So there's a difference, and this differentiable-with-corners situation is not as useful. So we'll say that a parametric function f(t) is smooth if it is differentiable and also the derivative vector, (d/dt) f(t), is never the zero vector. Because if the derivative vector is the zero vector, then I can make a corner: if I come along and stop, and then turn and go up this way, then my derivative vectors have varied continuously, but this path is not smooth, because I stopped — there was no tangent vector there to pay attention to. There's a subtle difference here: we don't want zero derivatives. Another way of saying this: it says that there's a well-defined tangent direction.
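A concrete instance of "differentiable but not smooth" is the earlier half-parabola parameterization, where the point stops at the origin and turns around. A minimal check:

```python
# g(t) = (t^4, t^2) is differentiable everywhere, but gdot(0) = (0, 0):
# the point stops at the origin and reverses, so the parameterization is
# differentiable but not smooth there, since the derivative hits the zero vector.
def gdot(t):
    return (4*t**3, 2*t)

assert gdot(0.0) == (0.0, 0.0)   # zero velocity: not smooth at t = 0
assert gdot(1.0) != (0.0, 0.0)   # nonzero velocity elsewhere
```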
And the zero vector's direction is sort of not well defined, because it can't be made into a unit tangent vector. Any nonzero vector I can just rescale to length one, but the zero vector I can't scale up to size one. So if I have a curve where I go fast here and slow there, I can always re-parameterize it in such a way that the tangent vectors always have length one. But if I have a corner like this, I can't re-parameterize, because the derivative goes through zero there, and no multiplication or division will change zero into one. Zero is just zero, no matter how much I stretch it. Okay? Now, any questions about this? Calculating derivative vectors for parametric functions is easy — it's just the same as the single-variable case, component by component. But remembering what they mean is really the point here. You should also keep in mind that we have the same rules, or similar rules. So suppose I have two parametric functions — should I call them f and g? Maybe I should. These are vector-valued functions, and since they're vector-valued I can do vector stuff to them. I can add them together, which gives me a new vector-valued function, and when I take the derivative, not surprisingly, this gives me just the sum of the derivatives. And I can scale by a constant c — here c is any real number — and, not surprisingly, that scales the derivative vector by that same constant. I can subtract, and that's the same nonsense. The next one is also not a surprise, but you have to prove it — it's not obvious. If I have two vector functions, I can take their dot product. What will the derivative be? What would you guess? It works just like a product rule: a dot product is truly a product, and it follows the product rule.
The derivative of f dotted with g is the derivative of f dotted with g, plus f dotted with the derivative of g. So it works just like the product rule for single-variable functions, but with different objects. And you can check this just by writing everything out in components and taking the derivative. The same thing happens for the cross product. But first look at the types here: f · g is a real-valued function, so when we take the derivative of something real-valued, the answer should be real — and indeed each term on the right is a dot product of two vectors, which is a number, so this is good; we expect that to be true. If we take the cross product, it works the same way: the derivative of f × g is the derivative vector of f crossed with g, plus f crossed with the derivative vector of g. This only works in R^3, of course: we ask how much a vector in R^3 changes, and we end up with a vector in R^3, so that makes sense — both sides are vectors. The same kind of thing is true if we compose. I might have my function f(t), but now I'm going to rescale t: consider f(φ(t)), where φ is a real-valued function. If I take the derivative — this is really a chain rule, and the chain rule works the same way — I get the derivative vector of f evaluated at φ(t); that's a vector, and that vector gets stretched by φ′(t). So let's do a very easy example, fast. Suppose α(t) = (t^2, sin t). The derivative is just (2t, cos t). If I stretch t by 3 and consider α(3t), the chain rule tells me I take the derivative at 3t and multiply by 3: (2 · 3t, cos 3t) times 3, which is (18t, 3 cos 3t). And if I do the composition first: α(3t) = (9t^2, sin 3t), and when I take the derivative of 9t^2 I get 18t, and when I take the derivative of sin 3t I get 3 cos 3t. So the chain rule works.
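Both the dot-product rule and the chain-rule example can be verified against a finite difference. A numeric sketch (the second function g here is my own choice, just to have something to dot against):

```python
import math

# Check (f.g)' = f'.g + f.g' and the chain rule for alpha(3t), alpha(t) = (t^2, sin t).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f  = lambda t: (t**2, math.sin(t))
fd = lambda t: (2*t, math.cos(t))
g  = lambda t: (math.cos(t), t**3)       # an arbitrary second curve
gd = lambda t: (-math.sin(t), 3*t**2)

t, h = 0.7, 1e-6
numeric = (dot(f(t+h), g(t+h)) - dot(f(t-h), g(t-h))) / (2*h)
exact = dot(fd(t), g(t)) + dot(f(t), gd(t))
assert abs(numeric - exact) < 1e-5

# chain rule: alpha(3t) = (9t^2, sin 3t) has derivative (18t, 3 cos 3t)
alpha3 = lambda t: (9*t**2, math.sin(3*t))
d_exact = (18*t, 3*math.cos(3*t))
d_num = tuple((a - b) / (2*h) for a, b in zip(alpha3(t+h), alpha3(t-h)))
assert all(abs(a - b) < 1e-4 for a, b in zip(d_exact, d_num))
```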