So, in the previous lecture we started to delve into this topic of eigenvalues and eigenvectors without really defining what they are all about, but we at least gave you a motivating example in the form of dynamical systems, particularly second order differential equations. And we showed you that if you are indeed able to evaluate these eigenvalues and eigenvectors, whenever they exist, so we assumed that there are these numbers lambda_1 and lambda_2 in the field F, and there are these vectors v_1 and v_2 which under the action of a certain matrix do not get rotated but are only scaled up or down, then we will be able to very conveniently solve that second order differential equation. And we showed you the way we depict such solutions in the form of phase portraits.

So, we showed you one kind of representation of this solution, where we said that if this is x_1 and this is x_2, which in control theory we like to call the state variables, then with this as your v_2 and this as your v_1, starting from different initial conditions such as this, suppose v_1 corresponds to the slower eigenvalue; by slower I mean, say, lambda_2 < lambda_1 < 0. This is just a refresher of what we did the previous day. So v_1 corresponds to the eigenvector for lambda_1 and v_2 corresponds to the eigenvector for lambda_2, and we showed you that this picture is a distorted version of the nicer picture that we had when the matrix was diagonalized, something like this. So you start from somewhere over here, for instance, and you end up like this; start from somewhere over here and you possibly end up something like this; you start up here and it would be something like this; and so on.

Now, it turns out that the picture is not very different if the signs of these eigenvalues were flipped, in the sense that if you had on the other hand 0 < lambda_1 < lambda_2. So you see what happens then. Which is the faster eigenvalue now? Again, the one that is larger in magnitude, because it is the exponentiation that matters: this blows up at the rate e^(lambda_2 t). With lambda_1 and lambda_2 both positive, what do you expect? As t goes towards infinity, which is what will happen if these are your eigenvalues, the numbers associated with the solution, you will get e^(lambda_1 t) and e^(lambda_2 t), and the one that predominates is the faster one, of course. So as you go to infinity you will tend to do so along the faster eigenvector, obviously. All that you need to do in this case is reverse the arrowheads, and that would give you the phase portrait for this case.

A little more interesting case is the following: when you have lambda_1 < 0 < lambda_2. Now things get interesting, because along one direction there is a tendency to be drawn towards the origin, and along the other direction there is a tendency to be drawn away from the origin. If I look at a diagonal matrix with lambda_1 and lambda_2 on the diagonal and zeros off the diagonal, then of course v_1 is given by (1, 0) and v_2 is given by (0, 1). Let me just call them e_1 and e_2, because v_1 and v_2 we have reserved for the original eigenvectors.
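If you want to see the stable-node picture on a computer, here is a minimal Python sketch. The values lambda_1 = -1, lambda_2 = -3 and the starting points are illustrative choices, not the ones on the board; it uses the closed-form solution in the diagonal coordinates, where each component simply evolves as e^(lambda_i t).

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative stable node: lambda_2 < lambda_1 < 0, so v_1 (the x1 axis) is "slower"
lam1, lam2 = -1.0, -3.0

t = np.linspace(0.0, 5.0, 200)
for x0 in [(2, 2), (-2, 1), (1, -2), (-2, -2)]:
    # Closed-form solution in diagonal coordinates: x_i(t) = e^(lambda_i t) * x_i(0)
    traj = np.array([np.exp(lam1 * t) * x0[0],
                     np.exp(lam2 * t) * x0[1]])
    plt.plot(traj[0], traj[1])

plt.xlabel("x1"); plt.ylabel("x2"); plt.title("Stable node (illustrative values)")
plt.show()
```

Trajectories approach the origin tangent to the slower eigenvector, exactly the behaviour sketched on the board; reversing time (or the signs of the lambdas) reverses the arrowheads.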
This is of course after the transformation: for any 2-by-2 matrix we have assumed that there exist numbers lambda_1, lambda_2 and corresponding vectors v_1, v_2 such that it can be transformed to this diagonal form. Once you have transformed it to this form, these turn out to be the eigenvectors in this form. That allows us to conveniently draw it like this, where, what had we called it in the previous lecture, this is x_1 tilde and this is x_2 tilde. So this is (1, 0) and this is (0, 1); these happen to be the principal directions, the irrotational directions.

So what happens in this case? Along x_1 tilde, if you start from a point on this axis you will tend to move towards the origin, whereas along the other direction you will tend to move away from it, is it not? However, what happens if you start from a point such as this, not on a principal axis? Can you guess what will happen? Of course, there is some attraction towards the origin along this direction, but there is a repulsion along the other direction. So what is the resultant? As I said, treat this like the velocity; that is what your x dot is. Along one direction it is lambda_1 times x_1 tilde, along the other it is lambda_2 times x_2 tilde. So it is going to lead to a velocity direction such as this. At every point, if you know the velocity direction, which is what we call the vector field (nothing to do with the fields and the vectors we have learnt in this course so far), you can just sketch it like this. If you are very close to this point on the other hand, then the magnitude of x_2 tilde is very small, so even though there is a tendency to go like this, it is very small, whereas the component along this direction is high, and you can expect a resultant like so.

If I now extrapolate this idea, and if you permit me to erase this, can I not say that the solution will be somewhat like this? Along one direction there is a tendency to go towards the origin, but you never actually land on the origin unless you start exactly on this axis. No matter howsoever close you start to this axis, if you are not exactly bang on top of it, you will eventually get carried away towards infinity. This is what we call a saddle, a saddle point, for this dynamical system. The earlier case was a stable node; if you reverse the arrowheads it becomes an unstable node; this, on the other hand, is a saddle point.

Why is it called a saddle? It is, you know, maybe just an artist's imagination. A saddle, the English word, is something you put on a horse's back, and if you remember, the shape of a saddle is somewhat like this (excuse my drawing, I am taking artistic liberties here). The dotted line would mean this is a saddle. If you think about it, if you let a ball loose on this, with this being the direction of the head of the horse and this the tail of the horse, the ball will roll along just fine, except that if you are slightly off the midline you would roll off. So there is exactly one line, a thin line, along which if you tread you will land exactly at this point, the minimum, which is the equilibrium.
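The vector-field picture just described can be sketched numerically too. In this small Python sketch the lambdas are again illustrative choices with lambda_1 < 0 < lambda_2, and each arrow shows the velocity (lambda_1 x_1, lambda_2 x_2) at that grid point in the tilde coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative saddle in the diagonal (tilde) coordinates: lambda_1 < 0 < lambda_2
lam1, lam2 = -1.0, 1.5

# Grid of points; the velocity field is (x1_dot, x2_dot) = (lam1*x1, lam2*x2)
x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
plt.quiver(x1, x2, lam1 * x1, lam2 * x2)

plt.xlabel("x1 tilde"); plt.ylabel("x2 tilde")
plt.title("Saddle vector field (illustrative values)")
plt.show()
```

Near the horizontal axis the arrows point inwards, near the vertical axis they point outwards, and everywhere else the resultant carries you away along the unstable direction, just as argued above.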
On the other hand, if you are slightly off center, if you take a top view of the horse's back and you are slightly off center, you would get carried away either in this direction or in that direction. That is what happens on a saddle, on a horse's back, and that is exactly what is happening here.

If I now draw this in the original coordinate system, it would of course look like a distorted version of this, because here the vectors are really nice and orthogonal, but for the original matrix the eigenvectors, these vectors v_1 and v_2, need not be orthogonal. So let us say this is v_1 and suppose this is v_2. Of course, v_1 corresponds to the direction for lambda_1, which is stable. So you will see that it would be something like this. Along one direction it gets stretched, because now you do not have orthogonal vectors v_1 and v_2, unlike e_1 and e_2 which were orthogonal in that nice form. But you see, the picture still remains very much the same in nature. This is how the typical solution around a saddle point looks.

So, at least for planar dynamical systems, you will agree that understanding what kind of numbers lambda_1 and lambda_2 allow us to meet this condition, that A acting on some vector leads to just a scaling of that vector, is useful in solving such differential equations. When we say solutions, again to reiterate, we do not mean just closed form solutions; this is as good a solution as any. When you say solving a differential equation, this is a solution of the differential equation: you pick your initial condition and from there, due to the uniqueness of the solution, you can just go ahead and draw the trajectory on the phase portrait.

Now of course there are several other kinds of possibilities. We will not get into those, because they involve the case when these lambdas are not real: you give me a real matrix, but the lambdas need not be real, and those situations do arise. Just as an instance, take this matrix. Although I am going to do it a little more formally later, you already probably know how to solve for eigenvalues and eigenvectors, so I will give you this example to work on meanwhile: take this as your A matrix, think of ways of solving for the eigenvalues, and see if you get real values for them.

The point is, so far we have assumed that we will be able to get eigenvalues, but what if that is not the case? Or, even if it is not always the case, can we at least come up with some condition under which we will surely be able to obtain them? It is on our wish list that no matter what A matrix is given to me, if I am to solve x dot = A x, I will always be able to find these v_1 and v_2 and lambda_1 and lambda_2 such that A acting on v_1 gives lambda_1 v_1 and A acting on v_2 gives lambda_2 v_2. Why is it on my wish list? Because, for precisely the reason we have seen, I will then be able to easily sketch the phase portrait and talk about the solution of that differential equation. But we cannot leave things to chance and to our wish list; we have to explore whether there is indeed any point behind having such a wish. We can wish for a lot of things; that does not mean they turn true. So, under what circumstances will we always be able to do this?
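The distorted saddle in the original coordinates can be reproduced with a short sketch as well. The eigenvector matrix V below is an arbitrary non-orthogonal choice for illustration (not the one on the board); A is then built as V diag(lambda_1, lambda_2) V^(-1), and the trajectories hug the v_1 and v_2 directions instead of the coordinate axes:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative saddle in original coordinates, with non-orthogonal eigenvectors
lam1, lam2 = -1.0, 1.0                     # lam1 < 0 < lam2
V = np.array([[1.0, 1.0],                  # columns are v1 and v2 (not orthogonal)
              [0.5, -1.0]])
A = V @ np.diag([lam1, lam2]) @ np.linalg.inv(V)

t = np.linspace(0.0, 2.5, 200)
for x0 in [(1.5, 1.0), (-1.5, -1.0), (1.0, -1.5), (-1.0, 1.5)]:
    # Solve x_dot = A x by diagonalizing: x(t) = V exp(Lambda t) V^{-1} x(0)
    c = np.linalg.solve(V, x0)             # coordinates of x0 in the eigenbasis
    traj = V @ np.array([np.exp(lam1 * t) * c[0],
                         np.exp(lam2 * t) * c[1]])
    plt.plot(traj[0], traj[1])

plt.xlabel("x1"); plt.ylabel("x2"); plt.title("Saddle, non-orthogonal eigenvectors")
plt.show()
```

The picture is a sheared version of the diagonal one, same in nature, exactly as described.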
So, let us see. What we are essentially asking for is on an abstract vector space, finite dimensional now, and that is very important. We are not going to talk about infinite dimensional spaces, because if you are talking about mappings or linear operators on infinite dimensional vector spaces, it turns out there do exist operators which have no eigenvalues whatsoever. So whenever we talk about the existence of eigenvalues, we will confine ourselves to finite dimensional vector spaces.

So you have a linear operator phi acting on V to lead to another object in V, clear? Now, the way we have defined this object, or the way we have carried out our wish list, what we would desire is this: if we have lambda belonging to the field F (the field over which this vector space is described) such that there exists v in V satisfying phi acting on v equals lambda scalar-multiplied with v, then (lambda, v) is said to be an eigenvalue-eigenvector pair for phi. "Eigen" is a German word meaning "own" or "characteristic": the direction is the operator's own, it does not change. So it is an eigenvalue-eigenvector pair; lambda is the eigenvalue, v is the eigenvector. Of course, you probably already know about this from your earlier dabblings in matrix theory and other things, but now we are going to define it over operators. Really, though, is there much of a difference? After all, we are talking about finite dimensional vector spaces and linear operators on them; just assign an ordered basis and you will be talking about matrices throughout.

So, as per this description, or definition if you would, what we are asking for is the following: we have phi(v) = lambda v. Now let us say we assign some ordered basis, so B is an ordered basis for V; this will of course exist because it is a finite dimensional vector space. Then this is one and the same as writing [phi(v)]_B = lambda [v]_B. But what is this going to be equal to? Is this not the same as [phi]_B, the representation of phi under this basis, acting on [v]_B, the representation of v under this basis, which equals lambda [v]_B? And you are back to the domain of Euclidean spaces or things similar to that: if F is R it is exactly a Euclidean space, some R^n; if it is not, if it is, let us say, C, then at least it is going to look like a matrix equation, no doubts about this. For our purposes we will be focusing on R or C. Why C? That will become clear; actually I have already dropped a hint about why C, just go through the example I just mentioned. We will revisit it. Nonetheless, for the time being, what does this mean? Does this not imply that ([phi]_B minus lambda times the identity matrix), this whole thing, acting on the coordinate representation [v]_B, leads to 0?
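The claim that eigenvalues belong to the operator and not to any particular ordered basis can be checked numerically. In this sketch, A is an illustrative matrix standing for [phi] in the standard basis, and P is an arbitrary change-of-basis matrix; both are assumptions for the demonstration:

```python
import numpy as np

# A linear operator given (illustratively) by a matrix A in the standard basis
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# An arbitrary ordered basis B, stacked as the columns of P
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Representation of the same operator under basis B: [phi]_B = P^{-1} A P
A_B = np.linalg.inv(P) @ A @ P

# Eigenvalues are a property of the operator, not of the chosen basis
print(np.sort(np.linalg.eigvals(A)))    # same values...
print(np.sort(np.linalg.eigvals(A_B)))  # ...in either representation
```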
Let us just give it a name, call [phi]_B simply A, and be done with the bracket notation. So we have (A minus lambda I) acting on v equal to 0. What is this equivalent to? In other words, what are we actually trying to solve? Sure, we want to find a kernel, but the kernel of what exactly? It is a chicken-and-egg problem, is it not? Apparently there is one equation, and we want to find, in one shot, both lambda and v, so we have to split it up in such a way that it makes sense. If you are trying to find the kernel, my question to you would be: how do you know what lambda to search for? There are so many possibilities for lambda. You are right, v is something in the kernel, and of course it has to be nontrivial, otherwise it does not make sense: v = 0 is in the kernel of everything, whether the matrix is invertible or not. So you want A minus lambda I to lose rank, you want something nontrivial to be in the kernel. But the point is, what are we going to solve for? How do we tackle this problem when we have both v and lambda to deal with?

So we will now deal with something that hitherto we have not delved into much, and in the future also we will not dig too deep into it, but for this one particular topic we will talk about determinants, and you have to indulge me a bit here. See, what this means is that some nontrivial linear combination of the columns of this object vanishes, as your friend has pointed out. And maybe I should point out here itself that v is not equal to 0, so that we are done with that once and for all instead of imposing it later. So if this object is indeed singular, if it has dependent columns, you remember determinants; again, I am not going into the depths of determinants, but if the columns are linearly dependent on one another, can you not zero out a particular column entirely through determinant operations? You subtract some scaled version of one column from another, and by doing such operations sequentially you can zero out an entire column, because that is exactly what the linear combination equaling 0 means. So the determinant of this object has to be 0, and that is independent of v.

See, if you are trying to find v, you need to know what lambda is. But what I am saying is, if you want to find lambda, you do not have to bother with v. There is this other tool, which I have just plucked out of thin air almost, which is the determinant, because the determinant allows me to evaluate lambda without worrying about what that v is. Whereas if I had started by trying to find v, I would be faced with the proposition of first telling you what lambda is.
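As a small numerical illustration of this decoupling (not part of the lecture's derivation; the matrix A here is an arbitrary illustrative choice): numpy's poly returns the coefficients of det(lambda I minus A) directly, and its roots are the eigenvalues, with v never entering anywhere.

```python
import numpy as np

# An arbitrary illustrative 2x2 matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A),
# highest degree first; it is monic, so the leading coefficient is 1
coeffs = np.poly(A)
print(coeffs)                 # [1., -5., 5.] here: lambda^2 - 5*lambda + 5

# The eigenvalues are exactly the roots of this polynomial; no eigenvector needed
print(np.roots(coeffs))
print(np.linalg.eigvals(A))   # agrees, up to ordering and roundoff
```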
So, this gives us a way out of the chicken-and-egg problem. What we will say is: let us try to figure out the lambdas from det(A minus lambda I) = 0 and solve for them. Okay, but now here is the interesting deal: what is the guarantee that such a solution will exist? For instance, now we will look at this matrix. What do you think the entries of this matrix are, where do they come from, what field? Real? Are you sure you want to stick around with real? Complex? Complex is of course an extension of the real field; so you want to stick around with complex. Here is why. Let us try to see this. This is our A, so A minus lambda I equals the matrix [-lambda, -1; 1, -lambda], and the determinant thereof is lambda squared plus 1. Now if you want lambda squared plus 1 equal to 0, you are led to the situation where lambda is plus or minus i. If you had stuck with this as a real matrix, a matrix whose entries are real, over the real field, then you would have the situation where no eigenvalues exist. If you consider this to be a real matrix, it has no eigenvalues over the real field; but if you go over the complex field, which is of course an extension of the real field, then of course you have eigenvalues. That is fundamentally a property of the complex field: it is so called algebraically closed.

What is at the heart of this, what do you mean by algebraically closed? Take any polynomial whose coefficients come from that particular field; the roots of that polynomial must also belong to that field. That is the definition of an algebraically closed field. So where do polynomials come into this picture? Of course, if you look at the determinant of A minus lambda I, it is always going to be a polynomial in lambda. That is the key observation. In fact, this object is more than just any polynomial: it is what we will call a monic polynomial. What is a monic polynomial? One where the coefficient of the highest degree term is unity. If it is not unity, if it is allowed to be any number, then it could also be 0 and the degree would drop. Because it is monic, it is guaranteed to be of degree n when the dimension of the vector space is n. So let us impose this condition now, that the dimension of V equals n; then this is a monic polynomial of degree n. And now, if you take a monic polynomial of degree n, and if you take the matrix, no matter whether its entries look like real numbers or not, you just assume that the field you are working with is the field of complex numbers, and then, because of the algebraically closed property of the complex numbers, you always end up having n eigenvalues.

So this is the guarantee: over finite dimensional vector spaces, every linear operator over the complex field, in fact over any algebraically closed field, not just the complex field, is guaranteed to have these eigenvalues. So the existence of eigenvalues is at least established. But that is not enough. What we did was something more: we did this diagonalization of the matrix through some transformation, and that diagonalization was made possible not just by these eigenvalues; a very crucial role was played by the eigenvectors, because when we stacked up these eigenvectors side by side, they in fact gave us this particular
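You can verify this example numerically with a small sketch; numpy computes over the complex field by default, which is exactly the point being made here.

```python
import numpy as np

# The lecture's example: real entries, but no real eigenvalues
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Characteristic polynomial det(lambda*I - A) = lambda^2 + 1
print(np.poly(A))              # [1., 0., 1.]

# Over C the roots do exist: lambda = +i and -i
print(np.linalg.eigvals(A))    # [0.+1.j, 0.-1.j]
```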
transformation. So we need some special properties of those eigenvectors as well; the existence of mere eigenvalues will not guarantee that we will be able to come up with such a transformation. Okay, let us just take our next example to motivate this. Now that we know that eigenvalues will exist, let us take this example, the matrix [2, 1; 0, 2], again, let us say, over the complex numbers; whether you take real or complex in this case will not matter. So what do we do? We take the determinant of [2 minus lambda, 1; 0, 2 minus lambda], which is just (2 minus lambda) squared, set it equal to 0, and we get lambda = 2, 2.

So now let us test our understanding through this example; we will also show you how to get those eigenvectors, if possible. What we have here is a situation of repeated roots of that monic polynomial. By the way, the monic polynomial we have just described to you has a name: it is called the characteristic polynomial, chi_A(x) = det(xI minus A), and you equate it to 0. Whether you take A minus xI or xI minus A does not matter; it is just a sign. The solutions of the characteristic polynomial are exactly the eigenvalues; whenever such solutions exist over the field in question, you are guaranteed to get your eigenvalues. So if you choose the complex field, even if the matrix looks like a real matrix, you will always get your requisite number of eigenvalues, which is equal to the dimension of the vector space. So the existence of eigenvalues is done and dusted.

But now let us focus our attention back on this. What are we saying? Now we know the lambdas, so all that we need to do now is evaluate the v's, the eigenvectors. Let us call them v_1 and v_2: one v_1 will come from the eigenvalue 2, the other v_2 will also come from 2. Let us see if we can find two different v_1 and v_2 which would serve our purpose. So what we have is [2, 1; 0, 2] times (v_11, v_12) equals the eigenvalue 2 times (v_11, v_12). Now what happens? The first equation is 2 v_11 + v_12 = 2 v_11, and the second one is 2 v_12 = 2 v_12. What does that tell us? Sure, the rank is reduced, but how do we solve it anyway? We could have pulled everything to one side as well; you could have just written the matrix [0, 1; 0, 0], which is in fact the row reduced echelon form. So what is the solution? What is v_1 going to look like, what are the two tuples here? The first component v_11 can be any arbitrary value; and v_12 has to be zero, because from the first equation itself you see v_12 = 0. The second equation is not really adding anything, is it? So the second component must be zero and the first you just call some v_11.

But hang on, we have two eigenvalues, is it not? So we should have expected two different solutions. Do you think that when you write it in terms of v_21 and v_22 this will look any different? No, it is still going to look the same, so we are going to end up with the same solution. So how many eigenvectors are we getting? We have two eigenvalues at lambda = 2, but we only end up getting one eigenvector. Will that allow us to diagonalize this, the way we have outlined the diagonalization process in the previous lecture? It seems like we are stuck. This is precisely the problem. However, there is one crucial issue I
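Numerically the defect shows up as follows (a sketch; note that numpy's eig always returns n eigenvector columns, so the symptom of a missing eigenvector is that those columns fail to be linearly independent):

```python
import numpy as np

# The lecture's defective example: eigenvalue 2 repeated, only one eigenvector
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # [2., 2.]
print(vecs)   # second column is numerically parallel to the first, (1, 0)

# If the columns spanned the plane we could diagonalize A; here they do not:
print(np.linalg.matrix_rank(vecs, tol=1e-8))  # 1, so A is not diagonalizable
```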
must point out. Even when you have repeated eigenvalues like this, is not the existence of at least one eigenvector guaranteed? Let us say you had a 30 x 30 matrix in which there is one eigenvalue with 15 repetitions. You may not end up having 15 eigenvectors for those 15 repetitions of the same eigenvalue, but will you not at least have one eigenvector corresponding to that eigenvalue? Why, what is the argument? Well, it is very fundamental. You see, for the existence of the eigenvalue itself, A minus lambda I must have a nontrivial kernel: the determinant of A minus lambda I must vanish. If the determinant of A minus lambda I vanishes, it means A minus lambda I is singular; if A minus lambda I is singular, its columns cannot be linearly independent, and therefore there must be something nontrivial in the kernel. So at least for every distinct eigenvalue you must have at least one eigenvector, maybe not as many distinct eigenvectors as the number of repetitions of the eigenvalue, but for every eigenvalue there must be at least one eigenvector. So that part of the existence is done; for every distinct eigenvalue, let me repeat this, there exists at least one eigenvector.

So if you are dealing with an algebraically closed field, you will end up having exactly n eigenvalues, some of which may be repeated and some distinct. But no matter whether they are repeated or distinct, if you have r distinct eigenvalues (you may not have all n distinct), you are going to land up with at least r eigenvectors, one for each, at least. So that much is at least guaranteed. So if someone asks how you can be sure: so far, in the previous lecture, we only assumed that we have this condition, but it turns out that if the eigenvalues are all distinct, you will in fact have one eigenvector for each of them. The problem now is somewhat different: are these eigenvectors going to be linearly independent? That is something which we shall tackle in the next module.
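Finally, a small sketch of that guarantee, with an illustrative 3 x 3 matrix whose eigenvalues are 2, 2, 5: for each distinct eigenvalue, the kernel of A minus lambda I is read off from the SVD and is nontrivial, giving at least one eigenvector per distinct eigenvalue even when repetitions "lose" some.

```python
import numpy as np

# Illustrative matrix with a repeated eigenvalue: eigenvalues are 2, 2, 5
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

for lam in np.unique(np.round(np.linalg.eigvals(A), 8)):
    # Kernel of A - lam*I via the SVD: rows of Vh with (near-)zero singular value
    U, s, Vh = np.linalg.svd(A - lam * np.eye(3))
    kernel = Vh[s < 1e-10]   # each row is an eigenvector for lam
    print(lam, kernel)       # at least one row appears for every distinct eigenvalue
```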