So, in the previous lecture we saw these structures called fields, and what properties they must satisfy. Before we move on to other topics, let me give you a couple of quick points to think about. We have seen that the real numbers form a field with the usual addition and multiplication, and so do the complex numbers; both are examples of infinite fields. However, you can also have finite fields, for example the integers modulo n, where n is prime, with addition and multiplication taken modulo n.

So, as an exercise, I ask you to check the following. Take a field F, and form the set that looks like { alpha + beta sqrt(d) : alpha, beta in F }, where d is square-free, with this set inheriting the addition and multiplication operations from the original field F. So you have alpha and beta, which are members of the original field F, you build this new set, and I am asking you to check whether this set, under the same addition and multiplication as defined in F, is also a field. We will have a few more problems for you in the problem sheet, but it would be nice if you tried this exercise before attempting those: convince yourself whether this is a field or not. That is one point.

The second thing I would like to point out also underscores why we studied fields, but it will come in the second part of the course, much later. You will see that with the eigenvalue-eigenvector problem comes the associated problem of diagonalizability of a matrix. That is to say: suppose I give you a square matrix A, let us say n x n, with entries over a field.
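If you want to experiment with the exercise above before proving anything, here is a minimal computational sketch (my own illustration, not part of the lecture) for the case d = 2 over the rationals. An element alpha + beta sqrt(2) is stored as a pair of exact rationals, and the key field property to check, the existence of multiplicative inverses, uses the conjugate trick; the helper names are hypothetical.

```python
from fractions import Fraction

D = 2  # a square-free integer

# An element alpha + beta*sqrt(D) is stored as the pair (alpha, beta),
# with alpha and beta exact rationals.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b*sqrt(D)) * (c + e*sqrt(D)) = (ac + D*be) + (ae + bc)*sqrt(D)
    a, b = x
    c, e = y
    return (a * c + D * b * e, a * e + b * c)

def inv(x):
    # Multiply by the conjugate: 1/(a + b*sqrt(D)) = (a - b*sqrt(D)) / (a^2 - D*b^2).
    # Because D is square-free, a^2 - D*b^2 = 0 only when a = b = 0.
    a, b = x
    n = a * a - D * b * b
    return (a / n, -b / n)

x = (Fraction(3), Fraction(5))  # the element 3 + 5*sqrt(2)
one = (Fraction(1), Fraction(0))
print(mul(x, inv(x)) == one)    # x times its inverse gives 1
```

This only spot-checks inverses and closure on sample elements; the exercise, of course, asks you to verify all the field axioms.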
So, that is basically n squared entries, each entry coming from the field. Can we write this matrix in diagonal form? Let us ask the question: does there exist P, also in F^(n x n), such that P^-1 A P = D, with D a diagonal matrix? This is an important question, and it will not have a general answer unless I tell you what the matrix A is. Just to motivate this line of thought, let us take up an example, and we shall see immediately where we could possibly run into trouble.

Suppose A is the 2 x 2 real matrix A = [[0, 1], [-1, 0]] (rows listed). Suppose there exists a real 2 x 2 matrix P such that P^-1 A P is diagonal. What is the potential problem we might run into? Let us see; at each step we shall only do legitimate operations with matrices as you understand them. What we have been given is that P^-1 A P equals some real diagonal matrix, say P^-1 A P = diag(d1, d2), where d1 and d2 are real numbers. Please ask if at any point you have any doubts; we will try to clarify. Now consider (P^-1 A P)(P^-1 A P): it equals the square of this diagonal matrix, which is nothing but diag(d1^2, d2^2). But I can open up the brackets, and what does that result in? The P and P^-1 in the middle will cancel each other into the identity, by our very definition of the inverse, which means P^-1 A^2 P = diag(d1^2, d2^2). But what does A^2 look like? Look closely. Is it the identity? Skew-symmetric? Compute it: A^2 is just the negative of the identity, A^2 = -I. What does that tell us?

What can we say about this, then? Negative of the identity means I can pull the minus sign outside, so P^-1 A^2 P = -(P^-1 P) = -I, and this essentially means diag(-1, -1) = diag(d1^2, d2^2). Does there exist real d1, d2 such that d1^2 = -1 and d2^2 = -1? Of course not. So if you are searching for diagonalizability over real matrices, this matrix would fail to lead you to such a P.

Now, why this particular form of diagonalization, P^-1 A P? That will become clearer when we deal with the eigenvalue-eigenvector problem much later. But at least you understand that this problem is not solvable in general over any field. In fact, the fields over which it behaves well have a special name, which we will see later: if every polynomial with coefficients from a field has its roots also in that field, the field is called algebraically closed. It so happens that the field of complex numbers is algebraically closed, but the field of real numbers is not. So therein lies the problem. There could also be several other reasons which would preclude diagonalizability of a matrix, as we shall see much later again. A lot of promises for later! But at least you understand, hopefully, that whether we are looking at the second problem, that of eigenvalues and eigenvectors, or the first problem, which is solving Ax = b, a deep understanding of the field with which we are dealing is very important. So you must always specify what field you are working with, and depending on that, a lot of things change. Whether this matrix is, for instance, diagonalizable or not rests on the question as
to what field you are considering this matrix over. Now notice that the field of real numbers is a subfield of the field of complex numbers, so I might very well have said: consider this to be a complex matrix, in C^(2 x 2) instead of R^(2 x 2). In that case your answer as to whether it is diagonalizable would have changed completely. You would have said yes, it is diagonalizable, there is no problem: if you allow all sorts of complex values, then of course you can have d1 and d2, namely i and -i, and nothing prevents it. How to find that P is again something we will see with the eigenvalue-eigenvector problem and its solution. So the field over which we are working is of paramount importance.

All right, so having convinced ourselves of the importance of this object called a field that we learned about in the previous class, let us move on to questions which allow us to abstract the sense of what we asked in the form of matrices. Let us take an example. Suppose I have functions of time given by f1(t) = 2 e^(-2t) + 5 e^(-3t) + 7 e^(4t), f2(t) = 6 e^(-2t) + 4 e^(-3t) + 8 e^(4t), and f(t) = 9 e^(-2t) + 9 e^(-3t) + 12 e^(4t). Suppose I give you these three functions and I ask whether the third function f(t) is a linear combination of the first two. You might pose this question in the following form: can I write f(t) = c1 f1(t) + c2 f2(t)? Of course, you should ask me over what field this linear combination is taken, in other words, where c1 and c2 live; let us say the real numbers. So: does there exist real c1, c2 such that this is true? And the important part, the most important part that you must not miss here, is: for all t. You cannot possibly come up with some particular choice of c1, c2, solve for some t, and say "hang on, this is a solution for t". No, that solution must work for all t, because otherwise it becomes like you are solving some equation in t, and that is not what I am asking for.

It turns out that this question fits into the general framework of what we have been trying to answer, that is, Ax = b, once you see what is going on, once you see the pattern. But in order to view this problem by the same token as we have viewed Ax = b and its solution, we need a more general structure. I motivated this somewhat in the previous lecture through a picture in the Euclidean space R^3, where I said: look at this question where you have the x, y and z axes, a vector v1 here, and another vector v2 there, and I am asking you to solve the equation [v1 v2] c = b. Remember, these are 3-tuples, they are in R^3, so when I write them as the columns of a matrix, what is the size of that matrix? 3 x 2, right, and I am asking you to solve this system. Now of course you see the point I am making, the structure I am trying to show you. This is exactly what we have been dealing with: if you look at the column picture of Ax = b, this is the kind of question we have been trying to address all along. Can you write this third vector as a linear combination of the first two? Do you see the pattern of the question we have been trying to answer?

So, in order to address all of this within some common framework, we are now going to define something important. For that we need the following things. One, we need a field, say F. Two, we need a set of so-called "vectors"; the reason I put it in inverted commas is that this is not the vector that you understand in physics
with some magnitude and some direction; it is just a set of objects that we call vectors. So, we need the following objects in order to define what we are trying to define. First, a field, say F. Second, a set of vectors, a collection of objects that we call vectors, denoted by the set V. Third, an operation called vector addition, which maps two objects picked from the set of vectors back to the set itself, and under which V is an abelian group. What that means is that it satisfies the following properties. Closure: you take any two objects from that set of vectors, perform this addition, and what you get back must also belong to that set. Associativity: you take three objects, and it does not matter in which order you add them; you can put the brackets any way you like, so you can actually do away with the brackets without any ambiguity. Third, you need an additive identity: some zero element with respect to this addition which, when added to any object in this set of vectors, gives you back that same vector. You also need an additive inverse: for each object, something which, added to it, gives you back the additive identity. And finally, the addition should be commutative. So you need these properties of the addition.

Apart from this, this does not seem like much. You might ask: what is even the role of the field in all of this? An abelian group (V, +) would have sufficed. But then you have this multiplication operation, which takes an object from the field and an object from the set of vectors V and maps the pair to another object in the set V, so that it is closed; the closure is a foregone conclusion. What are the properties that this so-called scalar multiplication must satisfy?

First: when you have the field, you have the multiplicative identity there. Pick up the multiplicative identity from the field, let it act on any vector, and it must give you back the same vector: 1 v = v. This is true of all vectors v that you can pluck out of the set V. So you pick this 1 from the set of scalars F and this v from the set of vectors V. Please note that there is no point in talking about commutativity here: the two objects are coming from two different sets. If you just mean writing alpha v = v alpha, sure, that is harmless notation, without any room for confusion. But when you talk about commutativity proper, it means you are picking objects from the same set, and here you are not even doing that. That is why we do not talk about commutativity of this operation separately; it does not make sense like that.

The second property: you take alpha and beta from the set of scalars, and alpha (beta v) = (alpha beta) v, for all alpha, beta coming from the field F and v coming from the set of vectors V. So you see, this is non-trivial. What does it mean? This is important: the operation happening inside (alpha beta) is not the scalar multiplication; it is the multiplication as defined in the field. I know this seems trivial, obvious, nothing much to it, but remember, the way we are defining things, it is important to see that this is a property we are invoking, or imposing; it is not something automatic. Every once in a while we think of these things as Euclidean tuples of numbers and take it for granted, but this is an axiom of the vector space; it tells you that this must be true. In other words, performing the multiplication in the field first and then doing one scalar multiplication of the result onto a vector is the same as performing two scalar multiplications in succession: first beta on v, then alpha on the result. And of course you know that in the field this commutes, so whether it is alpha beta or beta alpha matters not; again there is no ambiguity.

What is the third property? Any guesses? Wherever you have two operations, there must be something in the interplay of those two operations: you must have distributivity. And it turns out you need two kinds of distributivity here. The first says that (alpha + beta) v = alpha v + beta v. It is important to note what is happening on each side. The addition alpha + beta is happening in the field; that operation is something already predefined for you. The scalar multiplications on both sides are the newly defined operation. And the addition on the right-hand side, alpha v + beta v, is the addition operation we have defined with respect to the set V. So this plus and that plus are different operations: one is adding two members of the field, the other is adding two members of the set V. That is an important distinction to make. This holds for all alpha, beta belonging to the field F and v belonging to V. And the fourth and final property is alpha (v1 + v2).
So, again, in alpha (v1 + v2) you have the scalar multiplication and the vector addition, both of them operations that we are defining here, and on the right-hand side you have alpha acting on v1 plus alpha acting on v2: alpha (v1 + v2) = alpha v1 + alpha v2, for all alpha in the field F and v1, v2 in the set of vectors V.

So, if you have a field F and a set of vectors V, with two operations that meet all these conditions, then we say that (V, F, +, .) is a vector space. That is it. This allows us to address, in an analogous fashion, a whole bunch of problems that do not look anything like the Ax = b we have seen so far. Note that when this scalar multiplication and this vector addition are obvious or well understood from the context, we simply say "V over F".

A very straightforward example: any field over itself is a vector space, with the usual addition and multiplication defined on the field. Please verify that; there is hardly anything to verify. You just have to check whether you have these properties with the addition and multiplication operations as defined in a field. A field gives you more, but all you require are these properties. So any field over itself is a typical example of a vector space.

What else? We can think of many. Let me give you one more example. Suppose S is a set, and F(S, F) is the collection of functions from S to F. So S is the domain, and each member of F(S, F) maps the objects of S to F. This is a set, and we look at F(S, F) over F. Of course, I need to define what the operations are. What is the addition? You take f1 at some point s1, add f2 at the same point s1, and declare that to be (f1 + f2) at s1: f1(s1) + f2(s1) = (f1 + f2)(s1). Read from right to left, this equation is the definition: the left-hand side, computed in the field, is what defines the new function f1 + f2.
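As a quick aside before the operations on F(S, F) are completed: the claim that any field over itself is a vector space can be verified mechanically for a small finite field. Here is a sketch (my own illustration, not from the lecture) that brute-forces all the vector-space axioms for the integers modulo 5, viewed as a vector space over itself.

```python
# Brute-force the vector-space axioms for Z_5 over itself.
# Vectors and scalars are both residues mod 5; + and * are the mod-5 operations.
P = 5
F = range(P)

add = lambda u, v: (u + v) % P   # vector addition
smul = lambda a, v: (a * v) % P  # scalar multiplication

# Abelian-group axioms for (V, +)
assert all(add(u, v) == add(v, u) for u in F for v in F)
assert all(add(add(u, v), w) == add(u, add(v, w)) for u in F for v in F for w in F)
assert all(add(0, v) == v for v in F)                        # additive identity
assert all(any(add(u, v) == 0 for u in F) for v in F)        # additive inverses

# Scalar-multiplication axioms
assert all(smul(1, v) == v for v in F)                       # 1 * v = v
assert all(smul(a, smul(b, v)) == smul((a * b) % P, v)
           for a in F for b in F for v in F)                 # alpha(beta v) = (alpha beta)v
assert all(smul((a + b) % P, v) == add(smul(a, v), smul(b, v))
           for a in F for b in F for v in F)                 # (alpha + beta)v = alpha v + beta v
assert all(smul(a, add(u, v)) == add(smul(a, u), smul(a, v))
           for a in F for u in F for v in F)                 # alpha(u + v) = alpha u + alpha v

print("Z_5 over itself satisfies the vector-space axioms")
```

For an infinite field like R you cannot enumerate, of course; there the verification is the one-line observation that the field axioms already contain each vector-space axiom.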
So, this is what I am trying to define, and it is defined in this manner; that is the addition. The scalar multiplication is defined similarly: (alpha f)(s1), the new function alpha f evaluated at s1, is defined as the scalar alpha times f(s1), the product taken in the field, for all s1 belonging to S. So both definitions are pointwise, for every s1 in S; that is how I am defining the scalar multiplication and the vector addition. The moment I give you some more concrete examples of the set S, perhaps things will be a little clearer, but we will do that in the next module.
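The pointwise definitions above can be made concrete in code. Below is a small sketch (my own illustration of the definitions, with hypothetical helper names) taking S = R and F = R, so that F(S, F) contains the exponential functions from the earlier example; vector addition and scalar multiplication are built pointwise, exactly as defined in the lecture.

```python
import math

# Elements of F(S, F) with S = R and F = R are just functions R -> R.
def vadd(f1, f2):
    """Vector addition: (f1 + f2)(s) := f1(s) + f2(s), pointwise."""
    return lambda s: f1(s) + f2(s)

def smul(alpha, f):
    """Scalar multiplication: (alpha f)(s) := alpha * f(s), pointwise."""
    return lambda s: alpha * f(s)

# Two of the functions from the earlier example.
f1 = lambda t: 2 * math.exp(-2 * t) + 5 * math.exp(-3 * t) + 7 * math.exp(4 * t)
f2 = lambda t: 6 * math.exp(-2 * t) + 4 * math.exp(-3 * t) + 8 * math.exp(4 * t)

# A linear combination c1 f1 + c2 f2, built from the two operations above.
g = vadd(smul(1.5, f1), smul(0.5, f2))

# Spot-check one axiom, alpha (f1 + f2) = alpha f1 + alpha f2, at sample points.
lhs = smul(2.0, vadd(f1, f2))
rhs = vadd(smul(2.0, f1), smul(2.0, f2))
for t in (-1.0, 0.0, 0.5, 2.0):
    assert abs(lhs(t) - rhs(t)) < 1e-9 * max(1.0, abs(lhs(t)))

print(g(0.0))  # f1(0) = 14, f2(0) = 18, so 1.5*14 + 0.5*18 = 30.0
```

Note that the sampled assertions only spot-check the axiom at a few points; the "for all t" requirement from the lecture is exactly what the pointwise definitions guarantee by construction.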