We keep going back and forth a little. We started with matrices, then went to abstractions thereof by considering arbitrary vector spaces. Then we introduced the notion of coordinates and showed that, after all, if these objects are finite dimensional, they can all be seen as essentially the same through a structure-preserving one-to-one correspondence, namely the assignment of coordinates. And then we said, okay, let us investigate matrices a little more closely again. So we will keep going back and forth; it is "there and back again", like Bilbo Baggins.

So what do we do now? We say: since a matrix is really nothing special when the vector spaces in question are n-tuples, and it is the matrix that captures a certain type of operation on n-tuples, can we not think of something that carries out a similar sort of operation on arbitrary vector spaces? In other words, we now want a more abstract notion of what a matrix does to n-tuples, and we want to cook up something more generic, of which the matrix will presumably be a special case.

Two of the key properties of matrices that we have seen are that they are linear, in some sense. In what sense? Take v1 and v2 from F^n and a matrix A belonging to F^{m×n}. If you hit v1 + v2 with A, you can write the result as A v1 + A v2. Apart from that, there is this other property: A acting on c times v1 can be written as c times A v1. So the matrix takes objects that are n-tuples and maps them to objects that are m-tuples. Note that we have used the notation "+" rather frivolously here: on one side the addition is happening over n-tuples.
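These two properties are easy to verify numerically for any concrete matrix; a quick sketch (the particular A, vectors, and scalar below are arbitrary choices of mine):

```python
import numpy as np

# An arbitrary matrix A in F^{m x n} with m = 2, n = 3, F = R,
# viewed as a map from 3-tuples to 2-tuples.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([-1.0, 5.0, 3.0])
c = 2.5

# Additivity: A(v1 + v2) = A v1 + A v2
print(np.allclose(A @ (v1 + v2), A @ v1 + A @ v2))  # True

# Homogeneity: A(c v1) = c (A v1)
print(np.allclose(A @ (c * v1), c * (A @ v1)))      # True
```

Nothing here is special to these numbers; the identities hold exactly, and `allclose` merely guards against floating-point rounding.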
And the addition on the other side is happening over m-tuples. What this allows us to say is that it does not matter whether you first add those two fellows as n-tuples and then operate the matrix on the sum to get an object in F^m, or whether you act on them individually, converting each to an m-tuple, and then use the notion of addition defined on m-tuples. The notion of addition carries over.

Now, for n-tuples and m-tuples you might say, okay, the ideas of addition are very similar, you just take n or you just take m, what is the big deal? But when you are talking about arbitrary vector spaces, where you do not have F^n and F^m but rather V and W and objects that map from V to W, then the notion of vector addition in V and that of addition in W may be quite different, is it not? Of course they have to be over the same field, but the additions may be defined in different manners. Still, if you can cook up something that carries forward this property of a matrix, then it does not matter what the idea of addition in V is and what it is in W: it does not matter whether you first add them in the domain V and then hit the sum with the map, or whether you first act on them individually and convert them to objects in the codomain.

So suppose this is my V and this is my W, and suppose I have a mapping phi which takes objects in V to objects in W; say v1 maps to w1 and v2 maps to w2. What I am saying is: if you cook up a new vector v-hat = v1 + v2, it maps to some object w-hat, and the claim is that this w-hat has to be nothing but w1 + w2. You see what is going on: if there is a mapping that preserves this nice property of what a matrix does on n-tuples and m-tuples, then this is what we would expect it to do. If you take the sum of those two fellows in V itself, according to the rules of vector addition in V, you get v-hat; you then act on it using the mapping phi and obtain some w-hat. On the other hand, suppose your friend decides, "no, I am going to hit v1 with phi first, mapping v1 to w1 and v2 to w2, and since I like the addition operation in W, I will then add them according to the rules of vector addition in W." Your friend will end up with the same object: you obtain w-hat as the image of v-hat, and your friend obtains w-hat as the sum of w1 and w2 in W. See the difference? This is very important to observe; otherwise these things often do not appear very different, but they do once you look at different vector spaces.

Before I erase this, let us point out the analogous fact for scalars: in c v1 the multiplication is happening in F^n, but A v1 is already an object in F^m, so in c (A v1) the multiplication is happening in F^m. Exactly the same story holds for scalar multiplication as well.

So now I can get rid of the matrix and state the general notion, what we call a linear transformation. Suppose V and W are vector spaces over F. Then phi, a mapping from V to W, is said to be a linear transformation if, first, phi acting on v1 + v2 is the same as phi(v1) + phi(v2); once again, the addition inside is according to the rules of V and the addition outside is according to the rules of W. And secondly, phi acting on c v is equal to c times phi(v), the scalar multiple thereof; once again, inside this is scalar multiplication happening in V and outside it is scalar multiplication happening in W.
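To put the definition just stated in symbols (the subscripts on + and the dot are only there to emphasize which space's operations are meant):

```latex
\varphi : V \to W \ \text{is a linear transformation} \iff
\begin{cases}
\varphi(v_1 +_V v_2) \;=\; \varphi(v_1) +_W \varphi(v_2), & \forall\, v_1, v_2 \in V,\\[2pt]
\varphi(c \cdot_V v) \;=\; c \cdot_W \varphi(v), & \forall\, v \in V,\ c \in F.
\end{cases}
```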
Of course, where are these v1, v2 and v coming from? The first property is for all v1, v2 in V, and the second is for all v in V and all c coming from the field. You need the same field, because one side involves scalar multiplication in W and the other scalar multiplication in V; if the two spaces are not defined over the same field, the statement does not even make sense. So if these two properties are true of a mapping from V to W, we say that such a mapping is a linear transformation. That is the definition.

Furthermore, if W is equal to V, then we say phi is a linear operator, or an endomorphism: a linear mapping from V to itself. That is just a name; you will often encounter such alternate names in different contexts, so do not get scared by the term "endomorphism". All that is being said is that it is a linear mapping from V to itself.

Let us see a few examples. The first is the trivial case of matrices, because F^n and F^m are of course vector spaces. Let V = F^n and W = F^m; then phi : V → W mapping x to A x, for A belonging to F^{m×n}, is definitely a linear transformation. It has to be, because that is what inspired us to define the notion in the first place; we have already seen that these properties are true of matrices.

A second example: let phi_A be a mapping from the space of real symmetric n × n matrices to the real symmetric n × n matrices, such that you push in a symmetric matrix P of size n × n and phi_A gives you A^T P + P A, for some A in R^{n×n}. You can go ahead and check that this is linear; in fact it is a linear operator, an endomorphism. And note that the map is well defined: what is the transpose of A^T P + P A? It is the same thing, so the output is again symmetric; no problems there.

A third example: take phi_{A,B} as a mapping from R^{m×n} to R^{m×n} which picks up an m × n matrix X and maps it to A X B, where A is some m × m matrix and B is some n × n matrix.

When I give you these examples, what I expect you to do is check, not just believe me. What do you do to check? Would you believe it if I told you the check is very similar to the check for a subspace? There are two properties, and you can combine them into one: take two arbitrary objects v1 and v2 from V and check whether phi(alpha v1 + v2) is indeed equal to alpha phi(v1) + phi(v2), for all alpha in the field and all v1, v2 in V. That is the whole check. If you set v2 = 0, you have checked scalar multiplication; if you set alpha = 1, you have checked vector addition. So it is the same kind of check as for a subspace: there you check closure, here you check whether this identity holds. Just try out that check in each of these examples.

For instance, in the symmetric-matrix example you pick out some P1 and P2, both symmetric, take P-hat = alpha P1 + P2, and check that it indeed turns out like so. Let me do this one as a check, so that you can try out the rest. Consider P-hat = alpha P1 + P2. What is phi_A of P-hat? It is A^T (alpha P1 + P2) + (alpha P1 + P2) A, because that is what P-hat is. Because of the way matrix products distribute over sums, you can write this as alpha A^T P1 + alpha P1 A + A^T P2 + P2 A, just rearranging the terms. Clubbing terms together, this is phi_A acting on alpha P1, plus phi_A acting on P2; and you can pull the alpha outside, giving alpha phi_A(P1) + phi_A(P2). The rearrangement is so simple that this is definitely a linear operator. Check it for all of the examples I have given you and convince yourself; that will also give you practice in checking whether something is a linear transformation or a linear operator.

So this is an important class of transformations from arbitrary vector spaces to arbitrary vector spaces. A few lectures later in this course we will see that each of these operations, each such phi, can be captured through some matrix, and again the secret ingredient will be the assignment of coordinates. For every vector space involved, get an ordered basis for it; then every vector in that space gets mapped to some n-tuple. A map from one finite-dimensional vector space to another then boils down to a map from n-tuples to m-tuples, for some values of n and m, where n and m just happen to be the dimensions of V and W. And it turns out that matrices are not merely one class of linear transformations: when you are dealing with finite-dimensional vector spaces, matrices are all that there is; there is nothing beyond them. That is not so obvious, but it is what we will see after a bit of basis construction, after seeing how linear transformations are represented in terms of bases and changes of basis. We will see that it is in fact matrices alone that capture whatever is being done through these linear operations.

Before we close today's lecture, since we have already introduced the idea of linear transformations, I might as well talk a bit about certain properties of these transformations. These are not really properties of maps defined over vector spaces; as it turns out, they are properties of mappings from any set to any other, and you have heard of them before. To be precise, the properties I have in mind are the following. The first goes by many names: a one-to-one mapping, or a monomorphism, sometimes also called a monic mapping (very rarely, though), and also an injection or an injective mapping. The other type of mapping we shall concern ourselves with is onto, also called an epimorphism, an epic mapping, or a surjection. And then there are the mappings satisfying both properties: those that are both injective and surjective are called bijections.

Do you know one of the handiest uses of bijective mappings? I am not even talking about linear maps here, just maps between sets: you can actually compare sizes of sets, their cardinalities. So suppose there is a set S1 from which you map to a set S2.
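We will define these precisely in a moment, but as a preview: for finite sets both properties can be checked mechanically. A small sketch (the helper names are my own):

```python
def is_injective(f, domain):
    # one-to-one: distinct inputs never collide in the image
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)

def is_surjective(f, domain, codomain):
    # onto: every element of the codomain has a pre-image
    return {f(x) for x in domain} == set(codomain)

# f(x) = x mod 3, from {0,...,5} to {0,1,2}:
# onto (every residue is hit) but not one-to-one (0 and 3 collide)
f = lambda x: x % 3
print(is_injective(f, range(6)))              # False
print(is_surjective(f, range(6), {0, 1, 2}))  # True
```

For infinite sets you argue instead of enumerate, but the definitions being checked are exactly the same.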
Just in a very naive set-theoretic way, what is the definition of one-to-one? Loosely, would anyone care to volunteer? In other words, if I have this mapping f, which takes objects of S1 to objects of S2, I can say that f(s1) = f(s2) if and only if s1 = s2. Unless you pick up the same object on this side, you will never land on the same object on that side; that is the condition to check. Now, with that in mind, if such a mapping is defined on the entire set S1, then every object of S1 gets mapped to a distinct object of S2, so S2 contains at least as many objects as S1; countable or uncountable, I do not care. That is why we say that a one-to-one mapping implies the cardinality of S1 is less than or equal to the cardinality of S2.

In the same manner, a surjection turns out to be a dual of that notion. What is the definition of a surjection, an onto mapping? Every element of S2 must have a pre-image in S1. Let me write that down: if for every s belonging to S2 there exists an s-hat in S1 such that f(s-hat) = s, then f is onto. So for anything I pick out in S2, I should be able to find some fellow in S1 that maps to it. What does that tell me? I pick out an object in S2 and I know there is someone in S1 mapping to it; I pick out another object, and there is someone for it too. Since I can do this for every fellow, I can exhaust S2 entirely, yet I may not have exhausted S1. Of course, the same fellow cannot map to multiple things; that would be a one-to-many relation, which is not a proper mapping. A mapping is either one-to-one or many-to-one. So every fellow in S2 uses up at least one fellow in S1, if not more: if the mapping is many-to-one, several fellows of S1 may sit over a single fellow of S2, and even after S2 is exhausted there might still be fellows left over in S1. In any case, S1 contains at least as many objects as S2, so I can definitely argue that the cardinality of S1 is greater than or equal to the cardinality of S2.

Now it is a trivial step to see what a bijection gives you. Mind you, I am not even getting into linearity here; we will talk about that in the next lecture, when we discuss linear transformations and their properties vis-à-vis these notions. I am just talking about simple mappings from a set to a set. If you have a bijection, it is a convenient way of showing that two sets have exactly the same number of points, is it not?

And these mappings need not always be expressed in an analytical fashion. For example, take a short line segment and a longer one, and a vanishing point; I have probably shown this example somewhere earlier. Draw lines from the vanishing point through both segments. That is a mapping, and it does not have to be given by some formula y = f(x): a point on one segment maps to a point on the other, and from basic geometry this is one-to-one and onto. So it is a bijection, which tells me that no matter how long a line segment is, it has exactly as many points as the other: any two finite line segments have exactly the same number of points. Very funny things happen here, and the maps need not be linear at all. In the same way you can cook up maps and show that R and C have the same cardinality; you can try looking up some of those interesting conclusions, of course with the caveat that the mappings involved are not linear. But that is a matter for a different discourse.

In the next lecture we shall look at these two properties in the light of the linear transformations we have learned about, and see how we can massage them into a nice, convenient check. If someone tells us a linear transformation is one-to-one, what immediate conclusions can we draw? If someone tells us it is onto, what conclusions may we draw? And if someone tells us it is a bijection, why is it that we will always be able to invert it; why should it always be invertible? We will see all of those things in the next lecture. Thank you.
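As a postscript to the line-segment example: the bijection drawn on the board was geometric, but one can also write an explicit formula. For segments [a, b] and [c, d] (with a < b and c < d), the affine map below is a bijection, which makes the equal-cardinality claim concrete:

```latex
f : [a,b] \to [c,d], \qquad
f(x) = c + \frac{d-c}{b-a}\,(x-a),
\qquad
f^{-1}(y) = a + \frac{b-a}{d-c}\,(y-c).
```

Since f has a two-sided inverse, it is both one-to-one and onto, so the two segments have the same cardinality.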