So far we have discussed several properties of matrices from linear algebra, and we have also seen methods to obtain eigenvalues and eigenvectors of a matrix, which will be used in many applications, including some of our algorithms in pattern recognition. The other important concept related to matrices and linear algebra is vector spaces. We must understand this because most of the algorithms in pattern recognition work in a certain dimension, in a certain space, and there will be concepts of projection onto lower-dimensional subspaces and of the set of vectors that spans such a space. We need at least an elementary understanding of these ideas in this course before we proceed to the algorithms in future lectures on pattern recognition. So what is a vector space? By definition, a real vector space V is a set, a collection, of vectors together with two operations: addition and scalar multiplication. That means I should be able to add two vectors and to multiply a vector by a scalar quantity, and when I perform these two operations on vectors belonging to the vector space V, the results should satisfy a set of properties, sometimes called axioms. This is the basic definition of a vector space, and we will give examples. So, to repeat: a real vector space V is a set of vectors together with two operations, addition and scalar multiplication.
These two operations on the set of vectors should satisfy the following ten properties; I will give each of them its name and the corresponding expression, and go through them as briefly as possible. The first is called closure, a rather casual term, which says that if two vectors u and v are in V, then u + v is also in V. Some books write this as u + v ∈ V; the symbol simply indicates that the sum of two vectors lying within the space also lies within the space. You could ask me what a vector is. In two- or three-dimensional space, a vector is usually represented by a magnitude, which indicates its length, and a direction or orientation, an angle in the two-dimensional (x, y) plane. So in the plane I can draw a vector v with some angle and magnitude, and another vector u with a different angle and a different magnitude. Note the notation: I use the capital letter V for the vector space and small letters such as u and v to indicate individual vectors. You can visualize this in 3D as well: two vectors u and v in three-dimensional space, in this room. Of course, in the field of pattern recognition we will be dealing with very large-dimensional vectors; we can visualize in two or three dimensions and must imagine beyond three.
Typically you have dimensions of a hundred or a few hundred, and in certain cases even millions, but it is easiest to visualize in two or three dimensions. So if u and v lie within the set of vectors of the vector space V in two dimensions, the sum u + v also lies within the vector space; that is the idea. The second property is commutativity, which says that u + v = v + u: you get the same result if you add the vectors in the reverse order; it does not matter in which order you add them. Closely related to vector addition is the third property, associativity, the associative property. Take three vectors u, v, and w; the question is which pair you add first, and the property says you get the same result either way: (u + v) + w = u + (v + w). To be precise, you should put a vector sign on each of these symbols to indicate that they are vectors; some books on vector spaces use a bold symbol, while I am using an arrow at the top to indicate a vector, as distinct from the plain capital V for the vector space. There are a few more such properties, or axioms as they are called; I will go through them quickly, writing each equation and the terminology associated with it. The fourth is the identity element of addition, the zero vector: u + 0 = 0 + u = u, that is, adding the zero vector either way gives the same result. The fifth is called the inverse element of addition: u + (−u) = 0, that is, if you add the inverse of a vector, which is the vector in the reverse direction, you get the zero vector. This is sometimes called the null vector, but zero vector is probably the better way of addressing it, since the word null is used in another context in linear algebra.
These first five axioms concern addition; the remaining five concern scalar multiplication, and they are fairly trivial ones. The sixth is closure under scalar multiplication: if u is a vector and c is a scalar value — not a vector, so I do not put the vector symbol on it — then c·u is also in V. So far we have used u, v, w as vectors, and we will use letters such as c and d for scalar quantities. The seventh says that two successive scalar multiplications are equivalent to a single one: c·(d·u) = (c·d)·u, which is called the compatibility, or associativity, of scalar multiplication. The eighth and ninth are distributive properties. If you multiply a vector u by a sum of two scalars, then (c + d)·u = c·u + d·u, which is the distributive property of scalar multiplication with respect to the addition of scalars; similarly, c·(u + v) = c·u + c·v, which is the distributive property of scalar multiplication with respect to the addition of vectors. Both are distributive properties of scalar multiplication; the only difference is whether you distribute over an addition of scalars or over an addition of vectors. The last one is the identity element of scalar multiplication: if you multiply a vector by the scalar 1, you get the same vector back, 1·u = u. These ten properties, or axioms, combined with the definition that you have a set of vectors and two valid operations that you can perform on any of these vectors, define your vector space.
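Since all ten axioms are concrete equalities, they can be spot-checked numerically. Below is a minimal sketch in Python with NumPy (an assumption of this example, not something used in the lecture), sampling random vectors in R³; passing these checks merely illustrates the axioms, it does not prove them.

```python
import numpy as np

# Spot-check the vector-space axioms for R^3 with randomly chosen
# vectors u, v, w and scalars c, d.
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c, d = 2.5, -1.5

assert np.allclose(u + v, v + u)                 # commutativity
assert np.allclose((u + v) + w, u + (v + w))     # associativity
assert np.allclose(u + np.zeros(3), u)           # zero vector (additive identity)
assert np.allclose(u + (-u), np.zeros(3))        # additive inverse
assert np.allclose(c * (d * u), (c * d) * u)     # compatibility of scalar mult.
assert np.allclose(c * (u + v), c * u + c * v)   # distributivity over vector add.
assert np.allclose((c + d) * u, c * u + d * u)   # distributivity over scalar add.
assert np.allclose(1.0 * u, u)                   # identity element of scalar mult.
print("all sampled axioms hold")
```

Closure itself (axioms one and six) is automatic here: adding or scaling NumPy arrays of length 3 always yields another array of length 3, i.e., another element of R³.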
Keep these axioms in mind whenever you perform any of these operations. Before going into more detailed discussions of vector spaces — vector subspaces, the concept of span, and what it means for a space to be spanned by basis vectors — let us look at a couple of small examples of vector spaces. We usually go with numerical examples, but a vector space can even be made of matrices. Let M be a 2 × 2 matrix with elements a, b, c, d, all real scalar quantities. The set of all possible such matrices, in fact a possibly infinite set, forms a vector space; this set also includes the zero matrix, which is a valid vector within that vector space. Another example: the set Pn of all polynomials of degree less than or equal to n also forms a vector space, provided addition is defined by adding the coefficients of like powers and scalar multiplication by multiplying each coefficient by the scalar. So with these conditions on the coefficients of the polynomials and the corresponding powers, the set of polynomials also forms a vector space.
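The polynomial example can be made concrete by storing a polynomial as its array of coefficients. A small Python/NumPy sketch (the representation and the sample polynomials are made up for illustration):

```python
import numpy as np

# Polynomials of degree <= n form a vector space when represented by their
# coefficient arrays: addition adds coefficients of like powers, and scalar
# multiplication scales every coefficient. Sketch for n = 2.
p = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
q = np.array([4.0, 0.0, -1.0])  # 4 - x^2

s = p + q       # (1+4) + (2+0)x + (3-1)x^2  -> still degree <= 2
m = 2.0 * p     # 2 + 4x + 6x^2              -> still degree <= 2

# Both results are again coefficient arrays of the same length,
# i.e., polynomials of degree <= 2: the set is closed under both operations.
assert s.shape == p.shape and m.shape == p.shape
print(s, m)
```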
After introducing the concept of vector spaces, we extend it further to subspaces of vectors. As written here, a subspace of a vector space V is a non-empty subset H that satisfies the following properties. First, the zero vector of V is in H. Second, if x and y belong to the subset H, which forms a subspace of V, then their sum x + y is also in H. Third, if x belongs to H and c is a scalar constant, then the scalar multiple c·x is also in H. Let us take an example to understand this. Let H be the set of vectors of the form (a, 0, b), where a and b are real numbers; we want to show that H is a subspace of three-dimensional space, R³. We simply verify the three properties for this H. The zero vector of R³ is in H: just set a and b both equal to zero. Next, take x and y belonging to H, say x = (a1, 0, b1) and y = (a2, 0, b2); their addition gives (a1 + a2, 0, b1 + b2), which is also within H, so the second property holds as well. The third is trivial to prove: multiplying by a scalar gives c·x = (c·a1, 0, c·b1), which still remains in H. This is a simple way to show that H forms a subspace of R³. Although H sits inside three-dimensional space, the subspace itself is two-dimensional, because the second component is always zero and only the other two components are active.
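The three checks above can be sketched directly in code. A minimal Python/NumPy illustration (the membership test and the sample vectors are made up for this example):

```python
import numpy as np

def in_H(vec):
    """Membership test for H = {(a, 0, b) : a, b real}, a subset of R^3."""
    return vec.shape == (3,) and vec[1] == 0.0

x = np.array([1.0, 0.0, 2.0])   # a1 = 1, b1 = 2
y = np.array([-3.0, 0.0, 5.0])  # a2 = -3, b2 = 5
c = 4.0

assert in_H(np.zeros(3))   # property 1: the zero vector lies in H
assert in_H(x + y)         # property 2: closed under vector addition
assert in_H(c * x)         # property 3: closed under scalar multiplication
print("H passes all three subspace checks for these samples")
```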
Based on this, the last concept we are going to talk about is basis vectors: what they are, their relation to the dimension of a vector space, and what we mean by basis vectors spanning a space or a subspace — the relationship between basis, dimension, and span. Let us define the word basis. A basis is a set B, finite or infinite — although we will mostly be dealing with finite basis sets — consisting of vectors bi, typically indexed by a set of integers i, so that you can refer to the first vector, the second vector, and so on. The set B must satisfy two conditions: it spans the whole space, and it is linearly independent. Spanning the whole space — a term that is often used casually — means the following: any vector v in the space under consideration can be represented as a linear weighted sum of the basis vectors, v = a1·b1 + a2·b2 + … + an·bn, where each basis vector is multiplied by a corresponding scalar coefficient a1, a2, …, an. So spanning the whole space means that such a combination can represent any arbitrary vector v. There is, in addition, the constraint of linear dependence or independence between the basis vectors bi, which we will come to: if, for any given v, the combination of coefficients ai is unique, then the basis vectors are linearly independent. The vectors in such a set could in principle be linearly dependent or independent — I will give examples using a figure — but if they are linearly independent, I can find a unique set of ai to represent any arbitrary vector v, whereas if they are dependent, there are several, non-unique, combinations of coefficients that represent the same vector v.
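One way to make "spanning" and the uniqueness of the coefficients concrete is to stack the basis vectors as columns of a matrix and solve a linear system. A Python/NumPy sketch with a made-up basis of R³, chosen to be linearly independent but deliberately not orthogonal:

```python
import numpy as np

# "Spanning" made concrete: any v in R^3 is a weighted sum of the basis
# vectors, v = a1*b1 + a2*b2 + a3*b3. Stacking b1, b2, b3 as the columns
# of a matrix turns this into the linear system  B @ a = v.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])

# Full rank: the columns are linearly independent, so the coefficients
# representing any v are unique.
assert np.linalg.matrix_rank(B) == 3

v = np.array([2.0, 3.0, 4.0])  # an arbitrary target vector
a = np.linalg.solve(B, v)      # the unique coefficients a1, a2, a3
assert np.allclose(a[0] * b1 + a[1] * b2 + a[2] * b3, v)
print("coefficients:", a)
```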
Let us take an example. The simplest basis vectors we have in three-dimensional space correspond to the x, y, z directions: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). This can be extrapolated to n dimensions, where each basis vector has n elements. If you look at these vectors stacked as a matrix, you simply get the identity matrix: ones on the diagonal and zeros everywhere else. These are classic examples of linearly independent basis vectors — I repeat, linearly independent basis vectors — which span the space of the corresponding dimension; in this case they span the space of dimension n, and if you take just e1 and e2, you are talking of the two-dimensional space of x and y coordinates. Written out, if v is an n-dimensional vector with components v1, v2, …, vn, then v = v1·(1, 0, …, 0) + v2·(0, 1, 0, …, 0) + … + vn·(0, …, 0, 1); that is, the coefficients multiplying e1, e2, …, en are simply the components v1, v2, …, vn. Since these basis vectors are linearly independent, an arbitrary vector v has a unique set of coefficients. Linear independence means that no basis vector can be represented as a linear combination of the others; when that holds, the coefficients are unique. However, there will be cases when the vectors bi are not linearly independent — that is, they are linearly dependent, unlike these orthogonal examples — and then I will not have a unique set: several choices of coefficients ai can represent the same vector.
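In code, the standard basis is just the set of columns of the identity matrix, and the decomposition above can be verified directly. A small Python/NumPy sketch (the sample vector is made up):

```python
import numpy as np

# The standard basis e1..en is the columns of the n x n identity matrix,
# and the coefficients of any v in that basis are simply v's own
# components: v = v1*e1 + v2*e2 + ... + vn*en. Sketch for n = 4.
n = 4
E = np.eye(n)                       # column i is the basis vector e_{i+1}
v = np.array([5.0, -2.0, 0.5, 7.0])

reconstructed = sum(v[i] * E[:, i] for i in range(n))
assert np.allclose(reconstructed, v)

# Linear independence of e1..en: the identity matrix has full rank.
assert np.linalg.matrix_rank(E) == n
print("v recovered from the standard basis:", reconstructed)
```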
Let me stop with one example in 2D. In two-dimensional (x, y) space, let e1 = (1, 0) along x and e2 = (0, 1) along y. If I have a vector v and want to represent it, with its two components v1 and v2, as a linear combination of e1 and e2, I just project it along both directions and read off the lengths of the components along e1 and e2; those give me the corresponding coefficients, and they are unique — there is no other pair of components I can plug into the combination to represent the same vector v. Now take another pair, f1 and f2, drawn at an angle to each other rather than orthogonally, playing the same role that e1 and e2 did. You follow the same principle: the point p represents the vector v, and just as you projected along e1 and e2, you project along f1 and f2 — to find the component along f1 you travel parallel to f2, and to project along f2 you travel parallel to f1, using the same construction. This oblique construction is also a valid representation, and as long as f1 and f2 are not collinear it is still a unique one, because f1 and f2 remain linearly independent: orthogonality is sufficient for linear independence, but it is not necessary. The representation stops being unique only when f1 can actually be represented in terms of f2, that is, when one is a scalar multiple of the other, or more generally when we take more vectors than the dimension of the space. In that degenerate case, for any v lying in their common span you can represent v using f1 alone or using f2 alone — you do not need both — so many combinations of coefficients become possible, and that is exactly what the linear independence of e1 and e2, like the x and y axes, rules out.
Comparing the two representations: v = α1·f1 + α2·f2 in the f-system, versus v = β1·e1 + β2·e2 in the e-system. The one with β1 and β2 is a unique representation because e1 and e2 are linearly independent basis vectors spanning the two-dimensional space. You can also represent v in the non-orthogonal pair f1, f2, and that representation is likewise unique whenever f1 and f2 are linearly independent; uniqueness fails only when they are linearly dependent. Typically, most representations use an orthogonal basis like e1 and e2, and in that case we say that e1 and e2 span the entire space in two dimensions: every vector v in two-dimensional space can be represented by a linear combination of the two basis vectors e1 and e2. That is what spanning means. A two-dimensional space can in turn be visualized as a subspace of three-dimensional space, and a three-dimensional space as a subspace of certain higher-dimensional spaces. So we stop here, and I hope this knowledge helps you to understand the principles of pattern classification and pattern recognition.
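As a closing numerical sketch of this uniqueness discussion (all vectors here are made-up illustrations, with Python/NumPy assumed): an independent but non-orthogonal pair still yields exactly one coefficient pair, while a linearly dependent pair does not.

```python
import numpy as np

# Case 1: f1, f2 are not orthogonal but are linearly independent,
# so every v in R^2 still has a unique representation.
f1 = np.array([1.0, 0.0])
f2 = np.array([1.0, 1.0])              # not orthogonal to f1, not collinear
F = np.column_stack([f1, f2])
assert np.linalg.matrix_rank(F) == 2   # independent -> unique coefficients

v = np.array([3.0, 2.0])
alpha = np.linalg.solve(F, v)          # the one and only coefficient pair
assert np.allclose(alpha[0] * f1 + alpha[1] * f2, v)

# Case 2: g2 is a scalar multiple of g1 (linearly dependent),
# so a vector in their span has many representations.
g1 = np.array([1.0, 0.0])
g2 = np.array([2.0, 0.0])              # g2 = 2 * g1
v2 = np.array([3.0, 0.0])
assert np.allclose(3.0 * g1 + 0.0 * g2, v2)   # coefficients (3, 0) work...
assert np.allclose(1.0 * g1 + 1.0 * g2, v2)   # ...and so do (1, 1)
print("unique alpha for the independent pair:", alpha)
```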