Welcome. This is the last lecture in the linear algebra module, and it is a little abstract: we try to consolidate all the conceptual ideas we have worked with so far, starting from the fundamental notion of a vector space. We quickly recapitulate the definitions of group and field, and then continue the discussion on vector spaces, linear transformations, and so on. We will find that the mathematical and computational tools with which we have been working till now are all products of the basic abstract ideas in this area.

First, the group. The mathematical structure of a group is defined by a set G and a binary operation, say denoted by '+', fulfilling four requirements. The first is closure, which is essentially the definition of the binary operation: combining two members of the set through this operation produces a result that is also in G, so the operation is defined within the set. The other requirements are associativity of the operation; the existence of an identity element, that is, an element of the set which, added to any other element from the left or the right, gives that element back (we can denote this identity by 0); and finally the existence of an inverse: for every element a in G there must be another element, denoted -a, which added to a from either side gives the identity element 0. If these are fulfilled, then the set G together with the operation '+' defines a group.

Consider these examples: the integers with ordinary addition, the set of real numbers with the same addition, the set of non-zero rational numbers with multiplication, and 2x5 real matrices with matrix addition. The group structure is evident in all of these. Rotations in geometric space are also an example of a group: if you compose two rotations, the resulting overall movement is again a rotation, and the other conditions are fulfilled as well. Now, if a + b and b + a are equal for all a and b, then in particular we have what is called a commutative group. All the examples above are commutative groups, but 3D rotations are not commutative, so they give an example of a non-commutative group. There is also the notion of a subgroup: a subset of G, with the same binary operation, may constitute a group in itself; if it fulfils the group requirements, it is called a subgroup of the original group.
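As a quick illustration of these points (not from the lecture itself), here is a minimal NumPy sketch: the rotation matrices, axes, and angles are arbitrary choices for demonstration. It checks closure (the composition of two rotations is again a rotation, i.e. an orthogonal matrix with determinant +1) and shows that 3D rotations fail commutativity.

```python
import numpy as np

def rot_x(t):
    """Rotation by angle t about the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

A, B = rot_x(np.pi / 4), rot_z(np.pi / 3)

# Closure: A @ B is again a rotation (orthogonal, determinant +1).
AB = A @ B
print(np.allclose(AB.T @ AB, np.eye(3)), np.isclose(np.linalg.det(AB), 1.0))

# Non-commutativity: composing in the other order gives a different rotation.
print(np.allclose(A @ B, B @ A))   # False
```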
Now, with the definition of a group in the background, the definition of a field becomes easier. A set F with two binary operations, one denoted '+' resembling ordinary addition and the other denoted '.' resembling ordinary multiplication, defines a field if it satisfies these requirements. First, the set F with the operation '+' is a commutative group, whose identity element we denote by 0. Next, if we remove this 0, the identity element of that group, from F, then what remains, together with the multiplication '.', forms another commutative group. Apart from these two commutative groups, we have the distributive property: multiplication is distributive over addition, that is, a.(b + c) = a.b + a.c for all a, b, c in F. If all of this holds, then F with the two binary operations defines a field. This concept of field is actually the abstraction of a number system: whatever is defined here formally applies to all the number systems we use. The rational numbers, real numbers, and complex numbers all fulfil these requirements, so all of these are examples of fields; they are complete number systems, complete in a certain sense. Here we also already have an example of a subfield: the set R of real numbers is a subset of the set C of complex numbers, and with the same addition and multiplication rules of complex numbers you can carry out real addition and multiplication as well; therefore R is actually a subfield of C.

Now that we have groups and fields defined, with their help we can define what is called a vector space. A vector space is defined by, first, a number system: a field F of scalars (elements of the set F are quite often referred to as scalars); then a commutative group V of vectors, with its own addition rule. Apart from these, we have a binary operation between the field and the commutative group, that is, between a scalar and a vector, which we call scalar multiplication, such that for scalars alpha, beta and vectors a, b the following relationships hold. The first requirement is that a scalar multiple of a vector is also a vector: if you take a vector a from V and a scalar alpha from F, then the result alpha a is again in the set V of vectors, so the scalar multiplication operation is well defined. There is an identity element: the unity of F, the multiplicative identity, multiplied with a vector gives the same vector back. There is associativity: if we multiply a vector first by 3 and then by 2, the result is the same as multiplying it in one shot by 6; if alpha (beta a) = (alpha beta) a for all alpha, beta in F and all a in V, then scalar multiplication is associative as well. Then there are two distributive properties: scalar distributivity, (alpha + beta) a = alpha a + beta a, and vector distributivity, alpha (a + b) = alpha a + alpha b. When all these conditions hold, what we have is a vector space V, containing a great many vectors, all defined over the field F of scalars.
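The scalar-multiplication conditions can be spot-checked numerically; the following sketch (an illustration, with randomly chosen scalars and vectors in R^3 over the real field) verifies the identity, associativity, and both distributivity requirements.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = rng.standard_normal(2)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# Identity: 1 * a == a
print(np.allclose(1.0 * a, a))
# Associativity: alpha * (beta * a) == (alpha * beta) * a
print(np.allclose(alpha * (beta * a), (alpha * beta) * a))
# Scalar distributivity: (alpha + beta) * a == alpha * a + beta * a
print(np.allclose((alpha + beta) * a, alpha * a + beta * a))
# Vector distributivity: alpha * (a + b) == alpha * a + alpha * b
print(np.allclose(alpha * (a + b), alpha * a + alpha * b))
```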
Now note that all these conditions are expressed quite compactly here, but if you open up the definitions of commutative group and field, the list is actually much larger: the field contributes 11 small conditions, the commutative group of vectors another 5, and the scalar multiplication 5 more, so 21 conditions are actually written concisely here in terms of field and group. Now, R^n and C^n, the n-dimensional spaces of real and complex coordinate vectors, are examples of vector spaces over the field of real numbers and the field of complex numbers respectively; m x n real matrices again form a vector space of their own, and so on. From here you will note that just as the field is an abstraction of the number system, the vector space is actually an abstraction of the ordinary geometric space in which we live.

Now that we have this formal definition of a vector space, let us try to examine its contents. First of all, we already know that there must be a zero vector in the vector space; that is forced by the requirement that the set V is a commutative group, which must have its own identity element. So there must be a zero element in the vector space; otherwise it would not be a vector space at all. Other than 0, if possible, we take a vector xi_1 in the vector space. With this xi_1, the vector we have picked up, we get a lot of scalar multiples: take a scalar alpha_1, and for every such alpha_1 from the underlying field F we get other vectors alpha_1 xi_1. We develop all such vectors, and from the definition of the scalar multiplication we know that all of these are vectors, that is, all of these are in V. All the vectors which can be generated from xi_1 through a scalar multiple are said to be linearly dependent on xi_1. After we finish collecting all these vectors, we ask: are the elements of V exhausted? Have we covered all the vectors in V? If not, then we take another vector xi_2 in V which cannot be expressed in this form, since V was not exhausted by these multiples; such an outside element xi_2 is linearly independent of xi_1. Now alpha_2 xi_2 gives many other vectors, and the multiples alpha_1 xi_1 and alpha_2 xi_2 can be added together in all combinations, giving lots and lots of vectors in V. Suppose we pick up all of them; then we ask the same question again: in these two rounds, have we exhausted all the elements of the vector space V? And like this we keep on asking.

Rather than asking the same question again and again, let us ask the ultimate question: will this process ever end? It may not; on the other hand, it may. Suppose this process of asking and picking up fresh vectors does end. Then we have what is called a finite-dimensional vector space: the dimension of the vector space is finite. If the process never ends, what we have is called an infinite-dimensional vector space. For the time being, suppose we consider finite-dimensional vector spaces, and suppose in our instance the process ends after n such choices of fresh linearly independent vectors.
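This picking-up process can be mimicked numerically. In the sketch below (illustrative only; the helper greedy_basis is a hypothetical name), a candidate vector is accepted only if it is linearly independent of the vectors already picked, tested via the rank of the stacked columns; in R^3 the process ends after 3 acceptances.

```python
import numpy as np

def greedy_basis(candidates):
    """Accept a candidate only if it is linearly independent
    of the vectors already accepted (rank must grow by one)."""
    basis = []
    for v in candidates:
        trial = np.column_stack(basis + [v])
        if np.linalg.matrix_rank(trial) == len(basis) + 1:
            basis.append(v)
    return basis

rng = np.random.default_rng(1)
candidates = [rng.standard_normal(3) for _ in range(10)]
basis = greedy_basis(candidates)
print(len(basis))   # 3: the process ends once dim(R^3) vectors are found
```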
So that will mean that all the vectors in V can be expressed in this manner, because over n rounds we have added these n contributions and exhausted the contents of V. Every vector in V can be expressed as this linear combination: a general vector chi in the space has the expression chi = alpha_1 xi_1 + alpha_2 xi_2 + ... + alpha_n xi_n, and nothing else is there in the vector space that cannot be expressed in this form. So if the process ended with n choices of linearly independent vectors, we say this number n is the dimension of the vector space: n linearly independent vectors we could find, and more than that we could not. The vectors xi_1, xi_2, xi_3, and so on up to xi_n, the n choices we picked up, form an ordered basis: an ordered set which serves as a basis to represent all vectors in the vector space. And for a particular vector chi, the corresponding coefficients alpha_1, alpha_2, alpha_3, etc. turn out to be the coordinates of chi in that basis. Now we know that R^n, R^m, and so on, the vector spaces over the field of real numbers which we have already studied, are such finite-dimensional vector spaces.

For a vector space, if a subset of it forms a vector space in its own right with the same underlying operations, we say that it constitutes a subspace. For example, in three-dimensional space with a given frame of reference, a plane passing through the origin defines a subspace, because the zero element is there, and all the operations we can define in the full space can equally be carried out within the plane itself. In that case we say the plane constitutes a subspace of the three-dimensional vector space R^3.

Now, with this understanding of vector spaces, we consider two vector spaces V and W, and a mapping T from V to W, and we define what is called a linear transformation. Take two vectors a and b in V; their images T(a) and T(b) are vectors in W. Now, in V you can make a linear combination alpha a + beta b, with alpha, beta lying in the underlying field F; this is also a member of V, so it can be mapped through the same mapping T. The question is the relationship between the mapping of this linear combination and the individual mappings of a and b. If the linear combination of a and b gets mapped to the vector in W which is exactly the same linear combination of T(a) and T(b) on that side, that is, if T(alpha a + beta b) = alpha T(a) + beta T(b) for all alpha, beta in F and all a, b in V, then we say that this mapping is a linear transformation. It is as if, on one side, we mix two liquids in a particular proportion and then boil; in the other instance, we boil the two liquids in the same quantities and then mix the vapours. If the result in both cases is exactly the same, then the whole process is behaving something like a linear transformation.
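A quick numerical illustration of this defining condition (the matrix and the maps below are made up for the purpose): a map given by a matrix passes the linearity test, while a nonlinear map such as elementwise squaring fails it.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))      # T: R^3 -> R^2, T(x) = A x
T = lambda x: A @ x
S = lambda x: x**2                   # elementwise squaring: not linear

a, b = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 2.0, -0.5

# Linear: mapping the combination equals combining the mappings.
print(np.allclose(T(alpha*a + beta*b), alpha*T(a) + beta*T(b)))   # True
# The squaring map fails the same test.
print(np.allclose(S(alpha*a + beta*b), alpha*S(a) + beta*S(b)))   # False
```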
So this is the underlying requirement for a linear transformation, in which V and W are vector spaces over the same field F: the alpha, beta used here in composing a vector in the space V are the same alpha, beta used to compose the corresponding vector in the space W, so the two vector spaces must be over the same field.

Now that we have defined a linear transformation through this requirement, if we want to describe one, how do we describe it? One way is to describe how several vectors get mapped: if we have a big collection of vectors in V, and for each of them we can say where it gets mapped, and through that determine the mappings of all vectors in V, then we would say we have described the linear transformation T. But the vector space V has infinitely many elements, and we are not going to enumerate the mapping of each of them. We want a description which is complete but not that detailed. For that we again take the help of the basis we have defined. For the vector space V there is a basis, say xi_1, xi_2, xi_3, up to xi_n, taking V to be n-dimensional; for W similarly there is a basis, say eta_1, eta_2, eta_3, up to eta_m. Then xi_1, a vector in V, gets mapped to T(xi_1), which is in W. Now, how do we describe T(xi_1) in W? As a linear combination of the basis members of W: suppose T(xi_1) = a_11 eta_1 + a_21 eta_2 + ... + a_m1 eta_m, where a_11, a_21, a_31, etc. are scalars in F. This is a description of how xi_1 gets mapped, and it immediately gives the description of how every vector of the form alpha_1 xi_1 in V gets mapped. Similarly, if we can describe how xi_2, xi_3, xi_4, and so on up to xi_n get mapped into the vector space W, then in effect we have described the complete mapping, the complete transformation. Because all other vectors in V are linear combinations of these only, we can map them individually and work out the same linear combination in the target space W.

So now you find that a_11, a_21, a_31, up to a_m1 describe how the image of xi_1 is expressed in the target space W, and the other scalars a_12, a_22, and so on, in general a_ij, similarly describe how all the basis members of V get mapped into W and how their images are expressed in terms of eta_1, eta_2, eta_3, etc. The coefficients of the first image are collected together in the first column a_1; the coefficients of the other images are similarly collected in the other columns of a matrix. Thus we find that the matrix A with which we have been working all this while is actually the coded description of the linear transformation from the vector space V to the vector space W. This matrix essentially holds mn scalar elements from F which encode the description of a linear transformation. As we discussed earlier, if we write the coordinates of a general element chi of V not as alpha_1, alpha_2, ... but as x_1, x_2, ..., then chi is the corresponding linear combination of the basis members of V, and in our ordinary representational tool we represent it as the column vector [x_1 x_2 ... x_n]^T.
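The column-by-column construction just described can be carried out directly: apply T to each basis vector of V and record the coordinates of the image, in the basis of W, as a column. In this sketch (illustrative; the helper matrix_of is a hypothetical name) both bases are the standard ones, so the coordinates of T(e_j) are simply the entries of the image vector.

```python
import numpy as np

def matrix_of(T, n, m):
    """Build the m-by-n matrix of a linear map T: R^n -> R^m
    with respect to the standard bases: column j is T(e_j)."""
    A = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        A[:, j] = T(e)
    return A

# A made-up linear map: (x, y, z) -> (x + 2y, 3z)
T = lambda v: np.array([v[0] + 2*v[1], 3*v[2]])
A = matrix_of(T, n=3, m=2)
print(A)                          # [[1. 2. 0.], [0. 0. 3.]]

x = np.array([1.0, 1.0, 1.0])
print(np.allclose(A @ x, T(x)))   # the image of x has coordinates A x
```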
So this column vector is actually a listing of the coordinates of the vector chi in terms of the basis members. Now, similarly, the mapping T(chi) can be expanded, and consulting the expressions for T(xi_1), T(xi_2), etc. which we just worked out, we find that its coordinates are exactly the elements of Ax. Thus the basis vectors of V, the domain of the linear transformation, get mapped to vectors in W whose coordinates are listed in the columns of the matrix A, and a vector of V having its coordinates in x gets mapped to a vector in W whose coordinates are obtained from the matrix-vector product Ax.

The understanding in this whole discussion is that the vector chi is the actual mathematical object in the set V, while the column x, an n-dimensional column vector in R^n, is merely a list of its coordinates; and T from V to W is a linear transformation, a geometric event, whose description is stored in the rectangular array of numbers which is the matrix A. Therefore, by changing the bases of V and W, the coordinates x change in order to describe the same object, the vector, which is a geometric entity; similarly, the linear transformation, the geometric event, remains the same, but with the change of basis of the vector spaces the corresponding matrix encoding, the matrix representation, changes, as we have seen earlier in the context of basis change. In this entire scheme, the matrix representation emerges as the natural description of a linear transformation from one vector space to another. The arrangement of writing the matrix as a rectangular array is a natural outcome of the way we think of linear transformations between vector spaces, and it has a deep geometric meaning.

Now, as an exercise, consider all the linear transformations that you can define from one vector space V to another W. They too can be collected together to form a set, and you can verify that this set of linear transformations itself forms a vector space. You can analyse and describe that vector space in terms of its dimension, its elements, the way its elements get added, and so on; that I am leaving for you as an exercise. So all linear transformations from one vector space to another together form a vector space of their own.

Now let us continue this discussion to another very important point: isomorphism. Consider a linear transformation T from V to W of the kind which establishes a one-to-one correspondence: for every vector in V you find a vector in W, and for every vector in W you find a vector in V; one element here is directly linked to exactly one element there. In that case you will find that the linear transformation T defines a one-to-one, onto mapping, this mapping is invertible, and for that the dimensions of the two vector spaces must be equal.
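The basis-change remark can be made concrete; in the sketch below (all matrices chosen arbitrarily), if the columns of P and Q hold new bases of V and W expressed in the old coordinates, then the same transformation T is represented in the new bases by Q^(-1) A P, and both representations describe the same image vector.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))      # matrix of T in the old bases
P = rng.standard_normal((3, 3))      # new basis of V (columns); invertible
Q = rng.standard_normal((2, 2))      # new basis of W (columns); invertible

A_new = np.linalg.inv(Q) @ A @ P     # matrix of the same T in the new bases

x_old = rng.standard_normal(3)       # coordinates of a vector, old basis
x_new = np.linalg.solve(P, x_old)    # same vector, new-basis coordinates

# Both coordinate descriptions point at the same image vector in W:
print(np.allclose(A @ x_old, Q @ (A_new @ x_new)))   # True
```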
If for whichever vector you take here you get exactly one there, and for every vector there you get exactly one here, that is a mapping which is invertible, and you can denote the inverse linear transformation by T^(-1). In this kind of situation we say that T defines, or T is, an isomorphism: the word means 'similarly organized'. That means V and W are two vector spaces which are organized in the same way; T defines an isomorphism, and we say V and W are isomorphic to each other. From the definition of an equivalence relation, you can show that isomorphism turns out to be an equivalence relation, and therefore we can call V and W equivalent to each other. They are equivalent in the ordinary sense of the term as well: if we want to perform certain linear operations among vectors in V, it is the same if we first map those vectors to W, conduct the same operations in W, and map the results back through the inverse mapping. In that sense it does not matter whether we conduct our actual operations here or there, as long as we have two-way communication through the isomorphism.

Now consider two vector spaces V and W over the same field and of the same dimension n. Can we define an isomorphism between them? The answer is: of course we can; in fact, we can define as many isomorphisms as we want. Upon a little reflection, you will find that any square non-singular matrix gives you one such isomorphism, connecting the elements of one vector space with the elements of the other in a one-to-one, onto fashion. The simplest one is the identity, where the n basis members of V get mapped exactly to the n basis members of W in the same order; in matrix terminology, that identity transformation is the identity matrix. So you find that the underlying field and the dimension together completely specify the vector space: if you take two vector spaces defined over the same field F and of the same dimension n, then whatever you can do in one you can do just as well in the other. In all practical terms they are actually the same vector space; that is why we say the underlying field and the dimension together completely specify the vector space for all practical purposes. And 'for all practical purposes' is another way of saying 'up to an isomorphism': whatever difference remains is only in the details, since from one of the vector spaces you can always go to the other and come back through that isomorphism, that one-to-one correspondence. So you find that all n-dimensional vector spaces over the field F are actually equivalent and can be considered the same; in particular, this applies to the vectors with which we have been dealing, the representations, the column vectors in which the coordinates are simply listed.
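The 'equivalent in practice' remark can be demonstrated numerically; in this sketch (the matrix and the operation are arbitrary illustrations) any nonsingular matrix M serves as an isomorphism, and a linear operation carried out in W and mapped back through M^(-1) agrees with carrying it out directly in V.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))    # any nonsingular matrix: an isomorphism V -> W
M_inv = np.linalg.inv(M)

a, b = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 1.5, -2.0

# Do a linear operation directly in V ...
direct = alpha * a + beta * b
# ... or map to W, do the same operation there, and map back.
roundtrip = M_inv @ (alpha * (M @ a) + beta * (M @ b))
print(np.allclose(direct, roundtrip))   # True: the two spaces are interchangeable
```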
So in particular, all of these n-dimensional vector spaces are isomorphic to F^n itself: the column vector, the listing of coordinates, is itself a vector space, and is therefore equivalent to all of them. Hence the representations, the column vectors listing the coordinates, can themselves be taken as the objects. For practical purposes there is actually no difference, and that is why, after studying one n-dimensional vector space over a scalar field, we do not have to study another vector space of the same dimension over the same field again: we have actually studied all of them in one shot.

Till now we have found a lot of geometric ideas in the algebraic description of the vector space, and now we bring in another idea from geometry into the algebraic representation: the idea of directions and angles, which we get from the definition of the inner product. In a vector space over the field of real numbers or complex numbers, we can define an inner product, denoted (a, b): a function which takes two vectors from the vector space V and produces a scalar in the field F, where F can be R or C, real or complex. It is defined for all vectors a, b, and it has the following properties. Associativity: if you multiply one of the arguments by alpha, the product also gets multiplied by alpha, (alpha a, b) = alpha (a, b). Distributivity over vector addition. And conjugate commutativity: this operation is not simply commutative, it is conjugate commutative; over the real field it is commutative, the inner product of b and a being the same as that of a and b, while over the complex field (b, a) is the conjugate of (a, b). Notice that this essentially means that if you take a and a as the two vectors, then the inner product (a, a) must equal its own conjugate; so conjugate commutativity forces (a, a) to be real. Only then can we talk of it being a positive or negative real number, and the final requirement makes sense in that context: positive definiteness, that is, (a, a) must be positive or zero, and zero only if a is itself the zero vector. A product satisfying all these requirements is defined as an inner product.

Among the standard examples of inner products, take particular note of the weighted form a^T W b: while defining an inner product as a^T W b over the field of real numbers, one must be very careful to ensure that the weight matrix W is symmetric and positive definite. This is a point we have discussed earlier, and this is the reason for it: if the matrix W is not positive definite, then the positive-definiteness condition may get violated, and (a, a) > 0 will not hold for all a. That is why, for defining a weighted inner product, one must ensure that the weight matrix W is positive definite. Now, a vector space possessing an inner product is called an inner product space; over the field of real numbers we call that space a Euclidean space, and over the field of complex numbers we call it a unitary space. Most of the time we have actually been talking about Euclidean spaces of several dimensions.
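Here is a sketch of the weighted inner product a^T W b (the matrices are illustrative), showing why positive definiteness of W matters: with an indefinite W, the 'squared length' (a, a) can come out negative, violating the positive-definiteness axiom. For a symmetric W, positive definiteness can be checked through its eigenvalues.

```python
import numpy as np

def weighted_inner(a, b, W):
    """Weighted inner product a^T W b over the real field."""
    return a @ W @ b

W_good = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric positive definite
W_bad  = np.array([[1.0, 0.0], [0.0, -1.0]])   # symmetric but indefinite

a = np.array([0.0, 1.0])
print(weighted_inner(a, a, W_good))    # 1.0  > 0, as an inner product demands
print(weighted_inner(a, a, W_bad))     # -1.0 < 0: not a valid inner product

# Positive definiteness check via eigenvalues (W symmetric):
print(np.all(np.linalg.eigvalsh(W_good) > 0))   # True
print(np.all(np.linalg.eigvalsh(W_bad) > 0))    # False
```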
Now, I make the point that for the rest of the course as well, our discussions of multidimensional vector spaces will mostly be associated with Euclidean space; R^n, R^m, etc. are n-dimensional and m-dimensional Euclidean spaces. So we find that inner products bring the ideas of angle and length into the geometry of vector spaces: you know that with the ordinary definition of the dot product, the inner product (a, b) equals the size of a times the size of b times the cosine of the angle between the two vectors. So the idea of angle comes into the picture; in particular, we say that two vectors a and b are at right angles if their inner product is 0, so the question of orthogonality arises. Then the size of a vector, the norm, comes into the picture: the norm is a function from the vector space to the set of real numbers such that ||a|| equals the square root of the inner product of the vector with itself, and it must be positive. Here are some properties of the inner product and norm taken together. Associativity: the norm of alpha times a vector is |alpha| times the norm of the original vector, and so on. Positive definiteness, which we have already seen. And two important inequalities: the triangle inequality, ||a + b|| <= ||a|| + ||b||, and the Cauchy-Schwarz inequality, which says that the absolute value of the inner product is less than or equal to the product of the sizes of the two vectors, |(a, b)| <= ||a|| ||b||. Based on these you can also work out a distance function, or metric: given two vectors, you can define the distance between them in the sense of joining the arrow heads and taking the size of that difference vector.

With this much discussion on vector spaces of finite dimensions, let us now go a little into infinite-dimensional vector spaces. The set of continuous functions over an interval provides such a vector space of infinite dimensions, and it is known as a function space. Suppose we are working with a lot of continuous functions, and we want to represent a real-valued continuous function over an interval by listing its values at several values of x, say N of them, from a to b. A true representation of the function would require this capital N to be infinite, because the function is continuous over the entire interval from a to b. The vector we have written here is an N-dimensional column vector. If we take several functions, and for each function work out its values at these N points to get such column vectors, then will all these possible column vectors together form a vector space? The answer is yes, because they are column vectors, and all sorts of continuous functions over that interval can be kept in the discussion. They will form a vector space of dimension N, and for a more and more precise, true representation, as we keep increasing capital N and take it towards infinity, we will have an infinite-dimensional vector space. So the set, capital F, of real-valued continuous functions over [a, b] forms a vector space which is infinite-dimensional, and you can verify that it is an infinite-dimensional vector space.
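A quick numerical check of the angle formula and the two inequalities named above (illustrative random vectors, standard dot product and Euclidean norm):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = rng.standard_normal(4), rng.standard_normal(4)

na, nb = np.linalg.norm(a), np.linalg.norm(b)

# Angle from (a, b) = ||a|| ||b|| cos(theta)
theta = np.arccos(np.dot(a, b) / (na * nb))
print(np.degrees(theta))

# Cauchy-Schwarz inequality: |(a, b)| <= ||a|| ||b||
print(abs(np.dot(a, b)) <= na * nb)          # True
# Triangle inequality: ||a + b|| <= ||a|| + ||b||
print(np.linalg.norm(a + b) <= na + nb)      # True
# Distance between the arrow heads:
print(np.linalg.norm(a - b))
```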
You can check whether this set forms a commutative group and whether the vector space conditions are met; that basic verification against the definitions of group and vector space is an interesting one to conduct. As far as the commutative group is concerned, in place of a and b think of f_1(x) and f_2(x), two continuous real-valued functions: their sum is another continuous real-valued function defined over the interval [a, b]; similarly, the other requirements are fulfilled; there is a zero function; for every function f you can define a function -f; and f_1 + f_2 is the same as f_2 + f_1. So they form a commutative group. Apart from that, the conditions for being a vector space are exactly the vector space conditions we had earlier, in a way copied from there; you can verify that all of these conditions hold. That will mean that the set, capital F, of all such real-valued continuous functions over the interval [a, b] forms, among themselves, a vector space of infinite dimensions, and the listing of values at selected points is actually just one basis to describe all such functions.

We can talk of linear dependence and linear independence of these functions. Two functions f_1 and f_2 are linearly dependent if one is a scalar multiple of the other, or, equivalently, if some non-trivial linear combination of them vanishes; and if it happens that k_1 f_1 + k_2 f_2 = 0 necessarily implies that k_1 and k_2 are both individually zero, then we say f_1 and f_2 are linearly independent of each other. In general, among n such functions: if you can find k_1, k_2, k_3, up to k_n, not all zero together, such that the linear combination k_1 f_1 + k_2 f_2 + ... + k_n f_n is zero, then you say these functions are linearly dependent among themselves. On the other hand, if you cannot find such a non-zero set, that is, if this linear combination being zero essentially implies that all the coefficients have to be individually zero, then you say these functions are linearly independent. These notions we will use later in detail as tools when we study differential equations. You see, 1, x, x^2, x^3, etc. form a set of linearly independent functions, and quite often this set is used as a basis to describe functions: when you say you have taken a function f and want to describe f(x) as a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ..., you are basically using this set of functions as a basis.

You can define an inner product between functions as well. Suppose f and g are two functions; for f and g we can work out those large column vectors v_f and v_g of sampled values, which have the usual inner product among themselves. As v_f and v_g hold the function values, this inner product is f(x_1) g(x_1) + f(x_2) g(x_2) + and so on. Like this you can also work out a weighted inner product, of the form: the sum of w_i f(x_i) g(x_i), where the weight matrix W is a diagonal matrix with w_1, w_2, w_3, etc. sitting in the diagonal positions. Now, as the number N of such terms becomes extremely large, this large sum of terms gets replaced with an integral.
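As a sketch of these two ideas (the sample points, functions, and weights below are illustrative choices), one can test the linear independence of 1, x, x^2 by sampling them and checking the rank of their value vectors, and form the discrete weighted inner product sum, which, scaled by the spacing, already approximates the integral that appears next.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)     # fine sampling of [a, b] = [0, 1]

# Value vectors of 1, x, x^2 as columns: rank 3 means independence.
V = np.column_stack([np.ones_like(x), x, x**2])
print(np.linalg.matrix_rank(V))      # 3: the three functions are independent

# Discrete inner product sum_i w_i f(x_i) g(x_i) with unit weights,
# scaled by the spacing dx, approaches the integral of w(x) f(x) g(x).
f, g = np.sin(np.pi * x), np.sin(2 * np.pi * x)
dx = x[1] - x[0]
print(np.sum(f * g) * dx)            # ~ 0: f and g are orthogonal on [0, 1]
print(np.sum(f * f) * dx)            # ~ 0.5: squared norm of f
```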
And we say that the inner product in the function space is defined in this manner: the integral of w(x) f(x) g(x) over the interval, the weight w_i being replaced by a weight function w(x). So the summation gets replaced with an integral, and this turns out to be the definition of the inner product in the function space. You can similarly define the norm, and similarly talk about orthogonality: two functions f and g are orthogonal when their inner product turns out to be 0, and in that case we say f and g are orthogonal functions, orthogonal with respect to the weight function w. For the norm, if in both places you put f, you have the integral of w(x) times f(x) squared; evaluate that integral and take its square root, and that is the norm. You can also talk of an orthonormal basis. If you have taken as basis functions f_1, f_2, f_3, f_4, etc., each of which has unit norm in this sense, and every pair of which is orthogonal in this sense, then you have an orthonormal basis for describing functions in that set. For orthonormality of a set of functions you require the condition that the inner product of each pair f_j, f_k in the basis is delta_jk, the Kronecker delta: 1 if j and k are the same, 0 if j and k are different, for all j, k. That way you get an orthonormal basis for the function space. Now, how many such functions f_1, f_2, f_3, f_4 will you need? Since the dimension of the vector space is infinite, you will need infinitely many members in the basis; that means that as a basis you require a family of functions with infinitely many members.

Now, from this discussion we have these important points to note. First, matrix algebra provides a natural tool for describing vector spaces and linear transformations, and the R^n we have studied so far is actually a complete, sufficient representation of all n-dimensional vector spaces over the field of real numbers. The next important point of this lesson is that, through the definition of an inner product, the key ideas of angle and length from ordinary geometry are brought into the discussion of vector spaces; these incorporate the key geometric features of physical space. Another important issue discussed in this lesson is that continuous functions form a vector space of their own, so we can talk of a function space of infinite dimensions; later, when we study differential equations, we will also see how linear operators, linear transformations, acquire a meaning in this kind of function space.

With this, our module on linear algebra is over, and again I will quickly remind you that it is very important to go through these lectures along with the exercises, because many of the ideas are actually left out of the lecture discussions: we are trying to squeeze a lot of topics into a single course, and a lot of conceptual details will become clear when you work out the exercises and consult their solutions. Till now we have completed 15 chapters of the book, and if you find it a little too hectic to complete all the examples, a selection is given in the tutorial plan which appears in the slides of the first chapter. If you complete this much, you will have sufficient background to continue with the rest of the lectures that we take up later, since one of the essential features of this course is the interconnections among its several areas.
So you will find that, though this module on linear algebra is formally over, the ideas developed here will be used throughout the other modules of the course. From the next lecture onwards we will take up a small module on multivariate calculus and vector calculus, which will consist of three chapters of the book: chapters 16, 17 and 18. Thank you.