Let me welcome you to this NPTEL lecture series on linear algebra. This course is usually offered in the first semester of a postgraduate programme in mathematics, be it pure mathematics or applied mathematics. It is an absolutely fundamental area, one that is required both in pure mathematics and in applied mathematics, for example in problems of engineering. Starting from the basics of linear transformations, vector spaces and inner product spaces, one of the main objectives of this course is to prove theorems; there will also be lots of examples that we discuss along the way. The video material has been divided into 14 modules. I will write down the title of each module and briefly tell you what it contains.

Before that, let me mention two books, both classics in a sense. The first is by Paul Halmos; the title is Finite Dimensional Vector Spaces. The latest edition appeared with Springer in 2011, in the Undergraduate Texts in Mathematics, the so-called UTM series. The other classic, followed in many universities, is by the two authors Hoffman and Kunze; the title is Linear Algebra, and I am referring to the Prentice Hall edition which appeared in 2004.

Now let me give you the modules for this course. This introductory lecture will be a brief one; the actual lectures begin from the second lecture. Module 1 is on systems of linear equations, covered approximately in lectures 2 to 7, that is, from the next lecture until the seventh lecture. We will be discussing mainly systems of linear equations.
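To fix ideas before the module begins, here is a minimal Python sketch of solving a small system by elimination. The system and the numbers are my own illustrative example, not from the lecture; the course itself does not use programming.

```python
# Solving the 2x2 system
#   2x +  y = 5
#    x + 3y = 10
# by the elimination process that this module will formalize.
a11, a12, b1 = 2.0, 1.0, 5.0
a21, a22, b2 = 1.0, 3.0, 10.0

f = a21 / a11                 # multiplier that eliminates x from equation 2
a22e = a22 - f * a12          # 3 - 0.5*1 = 2.5
b2e = b2 - f * b1             # 10 - 0.5*5 = 7.5
y = b2e / a22e                # back substitution
x = (b1 - a12 * y) / a11
print(x, y)                   # 1.0 3.0
```

The two steps here, subtracting a multiple of one equation from another and then back substituting, are exactly the elementary row operations the module studies.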
The notions we will discuss in this module are, basically, elementary row operations, and the question of when two systems of linear equations are equivalent. Then we will look at the elimination process, essentially the Gaussian elimination that we learnt in high school, and formalize it by means of elementary row operations. In particular we will look at what are called row reduced echelon matrices, and also at the notion of elementary matrices. We will study both homogeneous and non-homogeneous equations: how do we characterize the existence of solutions in terms of row reduced echelon matrices, in terms of the invertibility of the coefficient matrix, and so on. These are the topics of this first module.

In the next three lectures, that is, lectures 8 to 10, we will cover the second module, titled vector spaces: the axiomatic definition of a vector space, with lots of examples; the notion of subspaces of vector spaces, again with lots of examples; and then spanning sets. We will conclude this second module with the notion of linear independence of vectors, that is, linearly independent subsets of vector spaces.

Module 3 will be basis and dimension, covered in approximately four lectures, lectures 10 to 13; in part of lecture 10 we will discuss the notions of linear dependence and linear independence of vectors. In this module on basis and dimension we will discuss linear dependence and linear independence, look at lots of examples, and study some properties of linearly independent subsets.
Then come spanning subsets and the notion of a basis, which in turn leads to the notion of dimension. Towards the end of this third module we will also discuss the problem of determining the dimension of the sum of two subspaces of a finite dimensional vector space. That will be module 3.

Then, in module 4, perhaps the most important module in this course, we will discuss what are called linear transformations, a notion that is absolutely fundamental in perhaps the whole of mathematics: the definition of a linear transformation, examples, and the two important subspaces associated with a linear transformation, the null space and the range space. We will look at lots of examples and prove an absolutely fundamental result for linear transformations, the rank-nullity theorem. We will also discuss the row rank and the column rank of a matrix and prove their equality. These topics will be covered in lectures 14 to 18.

In the fifth module we will discuss the matrix of a linear transformation, in approximately three lectures, lectures 18 to 20. In lecture 18, towards the end of the fourth module, we will introduce the coordinate matrix of a vector in a vector space, and from lecture 19 onwards, for a couple of lectures, we will discuss the matrix of a linear transformation: what is the matrix of the composition of two linear transformations, and what is the matrix of the inverse transformation. We will also answer the question as to how the matrices of a linear transformation corresponding to two different bases behave.
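Concretely, if P is the change-of-basis matrix, the matrix of the transformation in the new basis is P⁻¹AP. Here is a small sketch with a transformation and a basis of my own choosing, not from the lecture:

```python
# Matrix of one linear transformation in two different bases.
# T swaps coordinates in R^2, so its standard matrix is A = [[0,1],[1,0]].
# New basis: u1 = (1,1), u2 = (1,-1); P has u1, u2 as its columns.
# The matrix of T in the new basis is P^{-1} A P.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 1.0], [1.0, 0.0]]
P = [[1.0, 1.0], [1.0, -1.0]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det, P[0][0] / det]]      # explicit 2x2 inverse
B = matmul(Pinv, matmul(A, P))
print(B)  # [[1.0, 0.0], [0.0, -1.0]]
```

The new matrix is diagonal because u1 and u2 happen to be eigenvectors of T, a phenomenon taken up in module 7.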
How are they related? The notion of a similarity transformation will also be discussed in this fifth module.

In module 6 we will discuss linear functionals, especially what is called the dual space. These topics will be covered in lectures 21 to 25. What is a linear functional? We will prove a representation theorem for linear functionals on a finite dimensional vector space. Then comes the notion of the dual space and, more importantly, the notion of a dual basis, with some numerical examples of constructing dual bases. We will also discuss what is called the annihilator of a subspace; the annihilator is a subspace of the dual space. We will further discuss the double dual space and, of course, the double annihilator, and consider the problem of proving that a subspace is equal to its double annihilator under a certain identification. These are the topics of module 6, linear functionals.

In module 7, approximately lectures 26 to 29, about four lectures, we will discuss eigenvalues and eigenvectors of linear transformations. We will look at examples of linear transformations and ask whether they have eigenvalues, whether they have enough eigenvectors, and so on, together with the matrix formulation of these questions. Then diagonalizability: when is a linear transformation diagonalizable? We will give the definition and look at examples of matrices which are diagonalizable and others which are not. The notion of an eigenvalue leads to the notion of the characteristic polynomial, and we will prove an important characterization of diagonalizability in terms of the characteristic polynomial and the dimensions of the eigenspaces.
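For a 2x2 matrix the characteristic polynomial is the quadratic t² − trace(A)·t + det(A), so the eigenvalues can be read off directly. A sketch with a matrix of my own choosing:

```python
import math

# Eigenvalues of A = [[4, 1], [2, 3]] from its characteristic polynomial
#   det(A - t I) = t^2 - trace(A) t + det(A)
a, b, c, d = 4.0, 1.0, 2.0, 3.0
trace = a + d
det = a * d - b * c
disc = trace * trace - 4 * det        # discriminant of the quadratic
t1 = (trace + math.sqrt(disc)) / 2
t2 = (trace - math.sqrt(disc)) / 2
print(t1, t2)  # 5.0 2.0
```

Since the two eigenvalues are distinct, this particular A is diagonalizable; the characterization mentioned above tells us exactly when that conclusion holds in general.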
We will ask whether the dimensions of the eigenspaces add up to the dimension of the vector space that we start with. We will also discuss the relationship between the minimal polynomial and the characteristic polynomial: these are two polynomials that one naturally associates with a linear transformation, and we will ask how they are related, and what one can say about the minimal polynomial, for instance, when the operator is diagonalizable. Towards the end of this seventh module we will also give a proof of the Cayley-Hamilton theorem for matrices, which informally says that the characteristic polynomial of an operator is an annihilating polynomial of that operator.

In the eighth module, in about three lectures, we will discuss the notions of invariant subspaces and triangulability: what is an invariant subspace of a linear transformation, what is the T-conductor of a vector into a subspace, and the notion of triangulability, which is more general than diagonalizability. We will, of course, also characterize diagonalizability in terms of the minimal polynomial, and discuss independent subspaces. In this module we will further introduce projection operators, and towards the end prove that projection matrices, for instance, are diagonalizable. Those are the topics covered in module 8.

In the next module we look at direct sum decompositions, in about two lectures: what is a direct sum decomposition of a vector space, and what are the relationships between direct sum decompositions and projections? In fact we will show that there is a one-to-one correspondence. We will also return to the notion of invariant subspaces.
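The Cayley-Hamilton statement can be checked by hand in the 2x2 case, where the characteristic polynomial is t² − trace(A)·t + det(A). A small numerical sketch, with a matrix of my own choosing:

```python
# Cayley-Hamilton check for a 2x2 matrix:
# p(t) = t^2 - trace(A) t + det(A) should annihilate A, i.e. p(A) = 0.
A = [[1.0, 2.0], [3.0, 4.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A2 = matmul(A, A)
I = [[1.0, 0.0], [0.0, 1.0]]
# p(A) = A^2 - trace*A + det*I, computed entry by entry
pA = [[A2[i][j] - trace * A[i][j] + det * I[i][j] for j in range(2)]
      for i in range(2)]
print(pA)  # [[0.0, 0.0], [0.0, 0.0]]
```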
Recalling the notion of invariant subspaces introduced in the previous module, we will study characterizations of diagonalizability in terms of invariant subspaces. One important result we prove here is a characterization of diagonalizability involving projection operators and direct sum decompositions. Of course, we will discuss lots of numerical examples to illustrate the main results.

The tenth module, which contains about four lectures, lectures 35 to 38, covers the primary decomposition theorem and the cyclic decomposition theorem. The primary decomposition theorem decomposes the space according to the prime-power factorization of the minimal polynomial, much as an integer factors into prime powers. We will also discuss the Jordan decomposition theorem, which is a consequence of the primary decomposition theorem, and another result called the cyclic decomposition theorem. These results will be proved in these four lectures.

In the next module, module 11, we will discuss the notion of inner product spaces, in about four lectures. What is an inner product on a vector space? We will look at several examples of inner product spaces, then at the norm on a vector space coming from an inner product, which allows us to generalize the notion of perpendicularity of vectors in the plane or in three-dimensional space. Orthogonality will therefore be discussed, and consequently the Gram-Schmidt process of obtaining an orthonormal set from a linearly independent set. As a consequence of the Gram-Schmidt process we will derive what is called the QR decomposition of a matrix whose columns are linearly independent.
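The Gram-Schmidt process mentioned above is short enough to sketch in full. A minimal version for vectors in R^n, with an input of my own choosing:

```python
import math

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a linearly independent list of vectors
    into an orthonormal list spanning the same subspace."""
    ortho = []
    for v in vectors:
        w = v[:]
        for u in ortho:
            proj = sum(x * y for x, y in zip(w, u))   # inner product <w, u>
            w = [x - proj * y for x, y in zip(w, u)]  # subtract the projection onto u
        norm = math.sqrt(sum(x * x for x in w))
        ortho.append([x / norm for x in w])           # normalize to unit length
    return ortho

q1, q2 = gram_schmidt([[3.0, 4.0], [1.0, 0.0]])
print(q1, q2)  # [0.6, 0.8] and a unit vector orthogonal to it
```

Collecting the projection coefficients and norms produced along the way into an upper triangular R, with the orthonormal vectors as columns of Q, is precisely what yields the QR decomposition.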
We will also show that a finite dimensional inner product space always has an orthonormal basis. These topics will be covered in the four lectures on inner product spaces.

In the next module, module 12, about three lectures, lectures 43 to 45, we will study the notion of best approximation. What is the best approximation from a subspace in an inner product space, and, more importantly, how does this translate into the problem of finding least squares solutions of possibly inconsistent systems of linear equations? We will use the QR decomposition studied in the previous module to obtain such a solution. We will also discuss the orthogonal complement of a subspace and its properties, and, along with it, the notion of an orthogonal projection and how orthogonal projections are related to best approximation, just to complete the circle. These are the topics of the module on best approximation.

In the next module, covered in lectures 46 and 47, we will discuss the adjoint of an operator: the notion of the adjoint, some of its properties, some examples, and then, given an operator on a finite dimensional inner product space, the relationship between the matrix of the operator relative to an orthonormal basis and the matrix of its adjoint relative to the same orthonormal basis. Towards the end of this module we will also discuss inner product space isomorphisms and give a characterization of when an operator on a finite dimensional inner product space is an inner product space isomorphism.
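A least squares solution can also be computed via the normal equations AᵀAx = Aᵀb, a simpler route than the QR approach the module will actually follow. A sketch on an inconsistent system of my own choosing:

```python
# Best approximate (least squares) solution of the inconsistent system
#   x     = 1
#       y = 1
#   x + y = 1
# via the normal equations A^T A x = A^T b.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 1.0, 1.0]

# Form A^T A (2x2) and A^T b (a 2-vector)
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 normal equations by Cramer's rule
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
y = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
print(x, y)  # both approximately 0.6667
```

No exact solution exists, but (2/3, 2/3) minimizes the distance from Ax to b, which is exactly the best approximation of b from the range of A.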
In the last module, the 14th, we will discuss three important classes of operators on inner product spaces: self adjoint, normal and unitary operators. This is the last part of the course. We will first take up unitary operators: examples, the properties of the matrix of a unitary operator relative to an orthonormal basis, and so on, along with the notion of unitary equivalence of operators, which generalizes diagonalizability in some sense. We will then switch to self adjoint operators, look at some examples, both finite dimensional and infinite dimensional, establish some properties of the eigenvalues and eigenvectors of self adjoint operators, and, importantly, prove what is called the spectral theorem for a self adjoint operator. The third topic in this module is normal operators: again we will look at examples, study some properties of their eigenvalues and eigenvectors, and prove the spectral theorem for a normal operator. For both spectral theorems, the self adjoint case and the normal case, we will also present the matrix versions of the results.

These, then, are the 14 modules that cover the topics one would normally discuss in a first course on linear algebra. Let us move on to the actual lectures from the next lecture onwards. Thank you.