So, as I mentioned to you a little earlier, there are two ways of looking at matrices. The first is that a matrix is a rectangular array of scalars. That is a simple way to introduce matrices, which is why I started with that definition. But as I mentioned, the more useful viewpoint is that a matrix represents a linear transformation between two vector spaces. In order to understand that, we need to know what a vector space is, and that is what I am going to define next. Are there any questions so far?

Sir, you said we should perceive matrix multiplication as a linear transform. In low dimensions it is very easy to see what happens when we do a linear transformation. But in higher dimensions, how should we see that it is a linear transform?

Can you repeat your question, please?

Sir, I was saying that we can perceive matrix multiplication as a linear transformation. Yes. So I was asking how we can visualize it. In lower dimensions it is easy to see that something is a linear transform, but in higher dimensions, how can we see it?

Yeah. So it turns out that we cannot visualize more than three dimensions. If it's two or three dimensions, I can draw things and show you what happens, so you can visualize it. But there is no hope of visualizing a linear transform from, say, six-dimensional space to eight-dimensional space, or from 16-dimensional space down to 14-dimensional space. You cannot visualize it. It's a mathematical construction, and you have to take it as such. But that is what it is doing: it is taking a vector from, say, 14-dimensional space and mapping it to, say, 23-dimensional space.

So I wanted to understand what distinguishes a linear transform from a non-linear transform. How can we distinguish between these two transforms?

Okay. This refers to how we define linearity, and I'll come to that in a little bit. For that, we need the concept of vector spaces and how we define a linear transform between vector spaces. So we do have to cover vector spaces before I can formally define what a linear transform is. For now, I'm just saying that there are two ways to view matrices. One is as a rectangular array of scalars. The other is that a matrix represents a linear transform between a pair of vector spaces. And the key point is that any linear transform between two vector spaces can be represented as one and only one matrix. So there is a one-to-one mapping between linear transformations between two vector spaces and the space of matrices. We'll come to that shortly.

Okay, so let's start with vector spaces. In order to define a vector space, we have to start with a field. A field is a set of scalars, and for the purpose of this course, we are only going to focus on the real or complex field: the set of all real numbers or the set of all complex numbers. So, in the back of your mind, even though I write F here, think of it as shorthand notation for either the real or the complex field.
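To make this concrete in dimensions we cannot draw, here is a minimal numerical sketch (assuming Python with numpy; the matrix and vectors are arbitrary examples, not anything from the board) of the property that will later define linearity: a matrix map sends the combination c*x + y to c*(Ax) + (Ay).

```python
import numpy as np

rng = np.random.default_rng(0)

# An 8x6 matrix maps vectors in R^6 to vectors in R^8.
A = rng.standard_normal((8, 6))

x = rng.standard_normal(6)
y = rng.standard_normal(6)
c = 2.5

# Linearity: A(c*x + y) equals c*(A x) + (A y).
lhs = A @ (c * x + y)
rhs = c * (A @ x) + A @ y
print(np.allclose(lhs, rhs))  # True
```

The same check works for any sizes m and n; nothing about the property depends on being able to draw the spaces.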
A field is a set of scalars with two operations defined on it, plus and dot, and it is closed under plus and dot. That is, take any two scalars and add them together: you will get another scalar which belongs to this field F. Take any two scalars and multiply them together (that's the dot symbol): you will get another element that belongs to this field F. Both plus and dot are associative and commutative. There exists an identity element, both for addition and for multiplication. Every element has an additive inverse: given any a belonging to F, there is a -a which also belongs to F. All elements except the additive identity, which is typically denoted by 0, have a multiplicative inverse, and multiplication is distributive over addition. Again, this is a very formal-sounding definition. But like I said, for the purposes of this course, just keep in mind that we are thinking about the real line or the complex plane, with the usual addition and multiplication of real or complex numbers. So there's nothing new here; I put these down mainly for the sake of completeness, so that you know where these things come from.

Now, a vector space. I'm going to use a capital letter such as S or V to denote a vector space. It is defined over a field F and it satisfies two core properties. If I take x and y belonging to the vector space S, then their sum x + y also belongs to S; this sum is defined as element-wise addition. If I take x belonging to S and any c belonging to the field F, then c times x also belongs to S; scalar multiplication is defined as multiplying every entry of x by c. The elements of S are called vectors. The addition and scalar multiplication used here satisfy a set of eight axioms, which I'm not going to list here. But again, for the purposes of this course, just think of element-wise addition, and of multiplying every entry of the vector x by the scalar.

The x + y defined here is actually a simple linear combination of the two vectors x and y. More generally, suppose you are given vectors v1 through vn, each vi belonging to R^m (the m-dimensional real space, that is, the set of vectors with m real-valued entries), and scalars c1 through cn. If I define a vector y = c1 v1 + c2 v2 + ... + cn vn, that is called a linear combination of the vectors v1 through vn. We also often write it by stacking the vectors v1 to vn as the columns of a matrix. This matrix will be of size m by n, because each of these vectors is an m-dimensional vector. And we stack the scalars c1 through cn as a column vector c, which is n by 1. When you take the matrix-vector product of these two, as I defined earlier, that is exactly the same as the sum of ci times vi for i from 1 to n.

The moment we define linear combinations, we can define linear independence. A set of vectors v1 through vn is linearly independent when c1 v1 + c2 v2 + ... + cn vn = 0 if and only if c1 = c2 = ... = cn = 0. It's important to take a minute and digest this definition.
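As a quick sanity check of the identity just stated, here is a minimal sketch (assuming numpy; the sizes and vectors are arbitrary examples) that stacks vectors as columns and compares the matrix-vector product against the explicit sum:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3

# n vectors in R^m, stacked as the columns of an m-by-n matrix V.
vs = [rng.standard_normal(m) for _ in range(n)]
V = np.column_stack(vs)

c = rng.standard_normal(n)

# The matrix-vector product V c is exactly the sum of c_i * v_i.
y_product = V @ c
y_sum = sum(ci * vi for ci, vi in zip(c, vs))
print(np.allclose(y_product, y_sum))  # True
```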
Again, this is something you would have seen in your undergraduate course. But one important thing I want to point out here is the "if and only if" condition. The "if" part is trivial: of course, if c1 through cn are all equal to 0, then the sum of ci vi is always going to be equal to 0. Zero times a vector is a zero vector, and when you add up zero vectors, you get another zero vector; the 0 on the right-hand side is a zero vector. So the "if" part is trivial, and the crux of this definition lies in the "only if" part. That is, there is no other linear combination of the vectors v1 to vn that you can take and obtain the zero vector.

Graphically, the way to think about it is this. If I have a vector v1 like this and another vector v2 like this, can I take a linear combination (scale this by c1, scale that by c2, add them together) with c1 and c2 not both zero, and end up at the origin, the (0, 0) vector? If I can do that, then these two vectors are linearly dependent. If not, they are linearly independent. It turns out that these two vectors are linearly independent, and this should be obvious to you. If instead I take three vectors like this, it turns out that I can always find a non-trivial linear combination of the three such that I end up at the origin. So three vectors in the two-dimensional plane are always going to be linearly dependent. And we say that a set of vectors is linearly dependent if it is not linearly independent.

Continuing with the theme of linear combinations, the span of a set of vectors v1 through vn is the set of all y's which can be written as linear combinations of v1 to vn. It turns out that the span is a vector space, and this is something very easy to show. The point is basically this: if you take two vectors y1 and y2 belonging to the span of v1 to vn, each can be written as a linear combination of the vi, and therefore their sum y1 + y2 can also be written as a linear combination of the vi, with different coefficients. So the sum also lies in the span of v1 to vn. Similarly, if you scale a vector y in the span by some alpha, that is the same as scaling each of its coefficients by the same factor alpha, so alpha times y can also be written as a linear combination of these vectors and belongs to the span. The span therefore satisfies the two properties we said a vector space should satisfy, and so the span of a set of vectors is actually a vector space.

A related object is the range space of a matrix A, which is the set of all y's which can be written as linear combinations of the columns of A; that is, y can be written as A times c for some c in R^n. This is also a vector space. Essentially, the span of v1 to vn is the same as the range space of the matrix whose columns are v1 to vn, and the range space of a matrix is the same as the span of its columns.

A subspace of a vector space is basically a subset of a vector space. You take a vector space, throw out some of the vectors, and retain the others. But it should satisfy the property that this subset of vectors is itself a vector space over the same field. When it does, we call it a subspace. For example, if I take R^2, then the set of all vectors y belonging to R^2 such that y2 = 0 (that is, the second entry of y is equal to 0) is a subspace.
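To see the claim that three vectors in the plane are always dependent, here is a minimal sketch (assuming numpy; the three vectors are an arbitrary example) that recovers a non-trivial combination reaching the origin from the null space of the stacked matrix:

```python
import numpy as np

# Three vectors in R^2, stacked as the columns of a 2x3 matrix.
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

# A 2x3 matrix always has a non-trivial null space, so some c != 0
# gives V c = 0. The last right-singular vector of the SVD spans it here.
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]

print(c)                        # non-trivial coefficients, not all zero
print(np.allclose(V @ c, 0.0))  # True: a dependence among the three columns
```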
Clearly, if I take two vectors whose second entry is 0 and add them together, the second entry cannot suddenly become non-zero, so the sum also belongs to this set. And if I take a y which belongs to the set and scale it by some alpha, then the first entry gets scaled by alpha, but the second entry, which is 0, remains equal to 0. So the scaled vector also lies in this subspace.

We say that a set of vectors v1 to vn spans a vector space S if the span of v1 to vn is equal to the vector space S.

Sir, can you please once again elaborate on the subspace part?

A subspace is nothing but a subset of the vectors in a vector space, with the additional property that it is itself a vector space. And a vector space is one which satisfies the two properties I showed you earlier: the sum of two vectors in the vector space should be in the vector space, and when you scale a vector by a scalar, you should continue to live in that vector space. You can never leave. In the physical class I often joke that vector spaces are like Hotel California: you can never leave. Whatever you do, however these vectors interact with each other, you will always stay in that vector space.

Okay, so if v1 to vn span a vector space S, then the span of v1 to vn is equal to the set S. In other words, any vector in this vector space can be written as a linear combination of v1 to vn, and any linear combination of v1 to vn lies in this space. This is another small point I want to make: the span of v1 to vn is a set of vectors, and S is also a set of vectors. Saying that two sets are equal is equivalent to saying that any vector in S belongs to the span of v1 to vn, and likewise any vector in the span of v1 to vn lies in the set S. When this happens, we call v1 to vn a spanning set. Of course, this equality means that every vector in S can be expressed as a linear combination of v1 to vn. Okay, so I think we have reached here, and the next concept I want to discuss is that of a basis, which we will do on Wednesday. Any more questions before we close the class?

Sir, can you please explain the range space once again?

The range space of a matrix A is the same as the span of the columns of A. Mathematically, it's defined like this; it's actually the same as this definition here. To say that y is in R^m and y can be written as a linear combination of the vi is the same as saying that y equals A times c, where c is a vector in R^n with entries c1 to cn.

Sir, is it equivalent to the column space of the matrix? Yes, that's a good point. This is also called the column space.

Sir, could you explain the span? So, as defined here, the span is the set of all linear combinations of the vectors v1 to vn. Again, I'm trying to avoid going to one and two dimensions, because like I said, linear algebra is not limited to one or two dimensions, but that's all I can show you here on a whiteboard. If I take only one vector in two-dimensional space and ask what its span is, it's basically the line through the origin along that vector. That's all the vectors you can represent as a linear combination of this one vector here.
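Coming back to the range space question for a second, here is a minimal sketch (assuming numpy; A and the coefficients are arbitrary examples) of the definition: y is in the range space of A exactly when y = A c for some c, that is, when y is a linear combination of the columns of A.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 5x3 matrix: its range space is the span of its three columns in R^5.
A = rng.standard_normal((5, 3))

# Build a y that is in the range space by construction.
c = np.array([1.0, -2.0, 0.5])
y = A @ c

# y equals 1*col0 - 2*col1 + 0.5*col2, a linear combination of the columns.
combo = 1.0 * A[:, 0] - 2.0 * A[:, 1] + 0.5 * A[:, 2]
print(np.allclose(y, combo))  # True
```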
But if I take two vectors in two-dimensional space, then their span is actually the whole plane. As long as these two vectors are linearly independent, by taking different linear combinations of them, I can span the entire two-dimensional space. If I take two vectors in three-dimensional space, say one vector here and another vector here, then these two span a plane in the three-dimensional space, but they will not span the entire three-dimensional space. The span is the set of all points that are reachable by taking a linear combination, that is, the sum of scaled versions of the vectors.

Sir, could you repeat the plane part? Your voice was not audible for a moment.

Yeah. So, if anybody is able to see the three-dimensional axes that I've drawn (the x, y, z axes), please confirm. Yes, x, y, z. Yeah, so if I take two vectors, one vector along the x axis and another vector along the y axis, you can see that if I take all possible linear combinations of these two vectors, I will span the two-dimensional plane defined by the x and y axes. There will be no component in the z direction, so they will span a two-dimensional subspace of the three-dimensional space. And that is true even if I take any two non-collinear vectors in the x-y plane: together they will span the entire x-y plane, but they will never have any component along the z direction. Every vector I take, built from scaled versions of these two, will have a zero as its z component. Specifically, if I take v1 = (v11, v12, 0) and v2 = (v21, v22, 0), which need not be the axis vectors (1, 0, 0) and (0, 1, 0), then any linear combination c1 v1 + c2 v2 will always be of the form (c1 v11 + c2 v21, c1 v12 + c2 v22, 0). The third component will always be 0, so the combination will always lie in the x-y plane.

Sir, we can't see what you're writing. It will come in a minute. Okay, so v1 and v2 span the x-y plane.

So Vishnu has his hand raised. Yeah, Vishnu, go ahead.

Sir, can you hear me? Yes, yes, please go ahead. Earlier you said that vectors can be represented as columns of a matrix, right? Columns of a matrix are vectors. Yeah, so is it compulsory to use columns, or can we use rows also? In the lecture it is mostly mentioned as columns.

Yeah, so that's where they say there are three types of people in this world: the kind who think of vectors as column vectors, and the kind who think of vectors as row vectors. Okay, that was a bit of a joke, but essentially a vector, as stated, could be a column vector or a row vector. The point is that a vector is one-dimensional: it has one dimension along which you have, say, n elements, a string of entries written along that dimension. You can represent it either as a row or as a column, and in fact we'll use both depending on convenience. But it is true that it's most common to think of vectors as column vectors. Okay, thank you. Yeah, and in fact, if I go back to my definition of matrix multiplication, I used both: I used a row vector and I used a column vector.

Sir, what are the third kind of people? That's the joke. Sir, one more thing. Yes.
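To double-check the board computation from a moment ago, here is a minimal sketch (assuming numpy; the entries standing in for v11, v12, v21, v22 are arbitrary): any combination of two vectors with zero third component keeps a zero third component.

```python
import numpy as np

# Two vectors in R^3 whose third component is zero: they live in the xy-plane.
v1 = np.array([2.0, 1.0, 0.0])
v2 = np.array([-1.0, 3.0, 0.0])

rng = np.random.default_rng(2)
for _ in range(5):
    c1, c2 = rng.standard_normal(2)
    y = c1 * v1 + c2 * v2
    print(y[2])  # always 0.0: the span stays inside the xy-plane
```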
Sir, you said a matrix represents a linear transformation between two vector spaces, right? Yes. And we have matrices that are m by n, with two dimensions to their size. But what about objects like m by n by p, with three indices? Tensors?

So I'm not discussing tensors just yet. A matrix, by definition in this course, is going to have two parts to its size, m by n. That's it. For most of the course, we will not have m by n by p arrays. Yeah, thank you. I'll need another course to teach tensor mathematics. Okay. So if there are no further questions, we'll stop here. Thanks for attending.

Sir, Ashu has a question. Yeah, go ahead, please.

Hello, sir. In the last example, where you explained the two vectors along x and y: here we took the two vectors along the x and y axes, but if we take one vector in the x-y plane and one vector in some other plane, say the x-z or y-z plane, then would they be able to span the three-dimensional space?

What do you think? Sir, I guess we must have another vector to span three-dimensional space. Precisely. That is one of the things we will show: you cannot span three-dimensional space using just two vectors, no matter how you choose them. If I take two three-dimensional vectors, I can always find a vector in three-dimensional space which cannot be reached as a linear combination of those two. It makes intuitive sense, right? It's hard for me to draw it here, but if I take some vector like this and another vector pointing in some other direction, these two together define a plane, and it is only vectors that sit in this plane that I can reach by taking linear combinations. There is always going to be a perpendicular direction, at 90 degrees to both of these vectors, and anything that sits in that perpendicular direction cannot be reached by taking linear combinations of these two vectors. Yes, sir. Okay, sir, thank you. Welcome. Okay, so I guess we'll stop here for today.
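A minimal sketch of this last argument (assuming numpy; v1 and v2 are arbitrary examples): the cross product produces exactly that perpendicular direction, and a least-squares fit confirms it is not reachable from the span of the two vectors.

```python
import numpy as np

# Two (linearly independent) vectors in R^3 define a plane, not all of R^3.
v1 = np.array([1.0, 2.0, 0.5])
v2 = np.array([0.0, 1.0, 1.0])
V = np.column_stack([v1, v2])       # 3x2 matrix

# The cross product is perpendicular to both, hence outside their span.
t = np.cross(v1, v2)

# The best least-squares approximation of t from the plane misses it entirely.
c, residual, _, _ = np.linalg.lstsq(V, t, rcond=None)
print(np.allclose(V @ c, t))        # False: t is not a combination of v1, v2
print(residual)                     # positive residual: t lies off the plane
```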