Okay, so last time we discussed three concepts: basis, dimension, and linear transforms. We also started discussing the fundamental subspaces associated with a linear transform, and we'll continue that discussion today, so let me just recap. We started with the range space, the first vector space associated with the linear transform; it's also known as the column space. It is defined as R(A), the set of vectors y in R^m, where A is an m by n matrix representing the linear transform, such that y can be written as Ax for some x in R^n. It is the span of the columns of A, and it is a subspace of R^m. We said that the dimension of the range space of A is less than or equal to the minimum of m and n: it's at most m because it's a subspace of R^m, and it's at most n because A has only n columns, and you cannot span a dimension greater than n when you're considering the span of n vectors. The second space is the null space of A. N(A) is the set of all vectors in R^n which map to zero under A. This zero with an underline beneath it is the zero vector, but going forward I won't always draw the underline; just know that if the left-hand side is a vector, then when I say the right-hand side is zero, I mean the zero vector. So the null space is the set of all x such that Ax = 0. If the columns of A are linearly independent, we know that the only linear combination of the columns of A that gives the zero vector is the all-zero combination, so in that case the null space of A contains only one vector, the zero vector. A very fundamental result in linear algebra is that this number n, the number of columns of A, equals the rank of A plus the dimension of the null space of A.
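As a quick numerical illustration of this result (added here, not part of the lecture; the 4 by 3 matrix below is a made-up example), we can check it in NumPy by counting the non-zero singular values:

```python
import numpy as np

# Made-up 4x3 example: the third column is the sum of the first two,
# so the columns are linearly dependent and the rank is 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 3., 5.],
              [1., 1., 2.]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)

# The nullity is the number of columns minus the number of
# (numerically) non-zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-10))

print(rank, nullity, rank + nullity == n)  # 2 1 True
```

Here rank + nullity comes out to 2 + 1 = 3 = n, as the theorem predicts.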
So these two dimensions together always add up to n. If the rank of A equals n, which is true when the columns of A are linearly independent, then the dimension of the null space of A will be zero. But if the rank of A is less than n, then the dimension of the null space of A will be greater than zero. Yes? "Sir, if the null space contains only the zero vector, will its dimension be one or zero?" Zero. "But sir, how can a vector exist without a dimension?" That's what I was trying to say. The zero vector is a point, and a point has no dimension. The way to think about it is to take the two-dimensional plane. If I consider a line through the origin, the points on that line form a one-dimensional subspace of this two-dimensional plane. But if I take just the origin, a single point, it has zero dimension, yet it contains one point. And if I take the entire two-dimensional plane, it has two dimensions. Does that answer your question? "Yes, sir. Thank you, sir." The null space is also known as the kernel of A. The dimension of the null space of A is also called the nullity of A, and this result is called the rank-nullity theorem. Okay, before I define the other two subspaces associated with a linear transform, I need to define the notion of orthogonal complement subspaces. "Sir, what is the kernel of A? Can you please repeat?" It's the same as the null space; aka means also known as. "Sir, shouldn't the dimension of the range space of A be equal to the minimum of m and n, not less than or equal to it?" No. Let me take a 3 by 3 matrix. What is the dimension of the range space of A here? "Two." Come on. "Oh, sorry, one." Yes, one, whereas min(m, n) equals three. So it is easy to find matrices for which the dimension of the range space of A is less than min(m, n); such a matrix is called a rank-deficient matrix, if you have seen the term before.
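The particular 3 by 3 matrix written on the board is not visible in the transcript; the all-ones matrix is one rank-deficient choice consistent with the answer given, and the claim is easy to check numerically:

```python
import numpy as np

# All-ones 3x3 matrix: every column is (1, 1, 1), so the range space is
# span{(1, 1, 1)} and has dimension 1, strictly below min(m, n) = 3.
A = np.ones((3, 3))
print(np.linalg.matrix_rank(A))  # 1
```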
So that's when the rank of the matrix is smaller than the smaller of the two dimensions of the matrix. Okay. So, the orthogonal complement subspace. Given a set of vectors... yes, go ahead, please. "Sir, how do you say that the null space sits in R^n?" The null space of a matrix A of size m by n is a collection of vectors sitting in R^n, because these x's have to multiply A from the right, so each of them has dimension n. Okay? "Okay, I got it." So, take a collection of vectors S = {a1, ..., an}, all of them in R^m, and suppose that n is less than or equal to m; that is, you have fewer vectors than the dimension of each of those vectors. Then we define S perp to be the set of all vectors y, also in R^m, such that y transpose x = 0 for every x belonging to the set S. It is the set of vectors that are orthogonal to every vector in the set S. Clearly this is a subspace; this is another small exercise: if you take two different y's which satisfy this condition, then the sum of those two y's will also satisfy it, and of course alpha times y also satisfies it, so it is a subspace. And we can show that the dimension of S perp is at least m minus n, with equality if and only if the vectors a1 to an are linearly independent. So, for example, if I take S to be the set of vectors (1, 0, 0) and (2, 3, 0), then S perp would be the span of (0, 0, 1); any vector proportional to (0, 0, 1) is orthogonal to both these vectors. In this context, when y transpose x = 0 is satisfied, we call the vectors x and y orthogonal.
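The orthogonal complement in this example can be computed numerically (a sketch added here, assuming NumPy, which is not used in the lecture): stack the vectors of S as rows of a matrix; then S perp is the null space of that matrix, spanned by the right singular vectors beyond its rank.

```python
import numpy as np

# Stack the two example vectors as the rows of a matrix; y is in S-perp
# exactly when S @ y = 0, i.e. S-perp is the null space of this matrix.
S = np.array([[1., 0., 0.],
              [2., 3., 0.]])

# Right singular vectors for (near-)zero singular values span the null space.
_, s, Vt = np.linalg.svd(S)
rank = int(np.sum(s > 1e-10))
S_perp_basis = Vt[rank:]  # each row is a basis vector of S-perp

print(S_perp_basis)  # one row, proportional to (0, 0, 1)
```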
Orthogonal also means perpendicular: x and y are orthogonal if x transpose y = 0. This is for real vectors; for complex vectors, we normally use x Hermitian y = 0. This is what is known as the usual inner product. "Sir?" Yes, please. "Will S and S perp together span the three-dimensional space in this example?" That's a good point: S and S perp together do span the three-dimensional space. Hold that thought; we'll come back to it in a few minutes. What you've just said is, in fact, the basis for the rank-nullity theorem here: the range of A is the span of the columns of A, the null space of A is the set of vectors that map to zero, and together their dimensions add up to n. So this is related, and I'll come back to that point in a few minutes. Another fact is that any set of non-zero, mutually orthogonal vectors is linearly independent. Mutually orthogonal means that if I take any pair of these vectors and compute their inner product, I always get zero. "Why do we require n to be less than or equal to m in the first line?" It's not required; I just said that for convenience, and it is not needed to define S perp. The reason I said it is that it's easier to imagine an orthogonal complement subspace when each of the vectors has higher dimension than the number of vectors you're beginning with. If you have n vectors and n is less than m, these vectors together cannot span R^m. If n is greater than or equal to m, then it's possible that the vectors a1 to an already span R^m; then, if you look for all the vectors orthogonal to these vectors spanning R^m, you'll be left with only one point, the zero vector, and S perp will be the trivial subspace containing only the zero vector. So it's easier to imagine S perp if you start out by assuming n is less than or equal to m.
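A small NumPy check of that fact (an illustration added here, with three made-up mutually orthogonal non-zero vectors in R^3):

```python
import numpy as np

# Three mutually orthogonal, non-zero (not necessarily unit) vectors.
v1 = np.array([1., 1., 0.])
v2 = np.array([1., -1., 0.])
v3 = np.array([0., 0., 2.])
V = np.column_stack([v1, v2, v3])

# Every pairwise inner product is zero ...
print(v1 @ v2, v1 @ v3, v2 @ v3)  # 0.0 0.0 0.0
# ... and the three vectors are linearly independent (full rank).
print(np.linalg.matrix_rank(V))  # 3
```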
Obviously, in this example I can add to S any number of further vectors from the plane spanned by these two, and S perp would still be the span of (0, 0, 1). Okay. So now I'm in a position to define the other two subspaces. "Sir, is a collection of vectors a subspace? Like, is S a subspace?" S here is just a collection of vectors; it is not a subspace. See, I told you earlier that a collection of vectors is a subspace only if it's Hotel California: you can never leave. Add any two vectors in that collection and you get another vector which is also in that collection; scale any vector in that collection and you get another vector that's also in the collection. In fact, you can see that over the field of real numbers you cannot have a finite collection of vectors that forms a subspace, other than the trivial subspace containing only the zero vector. "Okay, sir." Whereas when you talk about things like Galois fields, you can have a finite collection of vectors that forms a subspace. "Sir, basically a subspace is whatever linear combinations you can get with some vectors, right?" That's right. That was the starting point for us to define the dimension and the basis of a subspace: every subspace has a basis, and a basis is a smallest set of vectors that are linearly independent and span the subspace. "Thank you." Yeah. So the third fundamental subspace is the orthogonal... yes, please. "I have a question. Can I say that if two subspaces are complements of each other, they have to be orthogonal?" By definition, any vector in the orthogonal complement subspace is orthogonal to any vector in S. At this point I have not defined orthogonal subspaces, but the only reason for that is that it's not needed to define the orthogonal complement of a set of vectors. You can also define the orthogonal complement of a subspace.
And then that does have the property that any vector in the first subspace is orthogonal to any vector in the second subspace. Okay. One thing you can probably see is that if I take the basis of a subspace and find the orthogonal complement of that basis, then any vector in the orthogonal complement is orthogonal to the whole subspace spanned by that basis. The reason is that any vector in the subspace can be represented as a linear combination of the basis vectors, and any vector in the orthogonal complement is orthogonal to every one of the basis vectors; therefore it is orthogonal to any vector spanned by that basis. So, in fact, the third subspace that I'm defining here is the orthogonal complement of the column space. It is denoted R(A) perp, and it is also called the null space of A transpose. Formally, N(A transpose) is the set of all x, let me write it more clearly, x belonging to R^m, such that A transpose x = 0. This, of course, sits in R^m. It is the set of vectors that are orthogonal to all the columns of A; and if an x is orthogonal to all the columns of A, it is also orthogonal to any linear combination of the columns of A, and therefore to any vector that lies in the column space of A. The dimension of R(A) perp equals m minus r, where r is the rank of A. This is exactly the point that somebody just made: if you take a subspace like the column space and you take its orthogonal complement, the two together have dimension equal to the space they lie in, which is m. So if the column space has dimension r, then the orthogonal complement of the column space must have dimension m minus r. This subspace is also called the left null space. And the fourth subspace is called the row space; it is the orthogonal complement of the null space of A.
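A numerical sketch of the left null space (added here, with a made-up matrix, assuming NumPy): the left singular vectors of A beyond its rank form a basis of N(A transpose), and its dimension comes out to m minus r.

```python
import numpy as np

# Made-up 3x2 matrix of rank 2, so dim N(A^T) = m - r = 3 - 2 = 1.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

# Columns of U beyond the rank span the left null space.
U, s, Vt = np.linalg.svd(A)
left_null = U[:, r:]

print(left_null.shape[1] == m - r)      # True
print(np.allclose(A.T @ left_null, 0))  # True
```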
So the row space, denoted R(A transpose), is the set of all y in R^n such that y = A transpose x for some x belonging to R^m. And yes, this sits in R^n. And the dimension of the row space is equal to what? What can we say about the dimension of the row space, that is, of the orthogonal complement of the null space? "n minus r." No. "Rank." Exactly, it's the rank of A, the same as the dimension of R(A). So basically the dimension of the column space and the row space of a matrix are equal, regardless of the size of the matrix, the rank of the matrix, or what kind of matrix it is. This particular statement, that the dimension of R(A) equals the dimension of R(A transpose), is also written as: row rank equals column rank. This is true regardless of the size of A, and to me this is maybe the first result in linear algebra that I'm discussing which is not intuitively obvious. I cannot give you an intuitive reason as to why this should be true; there is no simple argument I can make which will convince you that the row space and the column space of a matrix must always have the same dimension. No matter how hard you try, you will never be able to construct a matrix where the row space has a different dimension from the column space. In the next class we'll discuss the row reduced echelon form, which applies a series of elementary row operations, and the form of the matrix that comes out of these operations is such that you can see that the row rank equals the column rank. So there are ways to see it, but just by intuition, looking at the matrix, it is very hard to understand why this statement must be true, at least to me. So the row space is basically the span of the rows of A.
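Row rank equals column rank can at least be checked empirically (an illustration added here; the random shapes below are my choice, not from the lecture):

```python
import numpy as np

# For every matrix, rank(A) == rank(A^T); try a few random shapes.
rng = np.random.default_rng(0)
for m, n in [(3, 5), (7, 2), (4, 4)]:
    A = rng.standard_normal((m, n))
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print("row rank == column rank in every trial")
```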
"Sir, is orthogonality the only way of defining the complement of a subspace, or is there another way that the complement of the null space of A, the row space of A, can be defined?" There are many other definitions of complement, but for defining these fundamental subspaces we need this notion of the usual inner product and orthogonal complements. So this leads me to the fundamental theorem of orthogonality. It says, first, that the null space of a matrix is orthogonal to the row space, which follows directly from the way these subspaces are defined: if x is in the null space then Ax = 0, and if y is in the row space then y = A transpose z for some z, so y transpose x = z transpose A x = 0. By definition, then, these are two orthogonal subspaces of R^n. Second, the column space is orthogonal to the left null space in R^m. And the dimension of the null space plus the dimension of the row space is n, while the dimension of the column space plus the dimension of the left null space is m. These four subspaces and these two orthogonality statements are collected together in what is known as the fundamental theorem of linear algebra. "Sir, what is the left null space?" I just defined it; it's right here. The null space of A transpose, the orthogonal complement of the column space, is known as the left null space. The way to think about it is this: if I look for all vectors x such that x transpose A = 0, where the right-hand side is a row vector of zeros, then taking the transpose, this is the same as saying A transpose x = 0. But x transpose A corresponds to multiplying A from the left by a vector x, and that's why it's called the left null space: it is the set of all vectors which, when you multiply A by them from the left, give the zero vector. Is that fine? "Yes, sir."
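The two orthogonality statements can be verified numerically on a made-up rank-deficient matrix (a sketch added here, assuming NumPy): the right singular vectors split into a row-space basis and a null-space basis, and the left singular vectors split into a column-space basis and a left-null-space basis.

```python
import numpy as np

# Made-up 3x4 matrix whose third row is the sum of the first two, so r = 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])
r = np.linalg.matrix_rank(A)

U, s, Vt = np.linalg.svd(A)
row_space = Vt[:r].T    # columns span R(A^T), a subspace of R^n
null_space = Vt[r:].T   # columns span N(A)
col_space = U[:, :r]    # columns span R(A)
left_null = U[:, r:]    # columns span N(A^T)

# N(A) is orthogonal to R(A^T), and N(A^T) is orthogonal to R(A).
print(np.allclose(row_space.T @ null_space, 0))  # True
print(np.allclose(col_space.T @ left_null, 0))   # True
```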