So, let us take a closer look at what is actually going on when we say "a linear combination of these equations". Let me write that down elaborately:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1

and, because I am lazy, I will skip straight to the last one instead of writing the second:

a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

Now think of this as a matrix equation; I will just write it as A x = b, it is the same thing. What is actually going on when you cook up an (m+1)-th equation? It is tantamount to the following action: you pre-multiply both sides of the equality by a row vector of coefficients, and the result is indeed the (m+1)-th equation. What does that tell us? In the parlance of matrices, when we pre-multiply we are combining rows. In fact, if you are fond of mnemonics, the word "pre" has an r in it, like "rows". So pre-multiplication, that is, multiplication from the left, entails taking combinations of the rows. In just the same way, the combination of columns

beta_1 [1; 5; 9] + beta_2 [2; 7; 14] + beta_3 [3; 8; 27]

is the same as the post-multiplication

[1 2 3; 5 7 8; 9 14 27] [beta_1; beta_2; beta_3].

So post-multiplication is essentially combining columns, and pre-multiplication is essentially combining rows; in terms of matrices, that is the big picture.
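The two viewpoints above can be checked numerically. This is a small sketch with the 3x3 matrix from the board and made-up coefficient vectors c and beta: pre-multiplying by a row vector reproduces a combination of rows, and post-multiplying by a column vector reproduces a combination of columns.

```python
# Pre-multiplication combines ROWS; post-multiplication combines COLUMNS.
A = [[1, 2, 3],
     [5, 7, 8],
     [9, 14, 27]]

# Pre-multiplication: c A, where c is a row of coefficients.
c = [2, -1, 4]  # take 2*(row 1) - 1*(row 2) + 4*(row 3)
pre = [sum(c[i] * A[i][j] for i in range(3)) for j in range(3)]

# The same thing written explicitly as a combination of the rows of A:
scaled_rows = [[c[i] * A[i][j] for j in range(3)] for i in range(3)]
combo_rows = [scaled_rows[0][j] + scaled_rows[1][j] + scaled_rows[2][j]
              for j in range(3)]
assert pre == combo_rows

# Post-multiplication: A beta, where beta is a column of coefficients.
beta = [3, 0, -2]
post = [sum(A[i][j] * beta[j] for j in range(3)) for i in range(3)]

# The same thing written as beta_1*(col 1) + beta_2*(col 2) + beta_3*(col 3):
combo_cols = [sum(beta[j] * A[i][j] for j in range(3)) for i in range(3)]
assert post == combo_cols

print(pre, post)
```

The coefficient vectors here are arbitrary; any choice of c and beta illustrates the same identity.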
So, that is the first observation we make: when we cook up the (m+1)-th equation from the m equations, we are essentially hitting both sides of the equality with a row vector. It then stands to reason that if we want to cook up not just one but a whole bunch of new equations, this row vector should grow: for every additional equation we add another row below it, whose entries tell us exactly in what linear combination we are taking the original equations. Please note this observation carefully; it seems very obvious and trivial, but the big results later on depend heavily on our understanding and visualization of it. So now I am going to straight away write the following. Suppose I have been given A x = b (system 1), where A is m x n, b is an m-tuple, and the unknown x is an n-tuple, and from this I construct

A-bar x = b-bar, where A-bar = M A and b-bar = M b,

with M of size m-bar x m. (I use the bar notation since I have used it earlier.) I have cooked up m-bar new equations from the original m equations: for every new equation I have added one row, which sits as a row of this matrix M, so M has m-bar rows; and because each new equation combines the m original rows, M has m columns. Please ask if this is not clear.
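The construction above can be sketched concretely. The numbers below are made up: a 2-equation system A x = b, and an M with m-bar = 3 rows, each row of M saying how the two original equations are combined into one new equation. Any solution of the original system then satisfies all the new equations.

```python
# Cooking up m_bar new equations from A x = b via A_bar = M A, b_bar = M b.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1],     # x1 + x2 = 3
     [1, -1]]    # x1 - x2 = 1
b = [[3], [1]]
x = [[2], [1]]   # the solution: x1 = 2, x2 = 1

M = [[1, 1],     # each row of M is one linear combination of the equations
     [2, -3],
     [0, 5]]     # m_bar = 3 new equations from m = 2 old ones

A_bar = matmul(M, A)
b_bar = matmul(M, b)

# The old solution satisfies every new equation:
assert matmul(A_bar, x) == b_bar
```

Note the direction: this only says solutions of the old system satisfy the new one; the converse needs more, which is exactly what the lecture turns to next.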
So, at least based on this, what can we say? Any solution that satisfies the original system must also satisfy the new one, but we cannot say the converse in general. Now suppose, as a first step, we let m-bar = m and choose M to be invertible. So we have cooked up an M which is a square matrix: from the original system of m equations we have produced a new system of again m equations, so the number of equations has not changed; additionally, I have been very careful to ensure that this M has an inverse. As an aside, since we are talking about inverses: are you all aware of how to show that if a square matrix is invertible, its left and right inverses must be the same? It is not very difficult, even though left and right multiplication are of course not the same thing in general. Suppose M_l is a left inverse of M, so M_l M = I, and M_r is a right inverse, so M M_r = I. How do we show M_l = M_r? You hit the second equation on the left with M_l: M_l (M M_r) = M_l I = M_l. Now we use the property we mentioned earlier, associativity, to regroup the left side as (M_l M) M_r. But M_l M is the identity by the first equation, so M_r = M_l. Sometimes these claims might throw you off, but it is really not that hard; this sort of proof is pretty standard.
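The chain of equalities just described can be written in one line; assuming M_l M = I and M M_r = I:

```latex
M_\ell \;=\; M_\ell I \;=\; M_\ell (M M_r) \;\overset{\text{assoc.}}{=}\; (M_\ell M)\, M_r \;=\; I\, M_r \;=\; M_r .
```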
So, for a square matrix: if it is left invertible, it is right invertible, with the same inverse. The reason I thought this is important is that later on we shall deal with matrices which can have a right inverse but not a left inverse, or a left inverse but not a right inverse; when we deal with rectangular matrices we often encounter such situations. So it is important to know that in the case of square matrices, at least, "invertible" is unambiguous: whether you are seeking the left inverse or the right inverse, it matters not. So, let us say we have cooked up a new system of equations by carefully choosing an M that is square and invertible, like so. What can we then say about system 1 and system 2? When I have made this special choice, the converse is also true. Why is that so? Exactly: because we have chosen M to be invertible, I might as well write, from the defining equations,

A = M^{-1} A-bar and b = M^{-1} b-bar,

where M and M^{-1} are both square invertible matrices. Note that A and b were present a priori; A-bar and b-bar are contingent upon which M we have chosen, they are constructions. That is the reason we are focusing on what sort of M we should choose. The point is: if we choose M to be square and invertible, then we land exactly in the situation where the rows of A-bar are combinations of the rows of A, and vice versa, and therefore we have actually constructed an equivalent system.
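This two-way passage can be sketched with made-up 2x2 numbers: a square invertible M takes (A, b) to (A-bar, b-bar), and M^{-1} takes them back, which is why the two systems have exactly the same solutions.

```python
# Equivalent systems: A_bar = M A, b_bar = M b, and conversely
# A = M^{-1} A_bar, b = M^{-1} b_bar when M is square and invertible.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1], [1, -1]]
b = [[3], [1]]

M = [[2, 1], [1, 1]]          # square, det = 2*1 - 1*1 = 1, hence invertible
M_inv = [[1, -1], [-1, 2]]    # 2x2 inverse: swap diagonal, negate off-diagonal, divide by det

A_bar = matmul(M, A)
b_bar = matmul(M, b)

# Going back recovers the original system exactly:
assert matmul(M_inv, A_bar) == A
assert matmul(M_inv, b_bar) == b
```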
So, the recipe, it turns out, for cooking up equivalent systems is to choose M carefully: the entire focus should now be turned towards choosing a square, invertible, that is, non-singular, M. If we can cleverly choose some square non-singular M that gives us an A-bar which looks nicer than the original A, we have succeeded in our goal. Towards this end we shall now define certain special matrices and certain special operations. We have now seen that the whole structure is predicated on how the matrices look and what we are doing to these matrices, so it is worthwhile to invest our time in understanding what can be done to them. These special operations on the rows of a matrix are called elementary row operations, and why they are special is what we will investigate. The first elementary row operation is scaling of a chosen row: that means if you have the equation 2x + 3y = 6 and you multiply it by 5, you get 10x + 15y = 30; just scaling of any one of the equations. When I say scaling, remember I am not talking about the trivial scaling by 0: scaling a row by 0 means eliminating that equation, which is not something you want to do, so obviously it is a non-zero scaling; that goes without saying. The second elementary row operation is adding a scaled row to another: you have one equation and a second equation, and to the second equation you add, say, 7 times the first. And the third is exchanging two rows.
So, the moment we have these three kinds of elementary row operations defined, we would like to know what sort of matrix M, those objects we were hitting the system of equations with from the left, corresponds to each of these operations, and what is so special about these matrices that earns them the term "elementary". Let us look at the first kind, shall we? You have a matrix with rows

a_11 ... a_1n
...
a_i1 ... a_in
...
a_m1 ... a_mn

and you want to get to the matrix whose i-th row is alpha a_i1, ..., alpha a_in, with all other rows unchanged. What sort of matrix needs to act on the original to produce this? If you had just hit it with the identity matrix, you would have gotten back the same matrix. But now choose a matrix that is very close to the identity, but not exactly: there are ones everywhere on the diagonal except at the (i,i)-th entry, which is alpha. Let us call it E, since these are elementary row operations. If you hit the original matrix with this elementary matrix, do you see that you get exactly the scaled matrix? So this elementary row operation of the first kind is encapsulated by elementary matrices such as these. What can you immediately say about them, in view of what we have just discussed? Such a matrix is always invertible, because of course this alpha is non-zero: plugging the reciprocal of alpha in at the (i,i)-th entry gives exactly the inverse. So it is quite readily apparent that this first operation is an invertible operation, captured by matrices that are always invertible by dint of their very structure. What about the second kind of operation?
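A small sketch of the first kind, with made-up numbers: the elementary matrix is the identity with alpha at the (i,i)-th entry, and its inverse is the same matrix with 1/alpha there.

```python
# Elementary matrix of the first kind: scale row i by alpha.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def scaling_matrix(m, i, alpha):
    E = [[1.0 if r == c else 0.0 for c in range(m)] for r in range(m)]
    E[i][i] = alpha           # the only departure from the identity
    return E

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
E = scaling_matrix(3, 1, 5.0)             # scale row 1 (0-based) by 5
assert matmul(E, A) == [[1.0, 2.0], [15.0, 20.0], [5.0, 6.0]]

E_inv = scaling_matrix(3, 1, 1 / 5.0)     # inverse: reciprocal in the same spot
I3 = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
assert matmul(E_inv, E) == I3
```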
So, now suppose the matrix has rows

a_11 ... a_1n
...
a_k1 ... a_kn
...
a_l1 ... a_ln
...
a_m1 ... a_mn

and we want the l-th row to become the original l-th row plus beta times the k-th row, that is, a_l1 + beta a_k1, ..., a_ln + beta a_kn. What sort of matrix do we need to hit the original matrix with so that exactly this results? Yes: again it is very close to the identity, except at one off-diagonal position, precisely the one your classmate has mentioned. All the diagonal entries are still ones, but the entry in the l-th row and k-th column is beta. Be careful here; it is easy to reverse the indices: it is the l-th row and the k-th column. Does everybody see this? If it is not clear, please ask; I am ready to try and explain it in a different fashion, and it is very important that you understand this. Again, can we comment on the invertibility of this matrix, and what would its inverse look like? Just the negative of beta: the inverse has the same structure, with all the ones sitting on the diagonal, but with -beta in place of beta. The way to see this is that you are undoing the action of the former through the latter: initially you added something to the l-th row, and now you want to undo that action.
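A sketch of the second kind, again with made-up numbers: the identity plus beta at row l, column k adds beta times row k to row l, and flipping the sign of beta undoes it.

```python
# Elementary matrix of the second kind: row l += beta * row k.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add_row_matrix(m, l, k, beta):
    E = [[1 if r == c else 0 for c in range(m)] for r in range(m)]
    E[l][k] = beta            # row l, column k: note the order of the indices
    return E

A = [[1, 2], [3, 4], [5, 6]]
E = add_row_matrix(3, 2, 0, 7)        # row 2 += 7 * row 0 (0-based indices)
assert matmul(E, A) == [[1, 2], [3, 4], [12, 20]]

E_inv = add_row_matrix(3, 2, 0, -7)   # the inverse: same structure, -beta
I3 = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
assert matmul(E_inv, E) == I3
```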
So, you hit it with its reverse operation, which is captured by just a -beta over there. Again it is a moot point whether this is invertible, and this is good; we love that, because we want precisely to get our hands on a set of row operations that are invertible and that will hopefully also get us to a nice-looking form. So this gives us hope that the first two operations check out; what about the third? So, again I will take the liberty of using the blackboard, but you cannot. Let us look at the third kind of operation: say there is an i-th row a_i1, ..., a_in and a j-th row a_j1, ..., a_jn, and I am going to flip their positions, so the j-th row goes on top and the i-th row goes below. Again, the question is what sort of matrix captures this particular elementary row operation, the swapping. You will see that the identity-like structure stays very much intact; only the i-th and j-th rows change. In fact, this operation of exchanging is called a permutation, and the matrix itself, which I am going to show you now, is also called a permutation matrix, because you are basically flipping two rows, or two columns, whichever way you view it, of the identity matrix itself. Here is what it looks like: earlier you had ones at the (i,i)-th and (j,j)-th entries; now you have ones at the (i,j)-th and (j,i)-th entries instead, and those two diagonal entries become 0, while all the other ones stay sitting on the diagonal. I hope you get the idea. And is this invertible too? What is its inverse? Itself.
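A sketch of the third kind, with made-up numbers: swapping two rows of the identity gives a permutation matrix; pre-multiplying by it swaps those rows of A, and since swapping twice restores the original order, the matrix is its own inverse.

```python
# Elementary matrix of the third kind: swap rows i and j.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def swap_matrix(m, i, j):
    P = [[1 if r == c else 0 for c in range(m)] for r in range(m)]
    P[i], P[j] = P[j], P[i]   # exchange rows i and j of the identity
    return P

A = [[1, 2], [3, 4], [5, 6]]
P = swap_matrix(3, 0, 2)
assert matmul(P, A) == [[5, 6], [3, 4], [1, 2]]

I3 = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
assert matmul(P, P) == I3     # self-inverse: swapping twice is the identity
```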
So, this is a type of matrix that is its own inverse; we call such matrices involutory, or self-inverse: P^2 = I. Again, you can view it as an operation: you flip once, you flip twice, and you are back where you started. Act an even number of times and you get back the same ordering; act an odd number of times and you get the exchange. Whichever way you view it, whether as a matrix inversion problem or as operations happening on the rows of another matrix, the inverse is precisely the same matrix. A matrix, as we shall see when we have more sophisticated tools at our disposal, is basically nothing but a linear operator, and we will talk about that in greater detail; it is the action of the matrix on another matrix that describes how the numbers are rearranged. So, the common feature of all three kinds is that they give you matrix representations that are all invertible. And one more observation, something you are familiar with by now: all of these are pre-multiplications. That is why they are row operations; everywhere we are doing row operations, it is a pre-multiplication, because rows correspond to different equations. We started from the fact that we are going to combine these equations and cook up new equations, and every equation corresponds to a different row.