Hi, I'm Zor. Welcome to Unisor Education. We have been talking about matrices as a convenient way of representing linear transformations, and right now I would like to introduce a few basic operations on matrices. These definitely should be introduced, since a matrix is a new object: we have invented this object, and as usual, after we invent something, we have to know what to do with it. These basic operations are really quite simple, and they don't really capture the essence of linear transformations; they are just abstract, formal, if you wish, operations which can be done with matrices as well as with any sets of numbers. First, I'd like to mention one more thing. Mostly we will be dealing with square matrices of 2x2 size or 3x3 size. These two types of matrices represent linear transformations of the plane, that is, 2-dimensional space, and of 3-dimensional space. Obviously, matrices can be of any dimension. A matrix is a table, right? A table can have a certain number of rows and a certain number of columns, and those can be different, for instance 25 rows by 75 columns. We will probably spend a little less time on these general matrices and more time on square matrices of 2x2 and 3x3 size. However, all the operations we are talking about today are applicable to matrices of any dimension. So let's start. The first operation is addition. Again, it's not really specific to matrices; it applies to any two ordered sets of numbers. How can we think about addition of two sets of numbers? Number one, the number of elements in the two sets must be the same. Then we just add corresponding elements. The elements are supposed to be ordered, like first, second, third, etc., so we add the first element of the first set to the first element of the second set, and that will be the first element of the result.
The 25th element of the first set we add to the 25th element of the second set, and that will be the 25th element of the result. So if we are talking about addition of two matrices, we are talking about matrices of exactly the same size: the same number of rows and the same number of columns. We just add them correspondingly. Say you have a 3 by 2 matrix, 3 rows by 2 columns, plus another matrix which is also 3 rows by 2 columns; it's very important that the sizes are the same. Then the result is computed position by position: first row, first column plus first row, first column gives the first row, first column of the result, and so on for every entry. The element at row i, column j of one matrix is added to the element at row i, column j of the other, giving row i, column j of the result. So that's basically the addition. It's quite obvious that this addition is commutative. Why? Because the matrices have exactly the same size and each element of the result is a sum of corresponding elements, and the correspondence is exactly the same regardless of the order we put the two matrices in: first row, first column goes with first row, first column, and 27th row, 17th column goes with 27th row, 17th column. So if we add them in either order, the result will be exactly the same; that's why it's commutative. It's also associative, for exactly the same reason, which means (A + B) + C is equal to A + (B + C). This associativity follows from the associativity of plain addition of numbers: since the correspondence is the same and the sizes are the same, it follows from the associativity of addition for plain numbers. That's it for addition. It's an abstract operation which, again, I don't claim will play a significant role in our usage of matrices as linear transformations; however, for completeness we should probably introduce it.
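The element-wise rule just described can be sketched in plain Python. The particular entries here are made-up examples, since the matrices on the board aren't visible:

```python
def mat_add(A, B):
    # Addition is only defined for matrices of the same size.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    # The ij-th element of the result is the sum of the ij-th elements.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4], [5, 6]]   # 3 rows by 2 columns (hypothetical entries)
B = [[2, 1], [0, 5], [4, 5]]   # same size, also hypothetical
C = mat_add(A, B)              # [[3, 3], [3, 9], [9, 11]]

# Commutativity and associativity follow from the same laws for numbers:
D = [[1, 0], [0, 1], [1, 1]]
assert mat_add(A, B) == mat_add(B, A)
assert mat_add(mat_add(A, B), D) == mat_add(A, mat_add(B, D))
```

Note that the assertion on sizes mirrors the requirement stated above: addition is simply undefined for matrices of different shapes.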
Now, by the way, there is also an operation of multiplication of matrices, and that operation plays an extremely significant role in linear transformations; that will be the subject of the next lectures. The next, again quite simple, operation is multiplication of a matrix by a scalar, a plain number lambda in this case. It's basically a multiplication of each element of the matrix A by the number lambda. So if you multiply 2 by the matrix 1, 2, 3, 4, you will get 2, 4, 6, 8: you multiply the number by each of those entries. Now, since there is an element-by-element correspondence between the original matrix and the result, the same size, the same number of elements in each row and each column, and each element of the new matrix is the old one times this particular number, it's quite obvious that for two numbers, say lambda and mu, (lambda mu) A is the same as lambda (mu A): multiplying A by mu gives a new matrix, and multiplying that matrix by lambda gives the same result as first multiplying the numbers by themselves and then multiplying the matrix A by the product. That's very easy to see, because multiplication of numbers is associative: the resulting matrix has exactly the same size, and every element of it is just the original element multiplied by these numbers, in whichever order you choose. Now, what's also interesting is that we can introduce the distributive law. But, again, it's only distributive with respect to multiplication by a scalar, where the addition of matrices is, as I was explaining, just the addition of corresponding elements.
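The scalar multiplication and its associativity with number multiplication can be sketched the same way; lambda and mu here are arbitrary made-up numbers:

```python
def scalar_mul(lam, A):
    # Multiply every element of A by the plain number lam.
    return [[lam * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
assert scalar_mul(2, A) == [[2, 4], [6, 8]]

# (lambda * mu) * A gives the same matrix as lambda * (mu * A),
# because multiplication of plain numbers is associative.
lam, mu = 3, 5
assert scalar_mul(lam * mu, A) == scalar_mul(lam, scalar_mul(mu, A))
```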
Obviously, lambda (A + B) is exactly the same as lambda A + lambda B. This distributive law follows from the distributive law of plain numbers, because it holds for each element of one matrix and the corresponding element of the other. So if you take, say, the element at the crossing of the i-th row and j-th column of A, it is supposed to be added to the element of B at the same row and column; the result of this summation is the ij-th element of A plus the ij-th element of B, and multiplying it by lambda gives the ij-th element of the left-hand side, lambda (a_ij + b_ij). Now, on the right-hand side, the ij-th element of lambda A is lambda a_ij, and the ij-th element of lambda B is lambda b_ij, so the ij-th element of the sum is lambda a_ij + lambda b_ij. And now we can factor out lambda, because for real numbers the distributive law is true, and that's why the two sides have exactly the same elements and we get exactly the same result. This is actually a proof, by the way. It's obvious from a certain philosophical viewpoint, but if somebody asks you to prove this particular thing, this is how you can do it. I'm using the notation a_ij for the element which stands on the i-th row and j-th column of the matrix A. So this is the way you can prove it element by element, for each i from 1 up to the maximum row number and for each j from 1 up to the maximum column number. So this is the distributive law relative to addition of matrices. Now, there is a similar distributive law relative to addition of numbers, and the proof is exactly the same; let me just use the notation I have just introduced. So let's take the element... well, first of all, the dimensions of all the matrices involved are the same, right?
Because the dimension of the matrix A doesn't change if you multiply it by a number; it doesn't change if you multiply it by another number; and it doesn't change when you add the results together. And the left-hand side, (lambda + mu) A, is again just a multiplication of A by some number, so the dimensions on the left and on the right are exactly the same. So let's check what element stands at row i and column j of the left matrix. Well, the left side is a multiplication of a number by a matrix, so the ij-th element of the result is that number multiplied by the ij-th element of the matrix A: (lambda + mu) a_ij. This is basically the definition of multiplication by a number. But this is the same as lambda a_ij + mu a_ij, because among numbers we know the distributive law is true. Now, lambda a_ij is again, by definition, the ij-th element of lambda A, and mu a_ij is, by the definition of multiplication of a matrix by a number, the ij-th element of mu A. And if I have two matrices, the definition of addition says that the ij-th element of the result is the sum of these two elements. So one step is by the definition of multiplication by a number, another is also by multiplication by a number, and the last is the definition of addition of two matrices: the ij-th element of the result is the sum of the ij-th elements of the components. That's basically the proof, right? We have proven that every element which stands at row i and column j of the left matrix is exactly equal to the ij-th element of the right matrix; that's why the matrices are exactly the same. Okay, and the last operation on matrices is actually not quite as trivial as these two, and it will also play a more significant role when we use matrices as transformations. It's called transposition of the matrix, and let me explain what it is with an example. Let's say you have a matrix like this.
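Before moving on to transposition, the two distributive laws proved above can be checked numerically; here is a minimal sketch in plain Python, with element-wise helpers and made-up entries:

```python
def mat_add(A, B):
    # Element-wise addition of two same-size matrices.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scalar_mul(lam, A):
    # Multiply every element of A by the number lam.
    return [[lam * x for x in row] for row in A]

A = [[1, 2], [3, 4]]   # hypothetical entries
B = [[5, 0], [1, 2]]
lam, mu = 3, 5

# Distributive over matrix addition: lambda (A + B) == lambda A + lambda B
assert scalar_mul(lam, mat_add(A, B)) == mat_add(scalar_mul(lam, A),
                                                 scalar_mul(lam, B))

# Distributive over number addition: (lambda + mu) A == lambda A + mu A
assert scalar_mul(lam + mu, A) == mat_add(scalar_mul(lam, A),
                                          scalar_mul(mu, A))
```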
Now, transposition is written using a superscript letter T, like a power. In the transposed matrix, I use every row as a column, and obviously every column as a row. So 1, 2 becomes my first column, 3, 4 becomes my second column, and 5, 6 becomes my third column. That's what transposition means, okay? I almost want to say I "revert" the matrix, but it's not a reversion; it's a transposition. Now, for a square matrix, look at the matrix and its transpose: they are symmetrical relative to the main diagonal, right? Whatever stands on the main diagonal of a square matrix doesn't change during the transposition, and the elements in the top right corner exchange places with the elements in the bottom left corner. So it's like a reflection relative to the diagonal. With a non-square matrix you can't really call it a reflection, but the idea is exactly the same. Now, since you understand what I mean by transposition, let me give the exact definition. Consider a matrix A of dimension m by n, that is, m rows and n columns. Since I'm using rows as columns, the dimension of the transposed matrix will be n by m, right? Every row becomes a column and every column becomes a row, so the number of columns becomes the number of rows and the number of rows becomes the number of columns. And indeed, in our example the original is a 3 by 2 matrix, 3 rows by 2 columns, and the transpose is 2 by 3, 2 rows by 3 columns. So that is point number one. Number two: if you take the element ij of the transposed matrix, by definition it is the element ji of the original matrix. Look at the example: the 2 stands at position 1, 2 in the original, first row, second column, and at position 2, 1 in the transpose, second row, first column. The 3 stands at position 2, 1 in the original, and its coordinate in the transpose is 1, 2.
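A minimal sketch of this definition in plain Python, using the same 3 by 2 example:

```python
def transpose(A):
    # Every row of A becomes a column of the result: the ij-th element
    # of the transpose is the ji-th element of A, so an m-by-n matrix
    # becomes an n-by-m matrix.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2], [3, 4], [5, 6]]                    # 3 rows by 2 columns
assert transpose(A) == [[1, 3, 5], [2, 4, 6]]   # 2 rows by 3 columns

# The 2 stands at row 1, column 2 of A and at row 2, column 1 of the
# transpose (1-based positions): the indices exchange places.
assert A[0][1] == transpose(A)[1][0] == 2
```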
So the indices exchange places. That's basically the definition of the operation of transposition. Now, what properties of transposition can we mention? There are a couple. Number one, which is kind of obvious: if you transpose a matrix twice, what happens? You return to the same matrix. First you use columns as rows and rows as columns, and then you do it again; you do the same thing twice and get back exactly what you started with. Another simple property: if you transpose a matrix and then multiply it by a number, it's exactly the same as multiplying the original by this number and then transposing. That's because multiplication by a number doesn't change the structure of the matrix; it just attaches a multiplier to each of its elements, so the multipliers simply move together with the original elements, and that's why it's true. Similarly, addition doesn't change the picture either: first add two matrices and then transpose the result, or transpose each individual matrix and then add them together; obviously, you will get the same thing. As far as the proofs go, I don't want to go into the details; they are quite simple. So these are the properties of the operation of transposition. This operation will actually play some role later on. The previous two operations, multiplication by a number and addition of two matrices, have a rather narrow application in other topics related to matrices, but it is probably necessary, from a formal, abstract standpoint, to introduce them. Well, that's it for today. And again, let me remind you that these are very basic, very simple operations.
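As a closing check, the three properties of transposition can be verified numerically; here is a minimal sketch in plain Python, with element-wise helpers for addition and scalar multiplication and made-up entries:

```python
def transpose(A):
    # The ij-th element of the transpose is the ji-th element of A.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_add(A, B):
    # Element-wise addition of two same-size matrices.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scalar_mul(lam, A):
    # Multiply every element of A by the number lam.
    return [[lam * x for x in row] for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # hypothetical entries
B = [[0, 1], [1, 0], [2, 2]]

# Transposing twice returns the original matrix.
assert transpose(transpose(A)) == A
# Transposition commutes with multiplication by a number...
assert transpose(scalar_mul(7, A)) == scalar_mul(7, transpose(A))
# ...and with addition of matrices.
assert transpose(mat_add(A, B)) == mat_add(transpose(A), transpose(B))
```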
The most important operation on matrices is multiplication of matrices, because it really reflects the transformation character of a matrix, and that will be the subject of the next lectures. Thank you, and good luck.