Hi, I'm Zor. Welcome to a new Zor education. I would like to start a new topic. It's called matrices.

Well, let me start from certain philosophical considerations. In many aspects of real life, we deal with certain elementary objects, and then we combine them into a bigger object. For instance, we can deal with individual bricks, and then we consider a house, which is built from these bricks. Now, we can basically stop thinking about individual bricks if we are talking about features of the house, like its price, its size, its location, and so on. In mathematics, we do exactly the same thing in many cases. For instance, we can think about certain elementary objects, like points and lines on the plane, and then we combine them into something more sophisticated, like a triangle. And a triangle has its own properties. It's a bigger object, a complex object which contains some elementary ones. But we abstract out the elementary aspects of it and concentrate on the properties of the bigger, complex object: what's the area of the triangle, for instance, or what types of triangles exist, and so on.

Now, very recently we were talking about vectors. Vectors are just one step into a more complex world from the real numbers, let's say. A vector can be considered as an ordered set of real numbers, and it obviously has its own life, its own properties, its own characteristics. If a real number has basically only one important characteristic, its magnitude (whether it's a value or a quantity or something like this), a vector also has a direction. This particular property makes it a little bit more complex. Matrices, which we are talking about today, are just another level of complexity, which we build upon other, more elementary objects.

Now, let me go back to vectors. In one particular lecture, I was talking about the independence of the scalar product of two vectors from the system of coordinates. If you remember: if we had a system of coordinates, and one vector u was represented by two coordinates, let's say u1, u2, and another vector v was v1, v2, then their scalar product, or dot product, is u1 v1 + u2 v2. This expression does not change if we change the system of coordinates. Now, what does changing the system of coordinates mean? Let's say we are rotating the system by an angle phi. Then instead of u1 and u2, I will have new coordinates, let's say p1 and p2:

p1 = u1 cos φ - u2 sin φ
p2 = u1 sin φ + u2 cos φ

Correspondingly, q1 and q2 are given by exactly the same formulas, but with v instead of u. So p1, p2 are the new coordinates of the vector u, and q1, q2 are the new coordinates of the vector v. And I was talking about, and actually proved, that this transformation of coordinates doesn't really change the expression: p1 q1 + p2 q2 is exactly the same as u1 v1 + u2 v2. I basically multiplied everything out, whatever the coordinate transformation requires, and came up with exactly the same thing.

Now, let's think about this particular transformation of coordinates, a rotation by the angle phi in this case. What's important here? How can I characterize this particular transformation? And by the same token, I can think about any kind of linear transformation, where my old coordinates participate with certain coefficients to produce the new coordinates.
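To make this invariance concrete, here is a minimal numerical sketch in Python (the language, the example vectors, and the angle are my own choices for illustration; they are not from the lecture). It transforms the coordinates of two vectors by the rotation formulas above and checks that the dot product comes out the same.

```python
import numpy as np

def rotate(u, phi):
    # New coordinates after rotating the system by angle phi:
    # p1 = u1 cos(phi) - u2 sin(phi),  p2 = u1 sin(phi) + u2 cos(phi)
    u1, u2 = u
    return np.array([u1 * np.cos(phi) - u2 * np.sin(phi),
                     u1 * np.sin(phi) + u2 * np.cos(phi)])

u = np.array([3.0, 1.0])    # arbitrary example vectors
v = np.array([-2.0, 4.0])
phi = 0.7                   # arbitrary angle, in radians

p = rotate(u, phi)          # p1, p2: new coordinates of u
q = rotate(v, phi)          # q1, q2: new coordinates of v

print(u[0] * v[0] + u[1] * v[1])   # u1 v1 + u2 v2
print(p[0] * q[0] + p[1] * q[1])   # p1 q1 + p2 q2: same value
```

Both printed values agree (up to floating-point rounding), which is exactly the invariance proved in that earlier lecture.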
Well, for this particular transformation of coordinates, I can say that the coefficients, cosine phi, minus sine phi, then sine phi and cosine phi, these four numbers presented in a table-like format, basically define my transformation:

‖ cos φ  -sin φ ‖
‖ sin φ   cos φ ‖

So, what I can say is that my transformation, a linear transformation in this case from one system of coordinates to another, is defined by this particular table, and this particular table is called a matrix. In this case, it's a two-by-two matrix: two rows, two columns. It's a square matrix, square because both dimensions are the same, two by two. So, this is basically a definition of the matrix. A matrix is a little bit more complex entity than, let's say, a vector. A vector, in the two-dimensional case, is just a set of two coordinates arranged in proper order. A matrix is a table, in this case a two-by-two table, of certain numbers which basically characterize the transformation. So, in this particular case, I'm using matrices as a way to describe a linear transformation of coordinates. That's not the only way matrices are used, but it is maybe the first, or one of the original, ways to view matrices. Matrices help us to define linear transformations of coordinates. So, that's basically my first introduction to the concept of matrices.

Okay, fine. So, what's next? There are other transformations of coordinates, and some are significantly simpler than this one. This one is a rotation, described by this particular matrix. Let's talk about a somewhat simpler transformation of coordinates: scaling. Now, suppose I have a vector, in two-dimensional space, where it's easier to see. What does the transformation of scaling mean? It means that every coordinate is scaled by some factor k, so the vector u goes to k u. Now, what are the important properties? Well, this transformation actually has quite important properties. First, it's distributive over vector addition: k(u + v) = k u + k v. It follows from the definition. It's also associative relative to scaling: instead of first multiplying by a scaling factor l and then multiplying the result by a scaling factor k, I can multiply the scaling factors together and apply them to the original vector: k(l u) = (k l) u. Obviously, we have scaling by one, which doesn't change anything. We also have scaling by zero, which converts any vector into the null vector. So, we have certain nice properties of this particular transformation, and it is also a linear transformation.

So, my point is that as soon as we have defined a new entity called a matrix, it would be nice to define certain operations with these matrices, certain rules which these operations should adhere to, and so on. And scaling is a model, a good model transformation: I would like any linear transformation defined by a matrix, as I described before, to have analogous nice properties. If we introduce a new object and certain operations with this object, but the operations are somehow not natural (and the properties above I consider to be natural), then people will not be used to dealing with these types of operations, and nobody would probably need this new type of object.
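As a small sketch of these properties, here is a Python check; note that writing scaling as a matrix (the factor k on the diagonal, zeros elsewhere) is my own illustration and runs slightly ahead of the lecture, since we have not yet defined how a matrix acts on a vector.

```python
import numpy as np

def scaling(k):
    # Scaling as a 2x2 matrix: k on the diagonal, 0 elsewhere (k times the identity)
    return k * np.eye(2)

u = np.array([3.0, 1.0])    # arbitrary example vectors
v = np.array([-2.0, 4.0])
k, l = 2.0, 5.0             # arbitrary scaling factors

# Distributive over vector addition: k(u + v) = k u + k v
print(scaling(k) @ (u + v), scaling(k) @ u + scaling(k) @ v)

# Associative relative to scaling: k(l u) = (k l) u
print(scaling(k) @ (scaling(l) @ u), scaling(k * l) @ u)

# Scaling by 1 changes nothing; scaling by 0 gives the null vector
print(scaling(1.0) @ u)     # [3. 1.]
print(scaling(0.0) @ u)     # [0. 0.]
```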
Matrices were introduced not just to conveniently record this linear transformation; certain operations with matrices are also needed, and we will define them all. So, I would like to present scaling as a model, a goal which we will try to pursue when introducing the rules for operations with matrices. What we would like: we would like to be able to multiply a matrix by a vector. We would like to be able to add two matrices together. We would like to have some associative law when we apply one matrix to a vector and then another. We would like to have a matrix which, after operating on a vector, doesn't change the vector, and one which converts any vector into the null vector. These are all properties which we would like to define somehow.

And then one more thing, to basically complete the picture. Here is what I mean. There is an area of mathematics which seems to be completely different, but using these matrix operations, it falls very nicely into this same framework. Let's consider a system of linear equations; say, two equations with two unknowns:

b1 = a11 x1 + a12 x2
b2 = a21 x1 + a22 x2

I have two unknown variables, x1 and x2; a11 and a12 are the coefficients of the first equation, and a21 and a22 of the second. Notice the indexing. It's actually a very convenient way to use double indices, where the first index signifies the number of the equation and the second identifies which variable the coefficient multiplies: a11 is first equation, first variable; a12 is first equation, second variable; a21 is second equation, first variable; a22 is second equation, second variable.

Now, consider the right side of this system. It actually represents a linear transformation: the vector x1, x2 is linearly transformed using a matrix which can be presented in tabular form:

‖ a11  a12 ‖
‖ a21  a22 ‖

So, this is my matrix. I'm using two double lines on the left and right to signify that this is a matrix. And the vector is transformed into b1, b2. Right? We have x1, x2 as one vector, so to speak. We apply this linear transformation, defined by this particular matrix; let's call it A. So, this is the matrix A, which is applied to this vector, and we get the vector b1, b2, another pair of coordinates. From one ordered pair, x1, x2, we get another.

Now, let's go back to scaling. Remember, if we have v = k u, then, assuming k is not equal to 0, I can multiply both sides by k⁻¹, which is 1/k, and that gives u = k⁻¹ v. So, I basically recover u from v by reversing my scaling: from k, I go to 1/k. Well, it would be great if I could do the same thing with matrices. So, my vector b is the result of acting with the matrix A on my vector x:

b = A · x

I use the dot here, although I have not really defined what this operation is. It's an operation, so to speak: the matrix A applied to the vector x in this particular fashion. That's all I'm saying. But it would be great if I could apply to both sides of this equation another linear transformation, which I call A⁻¹, and obtain the vector x. Now, why is this important? It's very important, because this x is now a solution of the system of linear equations. So, if, knowing the matrix A, I can come up with another matrix which in some way is an inverse to the matrix A, and apply this inverse matrix to the left part of my equations, to the vector b, I will get my two unknowns.
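Here is the same idea as a runnable sketch (the coefficient values are made up for the example, and numpy's ready-made `inv` stands in for the inverse matrix A⁻¹, which we have not yet learned to construct by hand):

```python
import numpy as np

# The system  b1 = a11 x1 + a12 x2,  b2 = a21 x1 + a22 x2
# written as the matrix A applied to the vector x.
A = np.array([[2.0, 1.0],     # a11, a12 (made-up coefficients)
              [1.0, 3.0]])    # a21, a22
b = np.array([5.0, 10.0])     # b1, b2

x = np.linalg.inv(A) @ b      # apply the inverse matrix to b
print(x)                      # [1. 3.]  ->  x1 = 1, x2 = 3

print(A @ x)                  # check: A applied to x reproduces b
```

In numerical practice one would call `np.linalg.solve(A, b)` rather than forming the inverse explicitly, but the inverse makes the conceptual point of the lecture visible.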
So, inverting the matrix is equivalent to solving this system. And if I find out that there is some standard procedure for inverting a matrix, then I can solve these equations very easily, instead of solving them in the relatively straightforward, but quite primitive and long, way: okay, solve the first equation for x1, substitute it into the second, and then I have an equation for x2 only; that's how I find x2. Now, what if it's three equations with three variables? Well, then it's more steps of the same kind. Instead, I'll just invert the matrix, which might be a challenge by itself, but at least it's one operation: invert the matrix. And then I apply this inverted matrix to the left part, to the vector b, and I get my solution. So, matrix notation helps to solve systems of linear equations, as you see. This is a completely different area; this algebra seems to be completely unrelated. Yet the abstract step of introducing matrices and operations on matrices, including inverting a matrix, for instance, can help in a completely different area.

And that's what real mathematics is all about. We are dealing with a problem, and we don't really know how to approach it in a nice way. And then we invent a whole new theory, and from the viewpoint of that theory we take a look at our old problem, and all of a sudden it turns out to be a particular case of something which, in that higher-level theory, we have already resolved in a very nice and convenient way. Well, that's what abstract mathematics is all about.

So, this lecture is an introduction to matrices as a new entity which, if we properly define all its aspects, might be very useful in many different cases: in vector analysis, because a matrix is basically a linear transformation of vectors; in solving systems of linear equations; and in many others as well. This is an introduction, and I just wanted to present this new concept to you: the matrix. Next, we will learn about basic operations with matrices and other topics. Well, that's it for today. Thank you very much, and good luck.