Okay, so good morning again, everyone. We'll start now. Yeah, so welcome to E2212. This is Matrix Theory. So before we begin, some general guidelines: let's keep our videos off. I'll switch off my video in a minute. I turned it on so you can see me during the first class. I'll turn it off in a minute or two. And also keep your microphones muted. This will avoid noise in the class. And if you have a question, please raise your hand. But if I don't see your hand raised, just unmute yourself and you can start speaking, and I will answer your question. Okay, so before we begin, let's maybe discuss the organization of the course. This is a new experience for all of us, attending a class online, and so we'll probably have to be a bit flexible and make things up as we go along. But roughly, this is the outline that I've thought of for this course. This file itself has been shared with you on Teams. Specifically, I'll show it to you in my Teams. So if you go to Teams and you click on the class team and you go to Files and then Class Materials, there are two things here. One is this file that I'm showing you, which is the course outline. And the other is the textbook by Horn and Johnson, which is the primary textbook I will be using for this course. Okay, and you will incidentally find posts and other things related to the course out here under Posts, and some crucial announcements in the Announcements tab, so that you can have all the announcements related to the course in one place. Okay, so coming back to the course outline. So this is E2212, Matrix Theory. And in this course, basically we'll study the basics of matrix theory, and I'll also talk about some applications to engineering. This is the course webpage, and typically I'll try to mirror all the announcements that are put up on Teams on the webpage.
But just to avoid confusion, all the announcements related to the course will be available on Teams, so that you don't really need to go elsewhere. Looks like I lost internet connection for a few minutes. Can somebody confirm if you're able to hear me now? Yes, we can hear you now. Okay, thank you. So incidentally, if I lose internet connection, don't panic, just wait. Hopefully the internet will resume after a minute or two and I will be able to reconnect with you. Okay, so. Yeah, so coming back to the course outline. The two TAs are Chirag and Nagbushan. Chirag, you're here. Would you like to say hello? Yes, hello everyone. I'm Chirag. I'll be TA for your course this time. Yeah. And Nagbushan, are you around? So both are excellent students, so you can ask your questions and doubts to them also. There are going to be two office hours, which are going to be Tuesdays and Wednesdays, 5 to 6pm, and a problem solving session, which will be on Saturday 9 to 10am. The formal material of the course will be covered during these classes. These are additional times where you can get some help for the course, but it's not mandatory to attend these sessions. But of course, if you have doubts, this is a good time to get them clarified from the TAs. And as for the problem solving session: there just won't be enough time to do a lot of problem solving in class for this course, and so if you want to see some example problems being solved, you should attend the problem solving session. For the grading of this course, I currently have a tentative plan of having two midterms and a final, in addition to quizzes and assignments, and all four parts will have equal weightage. I'm also debating whether to have one midterm and a final and have a higher weightage for the quizzes and assignments. That depends on how easy it is to administer and grade an exam. So we'll decide that in a few weeks' time.
But meanwhile, this is the tentative plan: your quizzes and assignments will be worth 25% of your grade, and the two midterms and the final will each be worth 25% of your grade. Now, one thing I realized is that given the pandemic situation, it's necessary to plan for the possibility that some of you may fall sick, unfortunately, during the time which we have fixed for an exam. And so this is going to be my policy. I don't want to have re-exams. And so if you miss an exam because of health reasons, then you do need to submit a letter showing that you were indeed sick, and that's why you had to miss a test. And in case you miss one test, then your points from the other two tests will be used to prorate your marks for the test you missed. But if you miss two or more tests, then you will get a zero on the tests that you missed, and we will prorate based on the tests that you've taken. But then your overall grade may be very poor. Homeworks are assigned roughly once in two weeks. These homeworks will not be graded and you don't need to turn them in. But we will have a quiz or assignment approximately once every week, which will have one or two problems which are very similar to the homework problems. These will be announced in class and you'll have to turn in your solution within one or two hours after the class. And then, from the series of quizzes and assignments, we will choose the best 10 scores to determine your score on this component. Textbooks: I have listed four textbooks here. The first textbook is by Horn and Johnson; it's Matrix Analysis. This is the textbook I will be following fairly closely. And the other textbooks I will use for some parts of the course; for example, I will use Golub and Van Loan, where some computational aspects are covered better. And Gilbert Strang is listed because it's a good undergraduate level textbook.
If you find Horn and Johnson a bit difficult, it's a good idea to go back and forth between Horn and Johnson and Gilbert Strang. There are also a series of very excellent video lectures at an undergraduate level on linear algebra. So I very strongly encourage all of you to take time to go over these video lectures. And in fact, for the most part, I'm assuming that you are comfortable with the material in these video lectures. This is a graduate level class, so I will only briefly summarize things that you should know from your undergraduate linear algebra modules. Almost every undergraduate program that I'm aware of does have a part on linear algebra. So I'm assuming that you are aware of this and you are familiar with it, and we will go forward from there. We lost audio again, sir. Your mic is muted. So you're muted, sir. I don't know how my mic got muted. How long ago did I get muted? Just 10 seconds, sir. Okay. Yeah, so I was basically done. I was just saying that the last part is the course outline, which you can look at on your own. Those are the topics that we'll be covering in the course. So the first question to ask is why should we study linear algebra? There are a few primary reasons that I put here, but I'm curious to know if... well, I guess I've already written the answers here. One obvious reason is that it is required: for example, if you're an MTech signal processing student, then you have to take matrix theory. But it's also useful. I mean, after calculus and probability, or in addition to calculus and probability, this is probably the most useful mathematics that you can possibly learn. It's also very beautiful, and I will try my best to give you a sense of that over the duration of this course. And it's a topic of active research in its own right. So if you are interested in doing research in mathematics, then certainly you need the background in linear algebra to even get started.
And finally, it allows you to solve very complex problems and/or prove very powerful results using simple ideas. So these are some of the reasons why you might want to take this course. Okay, so what is it about? If I had to distill the contents of this course down to what it is all about, it's just about these two equations: Ax = b and Ax = λx. From your undergraduate linear algebra, you will recognize Ax = b as what we call a system of linear equations, and Ax = λx is the eigenvalue-eigenvector equation. So it's really about understanding these two equations and everything that you can say about this pair of equations. And that's what this course is all about. But then it turns out you can actually say a lot about these two equations, and what I will in fact cover in this course is going to be a small sample of what you can say about them. It still will not be anywhere near exhaustive. So there are a few caveats I want to point out right off the bat. One thing is that in any mathematical course, during the class, when arguments are presented to you, it looks very simple, I can assure you of that, or it looks fairly simple and you feel like you understand everything. But it's very important to spend time outside of class from day one. You should look at the textbook, you should look for other material, you should try to solve problems. The class notes are not going to be enough. And when you solve problems, you will realize that sometimes standard procedures don't work, and problems end up requiring you to look for some special way to handle some corner cases. And in some ways it's also like learning a new language, where we get comfortable making mathematical arguments. Now, the textbook for this course, Horn and Johnson, is a graduate level textbook. It's a fairly dense textbook.
It's actually not easy to read. But nonetheless, it has a very extensive collection of results in the area. And one of my goals in this course is to get you to be comfortable with Horn and Johnson. Because it has so many useful results, and tomorrow in your research, if you need more advanced results from linear algebra, you shouldn't have to hesitate to open the textbook and look for a result that you could possibly use. And that comfort is really my goal: to get you to be comfortable enough with the textbook that you can open it up and look at it whenever you need to. Okay, so let's begin. We'll begin with a review of some basic concepts. Again, these are concepts that you should already know, and so if you're not comfortable with the things that I'm talking about now, then you should check whether this is really a course that you want to take or not. So a matrix is a rectangular array of symbols; in the context of this course, the symbols are always going to be real or complex numbers. So I can write a matrix A as a collection containing a11 as its first entry, a12 as its second entry, up to a1n as the nth entry in the first row; then a21, a22, up to a2n; and am1, am2, up to amn in the last row. And so this is what we call an m × n matrix, and we write that it is in the space R^(m×n) or C^(m×n), depending on whether the entries aij are real valued or complex valued. Always, when we write aij, i is going to represent the row index and j is going to represent the column index. Now, we say A = B if the two matrices match entry wise, so all the entries should be equal. You can form A + B only if the two matrices are of the same size, and it's an entry wise sum of the two matrices. That is, the ij-th entry of A + B is the ij-th entry of A plus the ij-th entry of B.
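As a quick aside, the entry-wise definitions above can be sketched in a few lines of Python. This is my own illustration (the function name mat_add is not from the lecture), with a matrix stored as a list of rows:

```python
# Minimal sketch: entry-wise addition of two m x n matrices,
# each stored as a list of rows. Only defined when sizes match.
def mat_add(A, B):
    m, n = len(A), len(A[0])
    assert len(B) == m and len(B[0]) == n, "sizes must match"
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```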
λ is a scalar here; it could be a real or complex number. λ times A corresponds to multiplying every entry of A by this value λ. Here's a simple proposition: A + B is the same as B + A, that is, matrix addition commutes. It's also associative: (A + B) + C is the same as A + (B + C). λ(A + B) is the same as first multiplying A by λ, then multiplying B by λ, and then adding them together. Also, multiplying A by λ1 + λ2 is the same as first multiplying A by λ1, then multiplying A by λ2, and then adding these two matrices together. For a product of scalars a similar rule applies: (λ1 λ2)A = λ1(λ2 A). Next, matrix multiplication. I'll start by talking about vector multiplication. If you have a row vector with entries a1 to an and a column vector with entries b1 to bn, then these are two vectors that can be multiplied with each other: this a is 1 × n and this b is n × 1, and when I multiply them together, that's taking the sum of ai times bi, for i equal to 1 to n, and so the product ab is going to be a scalar. With this, we can define matrix multiplication. If I have two matrices, A of size m × n and B of size n × p, then their product is a matrix of size m × p, and it is defined such that its ij-th entry is equal to the product of the i-th row of A with the j-th column of B. So I'll write this here: the ij-th entry of AB is the sum over k from 1 to n of aik bkj, where the row index i goes from 1 to m and the column index j goes from 1 to p. And matrix multiplication is very useful in many contexts. At this point I must mention that this is a strange way of defining the multiplication of two matrices. For example, one could have thought of taking matrices of the same dimension and multiplying them element wise. Or you could think about a matrix product as taking every entry of matrix A and multiplying it by the whole matrix B. If you do that, when you multiply an m × n matrix with an n × p matrix you'll end up with a matrix of size mn × np.
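The row-times-column rule just described can be written out directly in Python. This is a minimal sketch of my own (the name mat_mul is not from the lecture), covering both the row-vector-times-column-vector case and the general m × n times n × p case:

```python
# Sketch of the definition: (AB)_ij = sum over k of A_ik * B_kj,
# for A of size m x n and B of size n x p; the result is m x p.
def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

a = [[1, 2, 3]]         # 1 x 3 row vector
b = [[4], [5], [6]]     # 3 x 1 column vector
print(mat_mul(a, b))    # [[32]] -- the scalar 1*4 + 2*5 + 3*6
```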
Okay, so that kind of product, it turns out, we'll see later also; it's called a Kronecker product. When you take the element wise product of A and B, which can only be done if A and B are of the same size, that is called the Hadamard product. But this is the usual matrix product as defined here. It's a strange way of defining matrix multiplication, and at this point the only small motivation I can give you, but we'll see much more later, is that it represents a composition of linear transforms. As it turns out, a matrix, as I defined it earlier, is a rectangular array of numbers, but a more useful way to think about a matrix is to think of it as defining a linear transform. So a matrix A of size m × n is essentially defining a transformation from R^n to R^m, and any linear transformation from R^n to R^m can be represented as a matrix A. And if you take that viewpoint, then a matrix product AB actually corresponds to a composition of linear transforms. If you, for example, start from a dimension-p space, that is, you start from R^p, then multiplication by B corresponds to going from R^p to R^n. And then if you go from R^n to R^m by using another linear transform A, which is a matrix of size m × n, then the joint effect of taking these two transforms one after the other can be represented by this matrix AB as defined here. Another motivation I can give you: we will, maybe much later in this course, look at Markov chains, and it turns out that associated with a Markov chain is something called a transition probability matrix. A Markov chain is defined by states s1 to sn, and the probability that you will end up in state j starting from state i in the next step is represented as a matrix whose entries are pij.
Now if I ask what is the probability that I end up in state j starting from state i, not in one step of the Markov chain but after, say, p steps of the Markov chain, then it turns out that this corresponds to taking the one-step transition probability matrix and multiplying it by itself p times, and that multiplication, again, is defined in this way, as defined here. So think about it this way: this way of defining matrix multiplication is really not intuitive, but it's useful in a variety of scenarios, and that's why we define matrix multiplication this way. By the way, among the caveats, there is one thing that I wanted to mention, which is that a lot of students, I've seen, have a tendency to think about 2 × 2 matrices or 3 × 3 matrices in order to prove results. So when they're faced with a result, they would say, let me take an example, and then they take a 2 × 2 or a 3 × 3 matrix and show by example that whatever statement they want to show is in fact true. Such a proof is not acceptable for this class. What we want is that if a statement does not say that it's valid for 2 × 2 matrices only, then we have to prove it in the general case. We cannot show it in a 2 × 2 or a 3 × 3 case and consider that we are done. Okay, so this matrix multiplication as written here is not commutative in general, meaning that in general AB is not equal to BA. In fact, AB may be defined while BA is not: as I've defined it here, A is m × n and B is n × p, so I can define AB, but if m is not equal to p I cannot even define BA. So in general AB is not equal to BA. Okay, here is another proposition, continuing on: (AB)C is the same as A(BC). In other words, which matrices you multiply first and which one you multiply later doesn't matter, but it is important to preserve the order of multiplication.
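The two claims just made, associativity and the failure of commutativity, are easy to check numerically on small examples (which, per the caveat above, illustrates but does not prove them). A small Python sketch of my own:

```python
# Sketch: the matrix product is associative, (AB)C == A(BC),
# but not commutative in general: AB need not equal BA.
def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]
print(mat_mul(mat_mul(A, B), C) == mat_mul(A, mat_mul(B, C)))  # True
print(mat_mul(A, B) == mat_mul(B, A))                          # False
```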
That is, you cannot switch the order: instead of doing (AB)C you cannot do C(AB) or some other order. Similarly, A(B + C) is the same as AB + AC, and (A + B)C is the same as AC + BC. Notice that again, in all of these, we are preserving the order in which we are multiplying the matrices. So A(B + C) is not equal to BA + CA, for instance. Another very important matrix we will be using in this course is the identity matrix. This is denoted by I, and where there may be room for confusion I may write In to denote the n × n identity matrix. It's the matrix that has ones along the diagonal and zeros everywhere else. And it has the property that if A is m × n, then A times the n × n identity matrix is equal to A, and the m × m identity matrix times A is also equal to A. Transpose: taking the transpose of a matrix simply switches the rows and columns. So the ij-th entry of A transpose is the same as the ji-th entry of A. You can also define the Hermitian of a matrix, or the conjugate transpose of a matrix, where not only do you switch the rows and columns, so entry ij becomes ji, you also take the complex conjugate of each entry. Now, (A + B) transpose is the same as A transpose plus B transpose. The transpose of a transpose is the same as A. And this is an interesting result, and it's something that you can try to show: the transpose of the product of two matrices, (AB) transpose, is the same as B transpose times A transpose. How would you show such a result? You would take the ij-th entry of (AB) transpose and then you would find the ij-th entry of B transpose A transpose, by considering general matrices A and B whose entries are aij and bij respectively, and then you show that the ij-th entries of these two matrices are matching. Another function that you can apply to a matrix is the trace.
This is for square matrices, so here A is an n × n matrix, and the trace of A is the sum of its diagonal entries. A related result is that the trace of A + B is the same as the trace of A plus the trace of B. This is obvious because the diagonal entries will add: if you want the sum of the diagonal entries, you can first take the sum of the diagonal entries of A, then the sum of the diagonal entries of B, and then add them together. That's the same as adding the two matrices and then finding the sum of the diagonal entries. When you multiply A by a scalar λ, then every entry of the matrix gets multiplied by λ, and so do all the diagonal entries, and therefore the trace of λA is the same as λ times the trace of A. When you take the transpose of a matrix, that keeps the diagonal entries where they are; it only switches the off-diagonal entries. The rows become columns and columns become rows, but the diagonal entries remain the same. So the trace of A transpose is the same as the trace of A. This is not true for the Hermitian, because when you take the Hermitian, you're doing the conjugate transpose. So unless the diagonal entries are real valued, the trace of the Hermitian of A is not necessarily equal to the trace of A. Trace of AB: this is another interesting property, that the trace of AB is the same as the trace of BA, for compatible matrices, meaning matrices for which you can define both AB and BA. Notice that if I want to look at the trace of AB, it's not necessary that A and B should be square, but AB needs to be square, because the trace is defined for square matrices. So if A is m × n and B is n × m, then AB and BA are both defined, and both are square. In that case, you can write trace(AB) = trace(BA). Again, this is something that is worth a small exercise for you to try to show.
And once again, the way to show such a result is to simply write out what the trace of AB will be in terms of the entries of A and the entries of B, write out what the trace of BA will be in terms of the entries of A and the entries of B, and show that these two expressions are equal.
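Before attempting the entry-wise proofs, both exercises, (AB) transpose = B transpose times A transpose, and trace(AB) = trace(BA), can be checked numerically. Here is a small Python sketch of my own, using non-square A and B so that AB and BA have different sizes but both traces still agree:

```python
# Sketch checking the two exercises numerically:
# (AB)^T == B^T A^T, and trace(AB) == trace(BA) for compatible A, B.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]  # rows become columns

def trace(A):
    return sum(A[i][i] for i in range(len(A)))  # sum of diagonal entries

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]  # 3 x 2, so AB is 2 x 2 and BA is 3 x 3
print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # True
print(trace(mat_mul(A, B)) == trace(mat_mul(B, A)))                     # True
```

Of course, as emphasized earlier, a numerical check on one example is not a proof; the general argument still has to be made entry-wise.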