Hello and welcome back to the NPTEL lectures and to the course on Principles of Quantum Mechanics in the Department of Chemistry, IIT Madras. I am Mangala Sundar from the Department of Chemistry, and this is the second lecture, continuing from what we did in the first one. In the first lecture I talked about linear vector spaces, but I did not quite reach the point of defining them; we covered the matrix representation of vectors, scalar products, and some simple operations.

This lecture has two or three parts. The first part introduces the concept of linear vector spaces. I shall only give you some definitions or axioms; I am not going to do anything as formal as a mathematician would like, but as chemistry students we should be able to understand this language by seeing it over and over again in different contexts. Eventually, if you are interested, there are other NPTEL courses from the mathematics departments and mathematics professors which treat these concepts more formally. So let us start with the idea of linear vector spaces and write down some of the basic definitions. Today's lecture is an introduction to linear vector spaces and elementary matrix algebra.

You have seen one, two, and three dimensions without a formal introduction, through the idea of vectors, kets, and their components. A linear vector space of $n$ dimensions has a collection of vectors $v_1, v_2, v_3, \dots, v_n$. In principle the number of vectors can be very large, even infinite, but here we take a collection of $n$ vectors that are linearly independent; I shall define linear independence in a minute. When we say they are linearly independent, what we mean is that the equation formed by summing their linear combinations,

$$\sum_{i=1}^{n} a_i v_i = 0,$$

has no solution other than the trivial one, $a_1 = a_2 = \dots = a_n = 0$. That is, the vectors are independent of each other, and no one of them can be expressed in terms of the others. This is the basic idea with which we will use the concept of linear vector spaces in $n$ dimensions, because we do worry about both finite and infinite dimensional spaces. When we talk about the particle in a box, the harmonic oscillator, or the hydrogen atom, and when we talk about the abstract vectors and wave functions, you will see that there are an infinite number of such vectors, so the infinite dimensional idea is also important; but to keep the algebra simple, let us use what is known as a finite dimensional linear vector space.

Now, what is the space? It is defined as follows: every vector $v$ in the space can be expressed as some combination of the linearly independent basis vectors,

$$v = \sum_{i=1}^{n} a_i v_i.$$
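Linear independence is easy to test numerically: stack the vectors as columns and compare the matrix rank with the number of vectors. A minimal sketch, assuming Python with numpy (this is illustration, not part of the lecture material):

```python
import numpy as np

# Stack candidate vectors as columns; they are linearly independent
# exactly when the rank of the resulting matrix equals the number of vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])   # v3 = v1 + v2, so this set is dependent

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))  # prints 2, not 3 -> linearly dependent
```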
Some of the $a_i$ may be zero, or all of them may be nonzero. The example you already have is the arbitrary vector $\vec{a}$, which, if you remember, we wrote as the column vector with components $a_x, a_y, a_z$. In that case the basis, the linearly independent vectors $v_1, v_2, v_3$ from the last lecture, were $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$, and you could write any vector as $\vec{a} = \sum_i a_i \hat{e}_i$ with $i = x, y, z$. That is one example, but the definition is general.

What are the properties of these vectors? If we write $v_a + v_b$, that should equal $v_b + v_a$, and the sum is an element of the vector space $V$. The vector space is the space which contains the $n$ linearly independent vectors and every arbitrary vector $v$ that can be expressed as a linear combination of them. So the sum of any pair of vectors in the space is also an element of the space. A scalar constant multiplying a vector, written $a v_a$, is also an element of the space, and $a$ multiplying the sum of two vectors distributes: $a (v_a + v_b) = a v_a + a v_b$. The associative law holds: $(v_a + v_b) + v_c = v_a + (v_b + v_c)$; it does not matter in which way you associate the sum of three vectors, the result is the same. There is a unique null vector, denoted $0$, such that $v_a + 0 = v_a$ for all $v_a$; and for every $v_a$ there is a vector $-v_a$ such that the vector and its inverse (here, since we are talking about addition, its negative) add to give $0$. I have already mentioned that any arbitrary vector $v_a$ can be expressed in terms of the linearly independent basis vectors $v_i$, $i = 1, 2, 3, \dots, n$.

Let us complete this with the corresponding definitions of what are called the dual vectors, which we already introduced in the last lecture. All of the above are kets, or ket vectors; the dual vectors are given by the bra vectors. For every $v_i$ there is a dual $\langle v_i|$ such that an inner product, or scalar product, can be defined in the familiar notation of the vectors you know: $\langle v_i | v_i \rangle = |v_i|^2$, the square of the magnitude of the vector, also called the square of the norm. It is not necessary that the inner product involve only a vector and its own dual; it is also possible to define the inner product of any vector $v_j$ with the dual of any other vector $v_i$, and the product $\langle v_i | v_j \rangle = c_{ij}$ is in general a complex number, but in all our cases so far we have been talking about real quantities and real vectors, and therefore this will be a real number. When this number has specific properties, the vectors are also identified with specific properties, such as normalization, orthogonality, etcetera. The dual of the scalar multiple $a\, v_i$ is given by $a^* \langle v_i |$, where $a^*$ is the complex conjugate. Therefore, if we take the inner product of $v_a$ with the combination $b\, v_b + c\, v_c$, the scalar product is

$$\langle v_a | b\, v_b + c\, v_c \rangle = b \langle v_a | v_b \rangle + c \langle v_a | v_c \rangle.$$
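The conjugation rule on the bra side can be checked numerically. A small sketch, assuming numpy, whose `np.vdot` conjugates its first argument and therefore matches the bra-ket convention:

```python
import numpy as np

va = np.array([1.0 + 1.0j, 0.0])
vb = np.array([1.0, 1.0 + 0.0j])
a  = 2.0 - 3.0j

# np.vdot conjugates its first argument, like taking the bra of its first vector.
lhs = np.vdot(a * va, vb)           # <a va | vb>
rhs = np.conj(a) * np.vdot(va, vb)  # a* <va | vb>
print(np.allclose(lhs, rhs))        # True
```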
Likewise, if we take the scalar product of $a\, v_a + b\, v_b$ with $v_c$, the result is

$$\langle a\, v_a + b\, v_b | v_c \rangle = a^* \langle v_a | v_c \rangle + b^* \langle v_b | v_c \rangle.$$

All of this I am introducing only as definitions or properties; we have neither the purpose, nor the intent, nor the expertise to expand these things into a formal mathematical set of axioms and then the theorems which give these results. We do not want to do that; there are enough courses which give you the formalism. Here, as chemistry and physics students trying to understand quantum mechanics and become operational in its use, we take some of these things as axioms and work with them. We have to understand them by association with what we already know, such as the two and three dimensional vectors that we always manipulate in real space. In that sense these things can be understood through similar examples in two and three dimensions, some of which are given in my problem sets.

Now let me complete this with the two defining properties known as orthogonality and normalization, the unit vector property. Two vectors $v_a$ and $v_b$ are orthogonal if their scalar product is zero, $\langle v_a | v_b \rangle = 0$, and that is the same as the scalar product $\langle v_b | v_a \rangle$ being zero. A vector is normalized if its inner product with itself, the norm, is one: $\langle v_a | v_a \rangle = 1$. And if the members of a linearly independent set $v_1, v_2, \dots, v_n$ obey this property for every pair $i, j$,

$$\langle v_i | v_j \rangle = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j, \end{cases}$$

this is called the orthonormal property, and the set of $v_i$ are known as orthonormal vectors.

We will not go further on this at this instant. We shall use these ideas again later when we talk more about the matrix representation of operators, but let me leave this at this point and go on to something else which is also needed, because we have to get to the quantum mechanics and the quantum chemistry quickly; having done these things now, we can come back and relate to them, but I do not want to expand on this beyond this point. Let me now pause for a break and then start with the idea of elementary matrix algebra. In the rest of today's lecture I shall talk about one particular aspect of matrix algebra, namely finding the eigenvalues and eigenvectors, which are special properties associated with matrices. These are fundamentally important in any version of quantum mechanics, whether it is done for chemists, physicists, or mathematicians. Therefore, let us get to that problem very quickly, write down some matrices, try to solve for their eigenvalues and eigenvectors, and then relate them to the quantum mechanical problems that we want to study later. So let me pause for a break now; when we come back, this lecture continues with that part. We shall start with matrix eigenvalues and eigenvectors. In the last lecture you saw matrix representations of vectors and also of some operators, so I believe you are familiar with the addition, subtraction, multiplication, etcetera, of matrices.
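To make the orthonormal property concrete, here is a brief numerical check, assuming numpy: an orthonormal set has a Gram matrix of pairwise inner products equal to the identity.

```python
import numpy as np

# Columns of U are the candidate basis vectors; the Gram matrix of
# inner products <v_i | v_j> should be the identity for an orthonormal set.
U = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

gram = U.conj().T @ U
print(np.allclose(gram, np.eye(2)))  # True -> the columns are orthonormal
```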
So I assume that you are familiar with the basic algebraic properties, and in this part of the lecture I am going to talk about one particular property, namely certain core invariants associated with a matrix. These invariants are called eigenvalues, or characteristic values, as translated from the German ("eigen" meaning specific or characteristic), and they are fundamental properties of matrices.

Take an arbitrary $2 \times 2$ matrix, calling it $A$:

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.$$

The eigenvalues for this matrix are obtained by solving the equation in which $A$ times an eigenvector, a column vector $(x_1, x_2)$, equals a constant, usually denoted by $\lambda$, times the same column:

$$A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

Essentially we are solving the pair of simultaneous equations $A_{11} x_1 + A_{12} x_2 = \lambda x_1$ and $A_{21} x_1 + A_{22} x_2 = \lambda x_2$, or, written in homogeneous form,

$$(A_{11} - \lambda)\, x_1 + A_{12}\, x_2 = 0, \qquad A_{21}\, x_1 + (A_{22} - \lambda)\, x_2 = 0.$$

This homogeneous system has a nontrivial solution only if the determinant of the coefficient matrix vanishes:

$$\begin{vmatrix} A_{11} - \lambda & A_{12} \\ A_{21} & A_{22} - \lambda \end{vmatrix} = 0.$$

This gives you the values of $\lambda$: you can see that it is a quadratic equation in $\lambda$, and the solution of the quadratic gives two roots, and those two roots are called the eigenvalues.

The eigenvalue equation for any square matrix, and it is square matrices you will be concerned with most of the time, is written the same way:

$$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \lambda \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$

We shall see in a minute how to generalize this for all the eigenvalues $\lambda$; the way we have written it assumes there is one $\lambda$. But as I told you a minute ago, the determinant of the coefficients contains $\lambda$ to the power $n$; therefore in the $2 \times 2$ case there are two solutions, and in the $n \times n$ case there are $n$ solutions. So $\lambda$ has $n$ possible values, not necessarily all different; when some coincide we call those eigenvalues degenerate, but in general there are $n$ values. Let me write the $2 \times 2$ case first, and then the general part is easy to see.

Now let us get to the $2 \times 2$ case and solve the eigenvalue equation. The determinant condition is

$$\begin{vmatrix} A_{11} - \lambda & A_{12} \\ A_{21} & A_{22} - \lambda \end{vmatrix} = (A_{11} - \lambda)(A_{22} - \lambda) - A_{12} A_{21} = 0.$$

Therefore the quadratic equation we have is

$$\lambda^2 - \lambda\,(A_{11} + A_{22}) + (A_{11} A_{22} - A_{12} A_{21}) = 0.$$

This is the same form as $a x^2 + b x + c = 0$ that you solved in high school, with the roots given by $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
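In practice the determinant condition and the eigenvector equations are solved numerically. A minimal sketch, assuming numpy, with an illustrative $2 \times 2$ matrix chosen only for the example:

```python
import numpy as np

# numpy solves det(A - lambda*I) = 0 and the eigenvector equations
# internally; eig returns the eigenvalues and eigenvectors (as columns).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
for lam, x in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ x, lam * x))  # True: A x = lambda x for each pair
```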
If you relate to that form, you see that $a = 1$, $b = -(A_{11} + A_{22})$, and $c = A_{11} A_{22} - A_{12} A_{21}$, which is the determinant of the matrix you started with. Therefore we can write the roots of $\lambda^2 - \lambda (A_{11} + A_{22}) + (A_{11} A_{22} - A_{12} A_{21}) = 0$ as

$$\lambda = \frac{A_{11} + A_{22}}{2} \pm \frac{1}{2} \sqrt{(A_{11} + A_{22})^2 - 4\,(A_{11} A_{22} - A_{12} A_{21})},$$

and since $(A_{11} + A_{22})^2 - 4 A_{11} A_{22} = (A_{11} - A_{22})^2$, the result is

$$\lambda = \frac{A_{11} + A_{22}}{2} \pm \frac{1}{2} \sqrt{(A_{11} - A_{22})^2 + 4\, A_{12} A_{21}}.$$

Now you can see that if the matrix elements are all real and $A_{12} A_{21}$ is positive, the quantity under the square root is positive, since the square of any difference is non-negative, and therefore the roots are real. You can get complex roots, for example, if $(A_{11} - A_{22})^2$ is less than $-4 A_{12} A_{21}$, so that the quantity under the root is negative; then you have $\pm i$ times something. So it is possible to have complex roots, but complex roots of a quadratic with real coefficients occur in conjugate pairs.

This is the general solution for the eigenvalues. You have two of them, namely

$$\lambda_1 = \frac{A_{11} + A_{22}}{2} + \frac{1}{2} \sqrt{(A_{11} - A_{22})^2 + 4 A_{12} A_{21}}, \qquad \lambda_2 = \frac{A_{11} + A_{22}}{2} - \frac{1}{2} \sqrt{(A_{11} - A_{22})^2 + 4 A_{12} A_{21}}.$$

So you have two roots, and for each root you have, in principle, a solution: $A (x_1, x_2)^T = \lambda_1 (x_1, x_2)^T$, and also $A (x_1, x_2)^T = \lambda_2 (x_1, x_2)^T$ with another $x_1$ and $x_2$. Therefore you want to distinguish between the eigenvector $(x_1, x_2)$ associated with $\lambda_1$ and the one associated with $\lambda_2$. You do that by introducing a superscript label as a convenient device: $\lambda_1$ is associated with the eigenvector column $(x_1^{(1)}, x_2^{(1)})$, and likewise the eigenvector associated with the eigenvalue $\lambda_2$ is $(x_1^{(2)}, x_2^{(2)})$; you see that this is a different eigenvector. So we have two sets of eigenvectors for a $2 \times 2$ matrix. If the eigenvalues $\lambda_1$ and $\lambda_2$ are different, the two eigenvectors can be easily calculated by simply solving the algebraic equations; but if the eigenvalues are one and the same, for example if the determinant equation gives a perfect square and there is only one root, then you have what is known as a degenerate eigenvalue. We will discuss both of these in the examples we study after this.
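The closed-form roots can be compared directly against a numerical solver. A short sketch, assuming numpy, with illustrative matrix elements:

```python
import numpy as np

a11, a12, a21, a22 = 2.0, 1.0, 1.0, 3.0

# Closed-form roots of the 2x2 characteristic polynomial.
mean = (a11 + a22) / 2.0
disc = np.sqrt((a11 - a22)**2 + 4.0 * a12 * a21) / 2.0
lam1, lam2 = mean + disc, mean - disc

A = np.array([[a11, a12], [a21, a22]])
print(np.allclose(sorted([lam1, lam2]),
                  np.sort(np.linalg.eigvals(A))))  # True: same two roots
```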
So let us look at the eigenvectors associated with these eigenvalues now; but rather than solving the general equations, let us do simple examples. Consider the eigenvalues and eigenvectors of a simple matrix such as

$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$

It is an easy enough exercise: you solve $A (x_1, x_2)^T = \lambda (x_1, x_2)^T$, and we have just seen that $\lambda$ is given by the general expression $\frac{A_{11} + A_{22}}{2} \pm \frac{1}{2}\sqrt{(A_{11} - A_{22})^2 + 4 A_{12} A_{21}}$, which here is $1 \pm \frac{1}{2}\sqrt{4} = 1 \pm 1$. That gives two roots, namely $2$ and $0$; these are the two eigenvalues.

A mnemonic, that is, a rule to remember: if a matrix $A$ is given by its elements $a_{ij}$, the eigenvalue equation is very simple; you introduce $-\lambda$ along the diagonal and set the determinant to zero,

$$\det\!\left(a_{ij} - \lambda\, \delta_{ij}\right) = 0,$$

where $\delta_{ij}$ is known as the Kronecker delta: $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. This is a simple way to remember the eigenvalue equation.

Coming back to our problem, for the matrix of ones we have the two eigenvalues $\lambda_1 = 2$ and $\lambda_2 = 0$. If we take $\lambda_1 = 2$, the eigenvector we obtain satisfies

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1^{(1)} \\ x_2^{(1)} \end{pmatrix} = 2 \begin{pmatrix} x_1^{(1)} \\ x_2^{(1)} \end{pmatrix},$$

and the answer is immediate: the two rows give $-x_1^{(1)} + x_2^{(1)} = 0$ and likewise $x_1^{(1)} - x_2^{(1)} = 0$, the same equation. So the one solution is $x_1^{(1)} = x_2^{(1)}$. A homogeneous equation in two variables, of course, determines only one of them in terms of the other; the other is going to be arbitrary. So you choose either $x_1^{(1)}$ or $x_2^{(1)}$ arbitrarily as some constant, and then the other quantity is defined; remember that only ratios of these variables can be determined with respect to one variable, and this is true for any set of homogeneous equations. Therefore, how do we make these things unique? The eigenvectors are chosen as normalized eigenvectors.
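A quick numerical confirmation of this example, assuming numpy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals))  # [0. 2.], the two eigenvalues found above
# The eigenvector for lambda = 2 has equal components, x1 = x2.
```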
By that, what we mean is the following. If the eigenvector is written as $(x_1^{(1)}, x_2^{(1)})$, then, keeping in mind that the components can be complex in some future examples, normalization refers to setting the sum of the absolute squares of the components equal to one:

$$|x_1^{(1)}|^2 + |x_2^{(1)}|^2 = 1.$$

Since you have the equation $x_1^{(1)} = x_2^{(1)}$, if you set both equal to $a$, the column is $(a, a)$, and therefore $a^2 + a^2 = 1$, so the absolute value of $a$ is $1/\sqrt{2}$. In this case the eigenvector is

$$\begin{pmatrix} x_1^{(1)} \\ x_2^{(1)} \end{pmatrix} = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix},$$

or $\pm 1/\sqrt{2}$ for both components if we do not worry about the absolute value and simply ask what $a$ is. The point is that whether you choose plus or minus, you have to choose the same sign for both components, because $x_1^{(1)} = x_2^{(1)}$ for $\lambda = 2$, the eigenvalue with which this eigenvector is associated. So the eigenvector may be $(-1/\sqrt{2}, -1/\sqrt{2})$ or $(+1/\sqrt{2}, +1/\sqrt{2})$; the overall phase is not important. This has its own implications when we choose the phase of the wave function in quantum mechanics later on; we will see that there is a Condon–Shortley phase convention which tells us that arbitrary phase factors can be set equal to one, and so on. The relative phase is important, but not the absolute phase of the eigenvector.

What about the eigenvalue $\lambda = 0$? The equation we have to solve is

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1^{(2)} \\ x_2^{(2)} \end{pmatrix} = 0,$$

because the eigenvalue $\lambda$ is zero. Then you see that the algebraic equation is $x_1^{(2)} + x_2^{(2)} = 0$, which gives the solution $x_1^{(2)} = -x_2^{(2)}$. Again using the argument that $|x_1^{(2)}|^2 + |x_2^{(2)}|^2$ be set equal to one, the answer is that the column $(x_1^{(2)}, x_2^{(2)})$ is $(1/\sqrt{2}, -1/\sqrt{2})$. Now you see that it is the relative phase between the two components that is important: you can choose $(1/\sqrt{2}, -1/\sqrt{2})$ or $(-1/\sqrt{2}, +1/\sqrt{2})$; it does not matter which column you use, both satisfy the eigenvalue equation. Therefore the overall phase of the eigenvector is not important, but the relative phases between the components of the eigenvector, $x_1^{(1)}, x_2^{(1)}, x_1^{(2)}, x_2^{(2)}$ (these are all called the components of the eigenvectors), are important.

So let us summarize the solution that we have. For the matrix of ones, the eigenvalues were $\lambda = 2$ and $\lambda = 0$, and the eigenvectors were $(x_1^{(1)}, x_2^{(1)}) = (1/\sqrt{2}, 1/\sqrt{2})$ and $(x_1^{(2)}, x_2^{(2)}) = (-1/\sqrt{2}, 1/\sqrt{2})$; let me write the first component of the second eigenvector as minus and leave the other as plus. That is my choice; there is no reason for it, or maybe there is a reason, we will find out. If we want to write the matrix times all the eigenvectors equal to all the eigenvalues times the eigenvectors in a single equation, then we have to remember that whatever we write in that equation must satisfy both of the equations that we had earlier.
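Both the normalization and the freedom of overall phase can be verified numerically. A brief sketch, assuming numpy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

x = np.array([1.0, 1.0])
x = x / np.linalg.norm(x)           # normalize: components become 1/sqrt(2)
print(np.allclose(A @ x, 2.0 * x))  # True: eigenvector for lambda = 2

# An overall phase (here a sign flip) leaves the eigenvalue equation intact.
print(np.allclose(A @ (-x), 2.0 * (-x)))  # True
```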
Therefore, the correct way of writing the two equations together is to place the eigenvector columns next to each other: the first eigenvector column $(x_1^{(1)}, x_2^{(1)})$ and the second eigenvector column $(x_1^{(2)}, x_2^{(2)})$, side by side. Then what you see is that the equation is actually

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1^{(1)} & x_1^{(2)} \\ x_2^{(1)} & x_2^{(2)} \end{pmatrix} = \begin{pmatrix} x_1^{(1)} & x_1^{(2)} \\ x_2^{(1)} & x_2^{(2)} \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$

This equation can be broken down into its two component equations, namely matrix times eigenvector one equals $\lambda_1$ times eigenvector one, and matrix times eigenvector two equals $\lambda_2$ times eigenvector two. If you combine both of them, you see that the matrix times the eigenvector matrix equals the eigenvector matrix times the eigenvalues. The common notation for the eigenvector matrix is $U$, and what you see is the matrix equation $A U = U \Lambda$. Therefore, if we want to have only eigenvalues on one side, we may multiply on the left by $U^{-1}$, which exists here, of course:

$$U^{-1} A U = \Lambda.$$

So what you have done is to write the matrix $U^{-1}$, followed by the matrix of ones, followed by the eigenvector matrix $U$, and what you get is $\mathrm{diag}(2, 0)$; this $2$ is an eigenvalue and this $0$ is an eigenvalue. So the process of finding the eigenvalues and eigenvectors of a given matrix is, by the last line, equivalent to diagonalizing the matrix; I will explain this in the next segment.

So let us continue and complete this lecture with the following statement, which we can verify, namely that the eigenvalues of the matrix that we determined, and in general the eigenvalues of any matrix, are related to two other properties of the matrix. One is called the trace: the trace of a matrix is the sum of its diagonal elements, and it equals the sum of the eigenvalues; that is a statement I am making, and it is easy to see in a minute. The other is the determinant: the determinant of a matrix equals the product of the eigenvalues. We shall see this for our matrix of ones: the determinant here is $0$ and the trace is $2$; the product of the eigenvalues is $\lambda_1 \lambda_2 = 2 \times 0 = 0$, and the sum of the eigenvalues is $\lambda_1 + \lambda_2 = 2 + 0 = 2$. So it is easily verified. All I have done is verify it, but remember that the process of finding the eigenvalues and eigenvectors led to what we call $U^{-1} A U = \Lambda$. The $\Lambda$ is a matrix which contains $\lambda_1, \lambda_2$, etcetera, along the diagonal. We did a $2 \times 2$ matrix, but if you did this for $n \times n$ you would get $n$ eigenvalues along the diagonal, and the $U$ matrix would be $n$ eigenvector columns: the first column vector, the second column vector, the third column vector, etcetera. Therefore the process of finding eigenvalues and eigenvectors led to our recognizing that it is the same thing as diagonalizing a matrix.

Now, a property of the trace of a product of matrices is that the trace is cyclically invariant.
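The relation $U^{-1} A U = \Lambda$ can be verified for this example. A short sketch, assuming numpy, with the eigenvector columns chosen as above:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# Columns of U are the normalized eigenvectors for lambda = 2 and lambda = 0.
U = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2.0)

Lam = np.linalg.inv(U) @ A @ U
print(np.round(Lam, 12))  # diag(2, 0): the eigenvalues appear on the diagonal
```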
If the matrices in a product are permuted cyclically, the trace of the product does not change. That is, for matrices $a$, $b$, $c$,

$$\mathrm{Tr}(abc) = \mathrm{Tr}(bca) = \mathrm{Tr}(cab),$$

where we have kept the cyclic order of $a$, $b$, $c$; the trace is cyclically invariant. Likewise the determinant of $abc$ has the property that it equals the product of the individual determinants (the determinant, like the trace, is just a number you are talking about), so in particular $\det(abc) = \det(bca) = \det(cab)$. Then, if we apply this to our matrices $U^{-1} A U$, you see that $\mathrm{Tr}(U^{-1} A U) = \mathrm{Tr}(\Lambda)$, because this product equals the $\Lambda$ matrix; but if we permute the matrices cyclically, it is also $\mathrm{Tr}(A\, U U^{-1})$, and of course by definition of the inverse this is $\mathrm{Tr}(A \cdot \mathbb{1}) = \mathrm{Tr}(A)$. You see immediately that $\mathrm{Tr}(A) = \mathrm{Tr}(\Lambda)$, and the trace of $\Lambda$ is nothing but the sum of the eigenvalues, because $\Lambda$ is the collection of the eigenvalues of the matrix along the diagonal, with zeros everywhere else. It follows in exactly the same way that $\det(U^{-1} A U) = \det(A\, U U^{-1}) = \det(A)$, since the determinant of the identity is one; and this equals $\det(\Lambda)$, which, $\Lambda$ being just a diagonal matrix, is the product of the diagonal elements, that is, the product of the eigenvalues.

These are fundamentally important properties in quantum mechanics, whether you do chemistry or biology or physics or whatever it is; if you are going to study the properties of atoms and molecules using the basic microscopic mechanics, these properties of the associated mathematics have to be kept in mind.

What I would therefore leave you with is an exercise. Our $U$ matrix was built from the two columns we found; I chose

$$U = \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix}.$$

Now, what is $U^{-1}$ for this matrix? How do we find it? One way is the brute force method, namely finding the inverse of a matrix using the cofactors and the determinant; if you do that you will get the following answer:

$$U^{-1} = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix}.$$

In this case $U$ is special: it is known as an orthogonal matrix. Orthogonal matrices have determinant $\pm 1$, and a special orthogonal matrix is one whose determinant is $+1$; but more importantly, orthogonal matrices are defined by the property $U^T U = \mathbb{1}$, that is, the transpose of $U$ equals its inverse, $U^T = U^{-1}$. So the inverse of this matrix $U$, it being orthogonal, is given by its transpose. The first exercise, therefore, is for you to verify that $U^{-1} A U$ gives you the $\Lambda$ matrix.

$A$ is also known as a Hermitian matrix, but for our purposes it is called a symmetric matrix, because there are no complex elements in it.
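The cyclic invariance of the trace and the product rule for determinants are easy to test on random matrices. A minimal sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Trace is invariant under cyclic permutation of a product.
print(np.allclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))  # True

# The determinant of a product is the product of the determinants.
lhs = np.linalg.det(A @ B @ C)
rhs = np.linalg.det(A) * np.linalg.det(B) * np.linalg.det(C)
print(np.allclose(lhs, rhs))  # True
```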
If there are no complex or imaginary elements in a matrix, then, if it is Hermitian, it is also symmetric; a symmetric matrix is one that equals its transpose, $A = A^T$. Please see the difference: $A = A^T$ does not mean $A = A^{-1}$; in fact, in this case $A^{-1}$ does not exist, because the determinant of $A$ is zero. So please remember: all square matrices have eigenvalues. It is possible to find all of them if the matrix has special properties; it may be difficult if the matrix coefficients are of a somewhat peculiar nature, and if the matrix is a general numerical one it may not be possible to find the eigenvalues accurately, but all square matrices have eigenvalues. Not all matrices, however, have inverses: in our case the determinant of the matrix of ones is zero, so $A$ does not have an inverse. Please remember that $A = A^T$ is the Hermiticity or symmetry property, and $U^T = U^{-1}$ is the orthogonality property (for complex matrices, unitarity), and we make use of these in determining the eigenvalues and eigenvectors. In fact, it is the other way around: the eigenvalues and eigenvectors give us $U$.

The second problem that I would like you to do following this lecture is to obtain the eigenvalues and eigenvectors of the following matrices. First,

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

which is also known as the Pauli spin matrix for the $x$ component and has the special designation $\sigma_x$; Wolfgang Pauli used these extensively in studying the spin properties of a spin-half system. Then,

$$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},$$

where $i = \sqrt{-1}$. I mentioned Hermitian matrices: a Hermitian matrix essentially has $a_{ji} = a_{ij}^*$, that is, each element equals the complex conjugate of the transposed element. This $\sigma_y$ is a Hermitian matrix, because its transpose is $\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}$, and taking the complex conjugate gives back the original matrix. Both of these matrices have real eigenvalues; determine the eigenvalues and eigenvectors of these two matrices as examples.

We will continue this exercise in slightly more detail in the next lecture by taking some complex examples for $3 \times 3$ matrices and making some general statements. Those statements, along with the statements of this lecture and of the previous one, form the core of the linear algebra basis that you require. Therefore I request you to go through these exercises and verify for yourself that what I have said makes sense to you. We will continue this in the next lecture and determine the eigenvalues of $3 \times 3$ matrices. Until then, thank you very much.
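If you want to check your hand calculations for these two exercise matrices afterwards, here is a brief sketch, assuming numpy (`eigvalsh` is the routine for Hermitian matrices and returns real eigenvalues):

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j],
                    [1.0j,  0.0]])

for name, M in [("sigma_x", sigma_x), ("sigma_y", sigma_y)]:
    vals = np.linalg.eigvalsh(M)  # Hermitian solver -> real eigenvalues
    print(name, vals)             # compare against your own results
```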