the CI. So, we talked of an important theorem of linear variation. Incidentally, this theorem has nothing to do with CI as such; it belongs to linear variation, so it is basically a theorem of mathematics. If you vary a wave function which is expanded linearly in a basis, the basis functions are known and the coefficients are the only unknowns. That is what we mean by linear expansion. Of course, if you change the basis functions themselves, it is no longer linear variation, because your basis usually contains something like exp(-zeta r), and varying the exponent, just as in the hydrogen atom, or a factor r squared, is not varying a linear parameter. Linear variation means that the parameters you vary appear linearly in the expansion of the function: the C's are linear. If even one term contains, say, C squared, it is no longer linear. I hope that is clear. So if I had a basis and added one more term, say k squared times some other known function chi, that would no longer be linear variation, because k is an unknown parameter I must vary and it appears quadratically. So what we are talking about holds only for linear variation. The theorem says that if I vary in the normal way, that is, vary with respect to the coefficients C_i, the result is an eigenvalue equation, HC = CE in general form, where C is the coefficient matrix whose columns are the eigenvectors and E is the diagonal matrix of the eigenvalues of the problem. This is something we proved long back: you take the normalization <psi|psi> = 1 as a constraint with a Lagrange multiplier and vary the coefficients. We have already proved it, and we can directly prove it again.
So, this is what you get. If instead you use the method of projection, which I did last time, the theorem says you get the same thing. Start with the Schrodinger equation H psi = E psi, put in the expansion psi = sum over i of C_i phi_i, so that H (sum over i C_i phi_i) = E (sum over i C_i phi_i), and then project with one basis function at a time, a specific phi_j. You get sum over i of <phi_j|H|phi_i> C_i = E C_j, provided phi_j and phi_i belong to an orthonormal basis. If they are not orthonormal the result changes here too, but that detail is not important now; we assume an orthonormal basis in both cases. Note that with a non-orthonormal basis the results will still be essentially identical, except that it is no longer a plain eigenvalue equation, and you should be able to guess what it becomes: the overlap matrix enters, just as in the Roothaan equation FC = SCE, and you can then work in a non-orthogonal basis. So this equation gives sum over i of H_ji C_i = E C_j, which is again the same eigenvalue equation. We already proved in the last class that this is what you get; the method of projection is actually very quick, essentially a one-line proof, and the point I want to make is that it yields the same equations. I hope all of you remember how we got this equation by the method of variation, but for those who do not, let me refresh your memory: you extremize <psi|H|psi> subject to <psi|psi> = 1. In an orthonormal basis, <psi|psi> = 1 means sum over i of C_i* C_i = 1; this expansion is very easy in an orthonormal basis, because when you expand with dummy indices i and j, <phi_i|phi_j> = delta_ij.
Then of course <psi|H|psi> = sum over i,j of C_i* C_j h_ij, where h_ij is the matrix element of the Hamiltonian between i and j. Note that whenever I write h_ij it means <phi_i|H|phi_j>; that is the usual symbol. Whether you write h_ij or h_ji does not matter, you can always relabel the dummy indices to suit the form. So you now write a Lagrangian, L = sum over i,j of C_i* C_j h_ij minus lambda times (sum over i of C_i* C_i minus 1). Remember, whenever you use a Lagrange multiplier, first put the constraint in the form something = 0, and then multiply only that left-hand side by lambda; that is the trick. Now vary L with respect to the C's and lambda. Varying with respect to lambda trivially gives back the normalization condition. Varying with respect to a particular coefficient, say taking del L / del C_k*, gives sum over j of h_kj C_j = lambda C_k, with only that one specific C_k surviving on the right; so lambda plays the role of E. That is the reason we prefer to write the free index as k rather than reuse i: so that it is clear that k is no longer a dummy index. You can easily do this exercise. So you get the same equation as before; it is just i and j interchanged, otherwise it is the same equation. I think we had already done this; I just showed that the method of projection provides the same result. And this is true in general, which is why I said it is mathematics: the Schrodinger equation is nothing but an eigenvalue equation.
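As a quick numerical illustration (a sketch with an arbitrary symmetric matrix standing in for the Hamiltonian in an orthonormal basis, the numbers are made up), the linear-variation result HC = CE is exactly what a standard eigensolver returns:

```python
import numpy as np

# A small symmetric "Hamiltonian" matrix in an orthonormal basis
# (hypothetical numbers, just to illustrate the linear-variation result).
H = np.array([[1.0, 0.2, 0.1],
              [0.2, 2.0, 0.3],
              [0.1, 0.3, 3.0]])

# Linear variation (or, equivalently, projection) leads to HC = CE:
# C holds the eigenvectors as columns, E the eigenvalues.
E, C = np.linalg.eigh(H)

# Check the eigenvalue equation column by column: H c_k = E_k c_k
for k in range(3):
    assert np.allclose(H @ C[:, k], E[k] * C[:, k])
```

Each column of C is one eigenvector with its own eigenvalue, which is the matrix form HC = CE described above.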
So, for any eigenvalue equation, whenever you optimize the eigenvalue by a variational principle, in the case of linear variation you get the same result by either variation or the method of projection. That is why it is really a mathematical theorem, nothing to do with quantum chemistry per se; we are simply going to use it in quantum chemistry. Is that clear? Anywhere there is an eigenvalue equation you can use the variation principle to get the eigenvalues, so it can be used in physics, in engineering, in many problems, and the theorems all remain true. You should appreciate the generality: certain parts of quantum chemistry are very general, we are actually using mathematics to understand certain principles, and they can be used anywhere. With that, we will now derive the CI equations. Of course, please remember what the basis is in configuration interaction: the phi_i are now Slater determinants. They are N-electron determinants, and the CI wave function is an N-electron function. What we did before was the general problem, where psi and phi can be anything, one-electron, two-electron, it does not matter; but for our problem the wave function is an N-electron wave function and the basis functions are N-electron Slater determinants. We have to write the equations, and quite clearly we will get the same equations, though going through the variation principle here would have been more cumbersome. We could now jump directly to the equations, but I would still like to go through the method of projection just to show you the nature of this H_ij matrix, because the key thing now is actually to get this matrix. What is this matrix? It is the matrix of the Hamiltonian between Slater determinants. So all I need to do is calculate the matrix of the Hamiltonian between Slater determinants and diagonalize it.
So the CI problem is conceptually extremely easy once you understand the mathematics: you construct this matrix and diagonalize it, and you have all the energies. I will still go through the derivation again, but the point is simply to construct this matrix and diagonalize. And then all the other theorems of linear variation apply, in particular the MacDonald theorem: each eigenvalue is an upper bound to the corresponding exact state. Remember the MacDonald theorem; all of that applies now. In that sense CI is a very powerful method, because remember, when we did Hartree-Fock we talked only of the ground state, and in perturbation theory only of improving the ground state, but in CI we get an upper bound for every state. So CI is potentially a good method for obtaining excited states, not just the ground state; this is the first time we are talking of a method which can potentially give you excited states. However, CI has a lot of deficiencies, particularly approximate (truncated) CI, which is what we will discuss, not full CI, and that is where CI has really not turned out to be a very good method; we will discuss that later. So let me go through the derivation once more, but this time only by the method of projection, to show the structure of the matrix, and then of course to evaluate the matrix elements. We already know how to evaluate them, by the Slater rules: depending on whether the two determinants are identical, differ in one spin-orbital occupancy, or differ in two, we can write the matrix element down. But remember, here the determinants can be any psi_i and any psi_j, so you have to be very careful when you actually write out the Slater rules; that is the technical part. What I want to convince you of is that the entire toolkit required, and its consequences, have already been covered.
So in a way CI is very easy; what we really have to examine are the deficiencies of CI. Let us go back to the CI problem and recall that psi was written as C_0 psi_0 (you can use phi or psi, whichever you like) plus sum over a,r of C_a^r psi_a^r, and so on. When we presented the general problem, our wave function was very easy to write with a single index. Now these same basis functions, my phi_i, carry other indices, a,r or a,b,r,s, but that is only to specify the determinant with reference to the Hartree-Fock determinant; that is why all these indices appear. Please remember that all these indices together are what was one index i before: a combination a,b,r,s was one particular i there. Nothing has changed; the extra indices are simply needed to keep track of the basis itself. Now, suppose I cut the expansion at some point and approximate my wave function: then of course it is no longer full CI, so it is not exact even in a finite basis. I should put it the other way around: it is not even exact in the finite basis, it is approximate. So there are two kinds of approximation: one is the basis approximation I talked of, and the other is the truncation of the CI, because even in a finite basis M-choose-N is too large a number to really solve the matrix equation, since this Hamiltonian matrix becomes M-choose-N by M-choose-N. You can very quickly see how big M-choose-N is: let us say M is 100, which is a good basis, or even 50, and N is only 10.
So you are talking of a very modest electron count, a molecule like methane, with 50 basis spin orbitals, which is about the basis size that even today referees will demand for any calculation, and that already means 50-choose-10. Please work out that number: it is 50 factorial divided by (10 factorial times 40 factorial). The cancellation fortunately helps, I can cut a lot of terms, but you are still left with (41 x 42 x ... x 50) divided by 10 factorial; do that number and see how quickly it blows up. People do not realize what M-choose-N is: it is a ratio of factorials, and beyond a point it is simply not doable. So obviously we have to approximate, and this is where the physics comes in: how do I approximate? The first class you take is the doubly excited determinants, because I am going to approximate based on physics, and I have learned from MP2 calculations, as well as from Sinanoglu's articles that we described physically, that the most important correlation is the pair correlation. Two electrons try to avoid each other, so they get excited out of the Hartree-Fock determinant to form another determinant from the basis. Remember that "excited" here again does not mean an excited state; we are still doing everything for the ground state. This is still the ground-state function, I should really write psi_0. Eventually of course we will get all the excited states, from a very different perspective in CI, but ground or excited does not matter: what I want to show is that these determinants also contribute to the ground state. They will eventually contribute to the other states too, because we have an eigenvalue problem.
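The blow-up of M-choose-N mentioned above is easy to verify (Python's math.comb gives the exact binomial coefficient):

```python
from math import comb

# Number of N-electron determinants that can be built from M spin
# orbitals is M-choose-N. For M = 50 spin orbitals, N = 10 electrons:
n_dets = comb(50, 10)
print(n_dets)  # 10272278170, i.e. about 1e10 determinants

# Even a full CI "Hamiltonian matrix" of this dimension is hopeless:
# it would have n_dets**2 elements.
```

So even this very modest example gives a matrix dimension of about ten billion, which is why truncation is unavoidable.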
So obviously, if I had to take only one class of determinants apart from Hartree-Fock, I would choose the doubles; that is the first thing I have learned. Then you have a choice: what will be your second class, singles or triples? Singles you might be tempted to eliminate, because I told you Brillouin's theorem only says that the Hamiltonian matrix element between Hartree-Fock and a singly excited determinant is zero, and you will see later why singles still contribute once doubles are present. Before that, let me first convince you that singles alone are no good, precisely because of Brillouin's theorem. So let us take an even simpler approximation, which I will call CIS, where I write psi_0 = C_0 psi_HF + sum over a,r of C_a^r psi_a^r. That is all, I just cut the expansion there. CIS means configuration interaction including only singly excited determinants; the S stands for singles. I can now go through the method of projection once again; actually it is trivial and I could come directly to the solution, but note that I am just re-deriving, this is nothing new. First write H (C_0 psi_HF + sum over a,r C_a^r psi_a^r) = E (C_0 psi_HF + sum over a,r C_a^r psi_a^r). Continuing with CIS, you project with the basis functions one at a time. The first projection is with psi_HF, since psi_HF itself is a member of the basis. Projecting the left-hand side gives <psi_HF|H|psi_HF> C_0 plus sum over all a,r of <psi_HF|H|psi_a^r> C_a^r; that is my first equation. On the right-hand side, psi_HF is orthogonal to every psi_a^r, so I get only E C_0, just as I should, because j is now my psi_HF.
So I just wanted to make sure you see that these are my H_ij. The second projection is with a singly excited determinant, but since a,r are dummy indices inside the sum, let me label the projecting one j as b,s: another pair of occupied and virtual orbitals, one specific determinant out of the basis. The a,r are dummies here; b,s is any particular choice. You get <psi_b^s|H|psi_HF> C_0 + sum over a,r of <psi_b^s|H|psi_a^r> C_a^r = E C_b^s. That is the reason I wanted to redo it: the symbols are a little different, but you should recognize E C_j; phi_j is nothing but psi_b^s, so you get E times C_b^s, the coefficient of psi_b^s, which was C_j before. That is also easy to see: when you project the right-hand side with psi_b^s, the psi_HF term gives zero, and all the psi_a^r give zero except when a = b and r = s; exactly that one determinant survives, and its coefficient is C_b^s. Is that clear? It is exactly the same problem restated in a slightly different language. These are my equations to solve. And now look at the first equation: the first term is E_HF C_0, and the second term is zero because of Brillouin's theorem. So I have E_HF C_0 = E C_0, where E, or E_0 if you like, is my exact energy within this ansatz. Note that without touching the second equation I already have a trivial result: cancelling C_0 on both sides, my ground-state energy is nothing but the Hartree-Fock energy. So the ground-state energy does not improve at all. I can also show this through the matrix structure, which is where the rest of the story comes in, but clearly, from the first equation alone, because of Brillouin's theorem, E_0 = E_HF.
So at least for the ground state there is no improvement, and that is the reason CIS is never used for the ground state. But what about the other states? It is an eigenvalue problem, so I can get the other, excited states, and those of course come from the second equation. Let me now show this through the matrix structure for CIS. The first element of the Hamiltonian matrix is E_HF. Then the rest of that first row consists of the H_ij between psi_HF and the psi_a^r, and these are all zero: the entire row coupling Hartree-Fock to the singly excited determinants vanishes. Similarly, the first column below E_HF is also zero, since each element there is just the complex conjugate of the corresponding row element. Note the layout: the first row and the first column are labelled by psi_HF, and the remaining rows and columns by the singly excited determinants psi_a^r; I am just arranging the matrix elements, so a given entry is the matrix element between the determinant labelling its row and the one labelling its column, for instance between a particular psi_b^s and psi_HF. The main part, which is new in CIS, is the block of matrix elements of the Hamiltonian between one singly excited determinant and another; those of course survive. Let me call this block H_SS, meaning singles-singles. It is a block, not a single number, because there is a whole set of singly excited determinants on each side. I project with one particular psi_b^s at a time, going through them one by one, to build up the eigenvalue structure. On the coefficient side I have C_0, which is the coefficient I am looking at, and then all the C_a^r, whose number is exactly the dimension of the singles block.
So the matrix acting on the column of coefficients gives E times each coefficient, and depending on which row I take, I get the equation for that particular coefficient. In the usual, more general way of writing it, you collect the eigenvectors for all roots into a unitary matrix, but writing it for one column makes no difference: on the right I have E times C_0 and all the C_a^r. Depending on which row you pick for the matrix multiplication, you get that one equation; I hope all of you are comfortable with matrices, you have to be very familiar with this. We have already discovered that multiplying the first row into the column gives only E C_0 = E_HF C_0, so E_0 is nothing but E_HF: that is the first, trivial equation, and the ground state does not improve. However, if you take the other rows, the second, the third and so on, times the column, you start to get equations which now give you the other roots, because you get a full equation. So essentially I have to diagonalize this matrix, and you can see that it has a very nice structure: the first row and first column are decoupled from the rest. The first diagonal element is E_HF, and the rest of the first row and first column are zero. Such a matrix is called block diagonal; I think we have introduced this before, but let me restate it: it is not a diagonal matrix, it is block diagonal. It is a matrix of two blocks, one the Hartree-Fock block and the other the singles block; between the two blocks it is diagonal, while within the singles block it is of course not diagonal. This eigenvalue problem can therefore be solved easily by diagonalizing the two blocks separately.
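The block structure just described can be sketched numerically (the energies below are made-up numbers; the point is only the zero HF-singles coupling forced by Brillouin's theorem):

```python
import numpy as np

# Hypothetical CIS Hamiltonian: Brillouin's theorem zeroes the coupling
# between the Hartree-Fock determinant and the singles, so the matrix
# is block diagonal.
E_hf = -1.5                      # assumed Hartree-Fock energy
Hss = np.array([[0.5, 0.1],      # assumed singles-singles block
                [0.1, 0.8]])

n = 1 + Hss.shape[0]
H = np.zeros((n, n))
H[0, 0] = E_hf                   # HF block (1 x 1)
H[1:, 1:] = Hss                  # singles block; off-diagonal blocks stay 0

E, C = np.linalg.eigh(H)

# E_hf reappears unchanged among the roots (no ground-state improvement);
# the remaining roots are just the eigenvalues of Hss alone.
assert np.any(np.isclose(E, E_hf))
assert np.allclose(np.linalg.eigvalsh(Hss),
                   [e for e in E if not np.isclose(e, E_hf)])
```

Diagonalizing the full matrix and diagonalizing the two blocks separately give identical spectra, which is exactly why the block structure is worth recognizing.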
For example, take a 4 x 4 matrix consisting of a 2 x 2 block, zeros, and another 2 x 2 block: block 1 and block 2. Instead of diagonalizing the 4 x 4 matrix, you diagonalize the two 2 x 2 blocks separately, which is much simpler. It is very important to know that the CPU time for diagonalization is proportional to n cubed, not linear in n. If it were linear you would say, what do I gain by doing two 2 x 2's instead of one 4 x 4? But unfortunately it is strongly non-linear: up to a prefactor, a 4 x 4 costs 4 cubed = 64, a 2 x 2 costs 2 cubed = 8, so two 2 x 2's cost 16, much less than 64. This is a very important point in all of computation; just remember that the computer time for a diagonalization step goes as n cubed. So a 4 x 4 matrix requires 8 times more computer time than a 2 x 2, because the dimension is 2 times larger and the time scales as 2 cubed. Is that clear? That is the reason the blocked problem is much cheaper than diagonalizing the full 4 x 4. Recognizing a block-diagonal structure of a matrix is therefore a very, very important skill. In fact, this is something you will later learn in the group theory (symmetry) course: we use the symmetry of the basis functions phi_i, phi_j such that in a CI problem the Hamiltonian block-diagonalizes, because the matrix element between two basis functions of different symmetry is zero, just as here it is zero because of Brillouin's theorem.
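The cost argument above is simple enough to check in a couple of lines (a sketch in relative units, ignoring the common prefactor):

```python
# Diagonalization cost scales roughly as n**3, so exploiting a
# block-diagonal structure pays off quickly: two 2x2 blocks cost
# 2 * 2**3 = 16 units against 4**3 = 64 for the full 4x4 matrix.
def diag_cost(n):
    return n ** 3   # relative cost of diagonalizing an n x n matrix

full = diag_cost(4)          # 64
blocked = 2 * diag_cost(2)   # 16
print(full, blocked)         # 64 16: the blocked route is 4x cheaper
```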
So if phi_i has one symmetry and phi_j another (we will not worry here about what that means, irreducible representations and so on), then the matrix element is zero, and once matrix elements are zero you can quite easily get a block-diagonal structure. That is why group theory is so important: it simplifies life. And this is an example of exactly that: because the entire block of psi_a^r does not couple with psi_HF, I get a block-diagonal structure. So I get the ground-state energy directly as the Hartree-Fock energy, and then I just diagonalize H_SS. H_SS is of course not zero; it has a structure, which we will discuss. If you diagonalize it you get the rest of the eigenvalues, and they are your excited states; remember, you get excited states. So CIS cannot be, and is not, used for the ground state, and I hope everybody can now defend why: the ground state does not improve, Hartree-Fock remains Hartree-Fock. But it can be used to get excited states, because the eigenvalues of the H_SS block give you excited states. Of course you may argue that those states are not good, and that is a different matter: I have not included doubles or triples, I have made a very serious truncation, singles only, so the results will not be good. But at least you get results, and in fact in programs like Gaussian or GAMESS, when people want a rough estimate of the excited states of a very large molecule, CIS is what is done. But CIS is never done for the ground state, and that is the physics you should remember: the ground state does not improve, because these coupling terms are zero, and that is because of Brillouin's theorem. If I had used non-Hartree-Fock orbitals it would be a completely different matter, because then there is no Brillouin's theorem.
So, as long as I build my determinants from Hartree-Fock occupied orbitals and virtual orbitals orthogonal to them, I have Brillouin's theorem, and then the ground state will never improve under CIS. To get even a first estimate of the ground-state correlation, however bad, you must have doubles; that is very important to realize, singles alone will not help. The question is whether you want only doubles or singles and doubles; of course singles and doubles will be better than doubles alone, the more the merrier. The orbitals themselves you can get by direct solution of the Fock operator, via the Roothaan equation, or you can use some other set of orbitals that you know to be orthogonal; I do not care how. The Hartree-Fock occupied orbitals are one set, and then you can have a separate set of virtual orbitals; they need not come from the same equation. How you would actually obtain such a set is an interesting question in itself, but if it is simply handed to you, no problem, you can live with that; of course the Roothaan equation is the natural way to get it anyway. So the point is that CIS is very useful for obtaining excited states, and these excited states also obey a variational principle, they are upper bounds, just as the MacDonald theorem says; the MacDonald theorem holds for any linear variation, no matter how many basis functions you use. That is very nice, but we are interested in the ground state, so we have to look at a somewhat better method. Before I do that, though, let me analyze the H_SS block, the block of Hamiltonian matrix elements between one singly excited determinant and another. Essentially this block consists of <psi_b^s|H|psi_a^r> for a,r and b,s which can now be arbitrary.
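Before working through the H_SS examples, the bookkeeping can be sketched as code: a Slater-rule evaluation always starts by counting how many spin orbitals two determinants differ in, and that count selects the rule. This is a minimal sketch, with determinants represented simply as sets of spin-orbital labels (a real code would also track orbital ordering and the resulting sign):

```python
# Count the spin orbitals of det1 that do not appear in det2; this
# count selects the Slater rule: 0 -> rule A (expectation value),
# 1 -> rule B, 2 -> rule C, more than 2 -> the element vanishes.
def n_differences(det1, det2):
    return len(set(det1) - set(det2))

hf = {1, 2}                                # two-electron HF determinant |12>
assert n_differences(hf, hf) == 0          # rule A
assert n_differences(hf, {1, 3}) == 1      # 2 -> 3: rule B (Brillouin: zero vs HF)
assert n_differences({1, 3}, {1, 4}) == 1  # a = b, r != s: still rule B
assert n_differences({1, 3}, {2, 4}) == 2  # a != b, r != s: rule C
assert n_differences({1, 2}, {3, 4}) == 2  # a double difference: rule C
```

Note how two singly excited determinants can differ by at most two spin orbitals, which is exactly the point made below: every element of H_SS survives at the level of the Slater rules.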
Now clearly, at all times you can use the Slater rules. Note that when (a,r) equals (b,s) you use the first rule, rule A, because that is an expectation value, even though the determinant is no longer the Hartree-Fock one; I hope you can do that. For example, take a two-electron problem: my Hartree-Fock determinant is |12>, and the excited determinants are |13>, |14>, |23>, |24>, and so on, depending on how many virtuals there are; let us say I have only two virtual spin orbitals, 3 and 4. Incidentally, I am talking entirely in terms of spin orbitals here. If the singly excited determinant on the left is identical to the one on the right, I simply use my previous rule A. We always used rule A for |12>, but do not get confused: rule A is general, it refers to whatever orbitals are occupied in the determinant. So for |13> the one-electron part becomes h_11 + h_33, and then you have the antisymmetrized two-electron part, which you can write down very easily as a sum over i < j of <ij||ij>. Later, of course, when you do the spin integration, depending on whether each spin orbital carries up or down spin, you decide whether you get Coulomb and exchange or only Coulomb: I hope you remember that for parallel spins you have Coulomb as well as exchange, while for antiparallel spins you have only Coulomb. Please revise all of this for the end, the entire thing will come back; it almost seems like ages since we did it, though we covered it back in lectures 4 to 5, for those who attended. The Coulomb and exchange integrals we usually speak of are in terms of spatial orbitals and appear at the spin-integration stage. Right now I have the general form (1/2) sum over i,j of <ij||ij>, which can be written as sum over i < j of <ij||ij>. In this two-electron case it is very easy, because there is only one term: for |13> the two-electron part is just <13||13>, nothing else; the factor of 1/2 takes care of the <31||31> term which I did not write, and <11||11> and <33||33> are zero. So that is the diagonal, just to show you the Slater rules at work. However, there are also elements where (a,r) and (b,s) differ, and there are several possibilities. For example, I can have a coupling between |13> and |23>: for this I use rule B, because they differ in only one occupied spin orbital, and the result is simply <1|h|2> plus one two-electron term. On the other hand, I can have |13> and |24>: then I must use rule C, because both spin orbitals are different, it is effectively a double difference, and only one two-electron integral survives between |13> and |24>. So even to construct H_SS you have to be careful about which rules to apply and how. Do not just start writing the Hartree-Fock formula from memory; please understand the formula, understand which orbitals are involved, and then write it down. For example, suppose I have to write <13|H|24>, where both |13> and |24> are determinants, part of the singly excited set (just to make sure you understand, these are all determinants). What will the result be? There is no one-electron integral, only a two-electron integral: <13||24>, which written out in ordinary integrals is <13|24> minus <13|42>. Then, if I tell you the spins, you have to do the spin integration; all of that comes after. So please understand the steps: right now I am writing generically in terms of spin orbitals, and then of course I will specify the spin orbitals in terms of space orbitals with alpha and beta, and that is what lets you do the spin integration; you cannot spin integrate unless you know the spins. So this is how it goes: you apply rule A, rule B, or rule C depending on the two determinants. H_SS will of course not be diagonal, or even block diagonal. There may well be lots of elements which are zero, but they are hard to identify a priori, and that is because of symmetry. Otherwise, look: what is the maximum difference between a psi_a^r and a psi_b^s? Two. It cannot be more, for any number of electrons, 10 or 20, it does not matter, because the rest of the spin orbitals are the same: in one determinant a has already been replaced by r, so r is there, and in the other b has been replaced by s. The interesting case is a = b with r not equal to s: for example |13> and |14>, where in both determinants orbital 2 has been replaced, by 3 in one and by 4 in the other, so the difference is only one. If a is not equal to b and r is not equal to s, the difference is two, but it is never more than two. So by the Slater rules every element of the singles-singles block survives; that is actually a very interesting feature of this block. In practice, however, the matrix becomes sparse because of spatial symmetry, which is a separate point that we are not discussing here, part of the group theory course: certain integrals, like <13|23> or <13|24>, would normally survive by the Slater rules, yet may still turn out to be zero when you actually calculate them, because of the symmetries of orbitals 1, 2, 3, 4. That cannot be predicted by the Slater rules; it comes from symmetry. So please understand that there are different levels: the Slater rules
directly say that certain things are zero: triple, quadruple and higher occupancy differences all vanish, and a single occupancy difference with respect to Hartree-Fock vanishes because of Brillouin's theorem, but only from Hartree-Fock to singles, not from one singly excited determinant to another. So do not misuse Brillouin's theorem here: I know a lot of people will say <12|H|13> is zero and then also claim <13|H|23> is zero; the first is correct, the second is not, because |13> is no longer the Hartree-Fock determinant. Brillouin's theorem applies only when one of the two determinants is the Hartree-Fock determinant and the other is singly excited; that element is zero. So be careful when you apply Brillouin's theorem. In principle, then, the singles-singles elements all survive, but because of spatial symmetry, when you actually calculate an integral it may turn out to be zero, so a lot of sparsity appears in the matrix. I hope all of you know what a sparse matrix is: one in which most, or at least many, of the elements are zero. The sparsity index is very important: the sparser the matrix, the better for me, but you must know how to exploit it, because normal algorithms will not work well; you do not want to process zeros, you want to process only the non-zero elements. This is a very important technique in CI: how to write an eigenvalue solution involving only the non-zero elements. If you write a matrix-times-matrix product as an ordinary loop-driven code, that will not do, because the matrices are sparse. So the sparsity is a good thing, even for the singles-singles block, and later on you will see much sparser matrices as you go further, because of the Slater rules. All right, so the point I am making is that for the ground state CIS is not useful, and even for excited states CIS gives results which will obviously not be very good, but they are reasonable.
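The idea of processing only non-zero elements can be sketched in a few lines. This is a minimal illustration, not a real CI code: the non-zeros of a hypothetical small matrix are stored in a dictionary, and the matrix-vector product (the kernel that iterative eigensolvers such as Davidson's method repeat many times) never touches a zero:

```python
# Sparse matrix-vector product over stored non-zeros only.
# The matrix is kept as a {(row, col): value} dictionary.
def sparse_matvec(nonzeros, x):
    y = [0.0] * len(x)
    for (i, j), h in nonzeros.items():
        y[i] += h * x[j]      # only non-zero elements are processed
    return y

# Hypothetical 3x3 "Hamiltonian" with only 5 non-zero elements
# (a symmetric matrix, so (0,2) and (2,0) both appear).
H = {(0, 0): 2.0, (1, 1): 3.0, (2, 2): 4.0, (0, 2): 0.5, (2, 0): 0.5}
print(sparse_matvec(H, [1.0, 1.0, 1.0]))  # [2.5, 3.0, 4.5]
```

The work here scales with the number of non-zeros, not with the full dimension squared, which is exactly why sparsity matters so much once the CI matrix dimension explodes.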