I hope everybody had a nice holiday weekend and got rested up for the sprint to the finish. We're going to have a full schedule this week and then also next, for the first time in, I guess, three weeks. As you may recall, last week before we left for the break, we were talking about various kinds of matrix manipulations and some matrix mathematics. So today we'll start by finishing up the background on matrix manipulations, and in particular we're going to talk about a very important class of matrix equations called eigenvalue equations. I'm going to start by sketching the problem on the board, and then we'll explore it using Mathematica. Okay, so eigenvalue equations are ones for which we have a matrix, which I'll call capital A and denote with double squiggles. I'm going to assume this is a square matrix with little n rows and little n columns. Now, if we multiply that by an n-dimensional vector x, that is, a vector with n rows and one column, and the result can be written as a constant, which I'll call lambda, times the vector itself, A x = lambda x, then this is what's called an eigenvalue equation, and lambda is what's called the eigenvalue. Now, these types of equations arise in many, many places in physics and chemistry. Those of you who are in Chem 131 right now know about eigenvalue problems in the context of the Schrödinger equation, and what you're going to learn about in Chem 131B is a matrix formulation of quantum mechanics in which the problem, say, of determining the electronic structure of a molecule and the energies of its orbitals can be expressed in this form. In fact, we're going to do an example shortly, probably tomorrow, from the orbital theory of aromatic molecules, and that type of problem is one you'll cover in some detail in physical chemistry.
But the bottom line is that there are many, many problems that can be expressed in this form, so it's useful to understand how you determine these values lambda, because that's one of the quantities you're typically interested in, along with the vectors x, which are called eigenvectors. Now, a quick crash course on what one actually does to solve this problem. First, we can rewrite the equation as (A minus lambda times the identity matrix) times the eigenvector equals zero. All I did was move the lambda x over to the left side; lambda times the n-by-n identity matrix is just a matrix whose diagonal elements are all lambda. Now we want to solve this equation for the lambdas. One possible solution is that x is a vector of all zeros, and that's what we call the trivial solution, because it's not interesting. For a non-trivial solution, the condition that determines the lambdas is that the determinant of A minus lambda times the identity matrix is equal to zero. It may not be obvious, but when you actually form that determinant, this condition gives you an nth-order polynomial in lambda, and therefore n roots, which we can label lambda i, i equals 1 to n. So that's essentially how you determine the lambdas. To determine the eigenvectors, the various x's, and there are going to be n of them, one for each lambda, we need to do a little more work.
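To see that recipe numerically, here is a small sketch in Python/NumPy (an alternative tool; the lecture itself uses Mathematica, and the sample matrix below is just for illustration). `np.poly` builds the characteristic polynomial of a matrix, `np.roots` finds its n roots, and those roots are exactly the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A):
# for this 2x2 matrix, lambda^2 - 5*lambda - 2.
coeffs = np.poly(A)

# The n roots of the secular equation...
roots = np.sort(np.roots(coeffs))

# ...match the eigenvalues computed directly.
eigs = np.sort(np.linalg.eigvals(A))
print(roots, eigs)
```

For a 2-by-2 this is a quadratic; for an n-by-n matrix the same call hands back an nth-order polynomial, which is why solving by hand gets unpleasant quickly.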
So, recalling that there's one of these equations for each i, we can collect them all together by writing A times a matrix capital X equals capital X times a matrix capital Lambda. This X is a matrix formed by taking each of the eigenvectors x1, x2, dot dot dot, xn, each n long, and putting them in as columns. So we've formed a square matrix containing all the x's that satisfy the eigenvalue equation. And capital Lambda is a diagonal n-by-n matrix whose diagonal elements are the lambda i. We'll see this explicitly in a couple of minutes. This now provides the basis for determining the x's, and here's how we do it. First, we multiply both sides on the left by the inverse of X. If we do that, we get X inverse A X on the left, and on the right X inverse X Lambda, and since X inverse times X is just the identity matrix, that gives us Lambda. So here's the punch line for how you determine the x's: you want to find a matrix X such that sandwiching A between X inverse and X creates a diagonal matrix. In the language of eigenvalue problems, the matrix of eigenvectors X is the one that diagonalizes the matrix A; that's the language we use. And this particular process of diagonalizing A by sandwiching it between X inverse and X has a name: it's called a similarity transform. So that recapitulates how we solve eigenvalue problems, and you'll see we don't actually have to solve them by hand, because Mathematica will do it for us easily.
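That bookkeeping, A X = X Lambda with the eigenvectors packed in as columns, is easy to check numerically. Here is a sketch in Python/NumPy with a sample 2-by-2 matrix (note that `np.linalg.eig` already hands back the eigenvectors as the columns of a matrix, which is exactly the X described here):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# eig returns the eigenvalues and a matrix X whose columns
# are the corresponding eigenvectors.
lam, X = np.linalg.eig(A)
Lam = np.diag(lam)          # the diagonal matrix capital Lambda

# A . X == X . Lambda: one eigenvalue equation per column.
print(np.allclose(A @ X, X @ Lam))   # True
```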
But just to give you the background: we have a general problem like this, which appears in many places you'll see in physical chemistry. You solve this polynomial equation and you get the lambdas, and then you find the matrix X that diagonalizes A, and its columns are the eigenvectors x1 through xn. So let me give you one quick preview of what we're going to do, probably tomorrow. In that problem, A is going to be a Hamiltonian matrix, the x's are going to be wave functions, orbital coefficients specifically, and the lambdas are going to be orbital energies. So you can write the quantum mechanical equations for the energies and orbitals of a molecule in matrix form. In fact, when you use Spartan, and all of you have used Spartan, I presume, you're essentially solving such an equation on the computer. We're actually going to do that tomorrow. But for now, I just want to explore some aspects of the eigenvalue problem and show you how to use Mathematica to solve it with some simple examples. So let's go ahead and do that; turn on the screen here. We're going to start really simple, with a little 2-by-2 matrix. I'm going to define the matrix A equals, curly, curly: the first row will be 1, 2 and the second row will be 3, 4. And now we can look at A in matrix form, and there it is. Now, we don't actually have to do this to solve problems, but I want to show you some detail of how it works. That equation over there that says det of A minus lambda times the identity matrix equals 0 is what's called a secular equation. So let's go ahead and see what that looks like for our matrix.
So I'm going to define secular equals Det of A minus lambda times IdentityMatrix[2], which gives us a 2-by-2 matrix with 1s along the diagonal. If I enter that, notice I get a quadratic, a second-order polynomial, in lambda. These secular equations for an n-by-n matrix end up being nth-order polynomials, which we could solve if we want. We'll see in a minute that there's a quicker way to get to the answer, but just to show you how it works, we could say Solve secular equals equals 0, solving for lambda. How many roots should we get? We should get 2, and those are the two eigenvalues that satisfy the equation that the matrix times an eigenvector equals the eigenvalue times that eigenvector. And just for completeness, I'll show you another way you could generate the secular equation in Mathematica. There's a command called CharacteristicPolynomial, a lot of typing, and you give it the matrix and then the variable. It gives exactly the same thing we formed by hand when we asked for the determinant of A minus the eigenvalue times the identity matrix; the command basically just does that. So the point of this is to show you that the solution of the eigenvalue problem for the eigenvalues consists of finding the roots of this secular equation. Now, there's an even easier way to do it, so in general you're not going to do this; it was just for illustrative purposes. If you have a square matrix and you want the eigenvalues, you don't need to go through this business of setting up the secular equation and solving it.
Mathematica has a nice little command that lets you cut to the chase: you can say directly, for example, lambda equals Eigenvalues of A. When you do that, you see that you go straight to the solutions of the secular equation. So when you want to calculate eigenvalues with Mathematica, just use the Eigenvalues command. You may also want the eigenvectors, and the way you get those is to just ask for them: Eigenvectors of A. Now, what should we get here? Let's think about it; anybody want to venture a guess? We have a 2-by-2 matrix, so how many eigenvectors are there? Well, we've seen that there are two eigenvalues, and for each eigenvalue there should be an eigenvector. So we should get two eigenvectors, each of length 2. Correct? So let's see what we get, and what we get is, in fact, a list of two eigenvectors: the first one here, with its first and second elements, and then the second eigenvector here. Now, I personally like to get the eigenvalues and eigenvectors separately, because they tend to be useful when they're separated. But if you want them all in one shot, there's a command called Eigensystem. If you say Eigensystem of A, what you get is the whole collection: first a list of the two eigenvalues and then the two corresponding eigenvectors, with the first eigenvector corresponding to the first eigenvalue and the second to the second. Now, the next thing I want to do is verify the similarity transform. I want to show that if I form a matrix X whose columns are first this eigenvector and then this one, and each of these is a row vector right now, so I'm going to transpose them into columns and pack the two into a matrix I'll call X.
Then we're going to calculate the inverse of X, multiply the inverse of X times our original matrix A and then times the matrix X, and we should get a diagonal matrix whose elements are the eigenvalues, just to show you that it works. So here's how I form the matrix X: X equals Transpose of the Eigenvectors of A. Put a semicolon, and then we'll look at it in MatrixForm just to make sure it looks right. Notice this eigenvector has been transposed into a column and now shows up here: that's x1. And this one has been transposed and put in here: x2. So x1, x2; that's our matrix of eigenvectors. Now let's get the inverse: I'll say Xinv equals Inverse of X. And then we'll say Xinv.A.X, and put that in MatrixForm. The similarity transform tells us that what I should get out is a diagonal matrix whose 1,1 element is the first eigenvalue and whose 2,2 element is the second. And maybe we need to Simplify. Oops. Okay, there you have it: by putting the Simplify in there, we force Mathematica to do some algebra and clean things up. So, as advertised, the 1,1 element is the first eigenvalue and the 2,2 element is the second. You see that it works, and what I hope you see even more is how easy it is when you have a tool like Mathematica. Has anyone in here ever solved an eigenvalue problem by hand? I know a couple of you have had linear algebra. For a 2-by-2 it's not so bad, but it quickly becomes very unpleasant as the matrices get bigger and bigger. So this is kind of nice, huh? That's a simple illustrative example using a 2-by-2. The next thing I want to do is a slightly more complicated example with a 3-by-3, just to walk through it one more time so you see another example.
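For reference, here is that whole 2-by-2 similarity-transform check as a Python/NumPy sketch (one difference in conventions: NumPy's `eig` already returns the eigenvectors as columns, so there is no Transpose step; and zeroing tiny floating-point residue plays roughly the role that Chop plays in Mathematica):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

lam, X = np.linalg.eig(A)      # columns of X are the eigenvectors
Xinv = np.linalg.inv(X)

D = Xinv @ A @ X               # the similarity transform

# Floating point can leave tiny off-diagonal residue; zeroing it
# is a rough analog of Mathematica's Chop.
D = np.where(np.abs(D) < 1e-10, 0.0, D)
print(D)   # a diagonal matrix whose entries are the eigenvalues
```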
And then we'll have covered all the material you need to do your homework, so I'll go over the homework assignment with you. Now I'm going to define a 3-by-3 matrix, which I'll call A again: A equals, curly, curly, first row 1, 2, 3; second row 2, 2, 2; third row 4, 3, 3. Let's have a quick look here. By the way, I want to point out a very common error that can be a little difficult to track down when you're working with matrices. I don't know if you've noticed, but when I enter matrices, I enter the matrix and then I look at it in MatrixForm. Let's do that here. There it is: this defines the matrix properly, and this lets me see it in matrix form. Once I've got the matrix in there, I can do stuff with it. For example, I can say give me the Det of A, and I get a number, as I should. Now, what you should not do is define a matrix in MatrixForm. See the difference here? Here I'm actually setting A equal to the MatrixForm of the matrix, and that's a graphical object, not a mathematical object, in Mathematica. So watch what happens if we do that: it looks fine, but now if I try to get the determinant, I'm asking for the determinant of a graphical object, which doesn't make any sense. So this is just a warning to be careful. I think it's always nice to check that you entered your matrix properly by looking at it in this format, but always do it after you define it. Don't set anything equal to a MatrixForm; otherwise it's unusable for further calculations. It's a common slip-up, and when I was working on the solutions to the homework, I did it myself, so just be aware of it. Okay, so let's go back, clean this up, and put it back in matrix form.
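A loose Python analog of this slip-up, for those following along in NumPy, is binding a name to a printable representation of the matrix instead of the matrix itself: both print the same way, but only one of them is an object you can compute with.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 2.0],
              [4.0, 3.0, 3.0]])
print(np.linalg.det(A))        # fine: we get a number

A_bad = str(A)                 # looks identical when printed...
try:
    np.linalg.det(A_bad)       # ...but it's text, not a matrix
except Exception as e:
    print("error:", type(e).__name__)
```

The fix is the same as in the lecture: compute with the matrix object, and use the pretty-printed form only for looking.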
Now I'm going to go ahead and directly get the eigenvalues; we'll set them equal to lambda, which is a common notation. Let me make sure I enter this. Okay, and notice that I get something interesting. It kind of looks weird, and it looks weird because there's no neat exact form for the roots here, since this is a cubic equation, or at least Mathematica isn't able to spit one out. That's not a problem if you don't mind working with numbers. So instead of saying lambda equals the Eigenvalues of A, I'll just put the N command out in front and get numbers. I should get three numbers, three eigenvalues, because I have a 3-by-3 matrix, and now I get the three numbers. Now let's have a look at the eigenvectors. You see that also looks like garbage, so we can put the N around it here, and we get, as expected, a list of three vectors, each with three elements, because we're working with a 3-by-3 matrix. So if we were interested in solving the eigenvalue problem for the eigenvalues and eigenvectors of this matrix, we're done. See how easy it is? But again, just for fun and for practice, let's use these results to verify that the similarity transform as written over there works. Once again, I'll say X equals Transpose of the eigenvectors, and I'll leave the N in there so I get nice numbers, with a semicolon, and then I'll look at X in MatrixForm. There we have our matrix of eigenvectors: x1 corresponding to the first eigenvalue, x2 to the second, and x3 to the third. We say Xinv equals Inverse of X, semicolon. And finally, let's check the similarity transform, Xinv.A.X, and put it in MatrixForm. Well, we'll get to that in a second.
And what we see is that we don't quite get a diagonal matrix of eigenvalues. The eigenvalues are in fact along the diagonal, but there are these other small numbers in there polluting our beautiful diagonal matrix. These are likely just due to numerical imprecision, and we can get rid of them with something we've seen before, the Chop wrapper, which eliminates numbers Mathematica thinks are just numerical noise. So we say give me Chop of that, and now you see that we've got a nice clean diagonal matrix whose diagonal elements are the eigenvalues, once again showing that X is the matrix of eigenvectors and the similarity transform is correct. So there's your introduction to solving eigenvalue problems using Mathematica. Those of you who haven't had a course in matrix mathematics, linear algebra, may not be able to appreciate what a wonderful simplification this is, what a nice tool, but I assure you that it is. Now let's have a quick look at the homework assignment, since we now know everything we need to do it. The first problem involves calculations with vectors. What I've given you in this table are Cartesian coordinates for the three atoms in the molecule NOCl: the x, y, and z coordinates of nitrogen, oxygen, and chlorine. Now, each set of three coordinates defines a position vector, so for example I could say RN equals 0, 0, 0, and RO equals that, and RCl equals that. What I want you to do here is calculate the bond length of the N-O bond from these coordinates. That amounts to the distance between the atoms, which is the magnitude of the vector pointing from N to O, that is, the magnitude of the difference between RO and RN. So you're going to enter RN, RO, RCl.
You're going to calculate the difference between RO and RN and get its magnitude, which is the length of the N-O bond, and then you're going to do the same thing for the N-Cl bond. And finally you're going to calculate the bond angle, which in terms of these vectors is given here; it's just a rearrangement of the relationship between the dot product and the cosine of the angle between two vectors. Everybody understand what I'm asking for there? Okay, the next problem is a very simple one, just to get a little practice. In physics, as you probably know, the angular momentum, indicated as the vector L here, is the cross product of the position and the momentum. So if you define r equal to x, y, z and the momentum equal to momentum in x, momentum in y, momentum in z, then L will have components that I want you to evaluate. Basically, I just want you to enter these two vectors using some reasonable notation and take the cross product to see what the components look like. So that's a very straightforward problem. Next, I want you to explore some relationships from linear algebra concerning matrices. We have two matrices here; I want you to enter them, then calculate AB and BA and look at them. You'll see that they're not the same: these are not symmetric matrices, and they don't commute. That's a reminder that, in general, matrix multiplication is not commutative, which is a useful thing to know. Here's an interesting one: if you take the product of A times B and take its determinant, that is actually equal to the determinant of A times the determinant of B. Is that ringing a bell for those of you who took linear algebra? And then you're going to calculate the inverse and show that the inverse of A times A is equal to the identity matrix, and A times the inverse of A is equal to the identity matrix.
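To show the shape of these homework calculations, here is a sketch in Python/NumPy. The atomic coordinates, the r and p components, and the two matrices below are all made up for illustration; the real inputs come from the table and matrices in the assignment.

```python
import numpy as np

# --- Bond lengths and a bond angle (illustrative coordinates only). ---
rN  = np.array([0.0, 0.0, 0.0])
rO  = np.array([0.0, 0.0, 1.14])
rCl = np.array([1.69, 0.0, -0.55])

d_NO = np.linalg.norm(rO - rN)       # bond length = |rO - rN|

u, v = rO - rN, rCl - rN             # vectors from N to its neighbors
angle = np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

# --- Angular momentum as a cross product, L = r x p. ---
r = np.array([1.0, 2.0, 3.0])        # made-up position
p = np.array([0.5, -1.0, 2.0])       # made-up momentum
L = np.cross(r, p)

# --- Matrix relationships: AB != BA in general, but
#     det(AB) = det(A) det(B), and inv(A) A = A inv(A) = I. ---
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 4.0], [2.0, 1.0]])

print(np.allclose(A @ B, B @ A))                        # False for these two
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
Ainv = np.linalg.inv(A)
print(np.allclose(Ainv @ A, np.eye(2)), np.allclose(A @ Ainv, np.eye(2)))
```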
Here's another interesting one: the determinant of the inverse happens to be 1 over the determinant of the matrix, and I want you to show that too. And that's it for that problem. Next, I want you to put in this matrix, which has a variable x in it, and just use the Solve command to solve the equation determinant of A equals 0 for x. This is a 4-by-4 matrix, so there should be 4 roots, and you'll get them; once you've entered the matrix, the rest should be one line. And finally, I want you to enter this matrix and do what we've already done twice here today: find the eigenvalues and the eigenvectors, and verify the similarity transform. This will get you nice and warmed up for what I hope will be a fun and interesting problem in next week's assignment, where we're actually going to use what we've learned here to solve for the orbitals of a conjugated organic molecule. All right, let's see; we've got a few more minutes, so I'm going to go ahead and start a few miscellaneous examples where we can use some of the things we've learned about vectors, vector analysis, and matrices that are relevant to things you will see in physical chemistry. The first one is more for your entertainment; the second example will actually be relevant to a homework problem. I want to revisit the VectorAnalysis package and show you how you can get access to alternate coordinate systems, in particular the spherical coordinate system, which shows up a lot in your physical chemistry course. So let's go ahead and load the VectorAnalysis package: that's two less-than signs, VectorAnalysis, and a backquote, then enter. And now I'm going to change my coordinate system, which by default is Cartesian. So I'm going to say SetCoordinates; oops, capital S.
And the system we're going to use now is called Spherical; those are the spherical polar coordinates that Mathematica knows about. And now I'm going to define the symbols I'm going to use for the three coordinates. Actually, let me go here first to show you what we're doing. This is the coordinate system where we go from the Cartesian x, y, z to one distance and two angles. r here is the length of a vector pointing from the origin to the point of interest. And then we define two angles. One is the polar angle, theta, which tells us the orientation of that vector with respect to the z-axis. And then we have the other angle, the azimuthal angle, phi, which tells us the angle between the x-axis and the projection of r onto the x-y plane. Those of you who are in Chem 131 are familiar with this coordinate system, because it's a convenient coordinate system for solving the quantum mechanical problem of an electron orbiting a nucleus, right? And r, theta, and phi are the standard notation, so that's what we're going to use here: we define our spherical coordinates to be r, theta, and phi. If I enter that, it just confirms r, theta, and phi. And if I ever want to know what coordinate system I've set, I can just say CoordinateSystem, and it tells me I'm in spherical coordinates. If I want to know the ranges over which those coordinates are defined, I can ask with CoordinateRanges, and if I leave a blank bracket, it gives me all three. It tells me that r ranges from 0 to infinity, theta from 0 to pi, and phi from minus pi to pi. So phi covers the whole circle and theta the semicircle. Now, another useful thing you can do is ask what r, theta, and phi are in terms of the Cartesian coordinates.
So I can say give me CoordinatesFromCartesian, bracket, and then list what I'm calling the Cartesian coordinates: x, y, and z. And I can see here that I'm going to have to Clear x first, so let's go ahead and do that. Now let's see what this gives us. Those of you who've studied the spherical coordinate system before probably recognize these formulas. This is the definition of r in terms of the Cartesian coordinates: it's just the square root of x squared plus y squared plus z squared. This tells you how to calculate theta from the Cartesian coordinates of a point, and then this is phi. Does that look familiar to anybody? It should, at least to those of you in Chem 131. Now, what if you wanted to know how the Cartesian coordinates are defined in terms of the spherical coordinates? The way you do that is to say CoordinatesToCartesian, and this should also give something familiar. We put in our spherical coordinates r, theta, and phi, and it tells us how x, y, and z in the Cartesian coordinates are defined in terms of them. Again, these are formulas that should look familiar to some of you: x is defined in terms of r, theta, and phi as so, and then y and z. Now, just to finish up today: we talked last week about vector derivatives, and I'm not going to go through every possible one of them, but I'll show you a couple that are somewhat interesting, especially if you've studied the hydrogen atom in your Chem 131 class. The first is that I can ask for the formula for the gradient of a scalar function of r, theta, and phi. If you enter that, then you get the following. Let me just remind you what this notation means.
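The two conversions can also be written out as plain functions. Here is a Python sketch using the same convention as the lecture (theta measured down from the z-axis over 0 to pi, phi measured in the x-y plane over minus pi to pi); a round trip should land you back where you started:

```python
import math

def to_spherical(x, y, z):
    """Cartesian (x, y, z) -> spherical (r, theta, phi)."""
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(z / r)        # polar angle, 0..pi
    phi = math.atan2(y, x)          # azimuthal angle, -pi..pi
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian (x, y, z)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# Round trip: back where we started.
print(to_cartesian(*to_spherical(1.0, 2.0, 2.0)))
```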
So, remember that the gradient takes a scalar function and gives us a vector whose components are the derivatives of the scalar function with respect to the three coordinates. In Cartesian coordinates it was simple: the first component was just the derivative of the function with respect to x, the second the derivative with respect to y, and the third with respect to z. Notice that when you go to alternate coordinate systems such as spherical coordinates, the formulas are a bit more complicated. The component of the gradient in the r direction is simple: it's just the first derivative of f with respect to r. But the theta component is the first derivative of f with respect to theta, divided by r. And the phi component is the first derivative of f with respect to phi, divided by r, times the cosecant of theta. And then those of you who are in Chem 131A probably saw, or should have seen, the formula for the Laplacian in spherical coordinates; that's del dot del. So we can say give me the Laplacian of f of r, theta, and phi. This is going to give us a scalar result, and as some of you have already seen, it looks fairly complicated; in fact, let's Simplify a bit here. In any case, notice it's just a single element, so it's a scalar function. Here's the part that has the second derivative with respect to phi in it, here's the part with the second derivative with respect to theta, here's the part with the second derivative with respect to r, and then there's another part with a first derivative with respect to r. So it looks kind of nasty, but those of you who are in Chem 131A know that there are certain advantages to switching from Cartesian coordinates to spherical polar when you want to talk about the solution of, say, the hydrogen atom electronic structure problem. Okay, it looks like we're about out of time for today.
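One way to convince yourself the spherical Laplacian formula is right is to test it on a purely radial function, where only the two r terms survive. Here is a small symbolic sketch using SymPy (an assumption that you have SymPy; Mathematica can do the same check): for f = 1/r, the radial part, the second derivative in r plus 2/r times the first derivative, comes out to zero, and differentiating directly in Cartesian coordinates agrees.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

f = 1 / r   # a purely radial function

# Laplacian computed directly in Cartesian coordinates...
lap = sum(sp.diff(f, v, 2) for v in (x, y, z))

# ...simplifies to zero, matching the radial terms of the
# spherical formula: d2f/dr2 + (2/r) df/dr = 0 for f = 1/r.
print(sp.simplify(lap))   # 0
```

This is, of course, the same fact that makes 1/r so special in the hydrogen atom problem.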
So next time we're going to see how we can use Mathematica's eigenvalue facilities to actually solve some interesting problems in organic chemistry, namely the electronic structure of conjugated organic molecules. So something to look forward to tomorrow.