So, good morning everyone. I'm not yet starting. Essentially, I will first continue the lecture from last time. I will try to finish, wrap up, and repeat the most important points I have made. And then I've decided to switch the two talks I wanted, I mean the other two lectures. I will present first the lecture on total energies, which more neatly fits into what has been presented yesterday. And that gives us an opportunity to repeat a lot of things again. And I think repetition is good for remembering those things, right? Yes, I think I want to wait another two minutes. Because I know breakfast, I mean it's early, right? 8:30. Yeah, they are still coming, so I will count: if there's nobody coming in for 20 seconds, I will start. One, two, three, four, five, six, seven, eight. Okay, so let's start. So, again, I try to wrap up what I told you last time: the basics of many-body perturbation theory. It can be phrased in many different ways, but essentially it rests on two theorems. One is the Gell-Mann-Low theorem, which essentially connects the Hartree-Fock ground state, or another single-Slater-determinant state, to the interacting ground state. So, this here is the interacting ground state. This is the unperturbed ground state here in this notation. And this is an equation that gives you the ground state energy relative to the unperturbed state of the unperturbed Hamiltonian. And this here is essentially the correlation energy in most cases. To evaluate this, you need the time evolution operator. So, you switch on the perturbation adiabatically at minus infinity. You start with your simple Hamiltonian H0 and then you switch on the many-body Hamiltonian, which is fully switched on at t equal to zero. So, you need this time evolution operator, and that has an extremely simple structure. Essentially, this is the perturbation at time tn.
And here you have the sum over n from zero to infinity of minus i to the power n. And here you have integrals, and in this version here they are time ordered. So, t0 is your starting time, usually minus infinity. And then you have the other times, which are necessarily strictly ordered in a certain sequence. So, they are increasing from the right to the left. Now, if you slot that in, you see essentially that the terms that pop up always involve H1, for instance, at time zero and H1 at other times, right? So, these are the essential terms that will be encountered. You then put in H1 in second quantization, and then you use Wick's theorem to evaluate the encountered vacuum expectation values. So, this here is actually the final state - sorry, these here are actually the initial H0 ground states. So, these are the non-interacting ground states, the Hartree-Fock ground state usually. And then if you plug that in, you see essentially H1, these are the perturbations, and products of H1, and these are essentially the quantities that you need to evaluate. And since this can be rather tedious, it turns out that you can transform these, using Wick's theorem, into so-called Goldstone diagrams - quite simple diagrams. Actually, the rules for these diagrams are: the important ingredient is the time. And then you draw blobs at different times. For instance, this is t0, our starting time, and this is our time t1. So, here in this case we have two time points. Your Coulomb interaction is always a horizontal line. So, it is instantaneous. The Coulomb interaction is instantaneous; it doesn't have any retardation. So, you emit the photon and it's immediately re-absorbed. And now at any of these vertices you have outgoing and incoming lines. They can be going either backwards or forwards in time. So, the only rule really is that there's one arrow pointing away from this vertex and another one pointing in towards this vertex.
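As a reference, the time-ordered expansion of the evolution operator described above can be written compactly as follows (a sketch in standard notation; H1 is the perturbation in the interaction picture):

```latex
U(0, t_0) \;=\; \sum_{n=0}^{\infty} (-i)^n
  \int_{t_0}^{0}\! dt_1 \int_{t_0}^{t_1}\! dt_2 \cdots \int_{t_0}^{t_{n-1}}\! dt_n\;
  H_1(t_1)\, H_1(t_2) \cdots H_1(t_n),
\qquad t_1 \ge t_2 \ge \cdots \ge t_n .
```

The nested integration limits enforce exactly the strict time ordering described in the lecture: the times increase from right to left.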
So, in this case for second order, this is one of the diagrams that is encountered. Second order means you have two Coulomb lines. You could also have one Coulomb line and one of these: this is the potential that you switch off, the Hartree-Fock potential, the effective one-electron potential that you used in your initial unperturbed Hamiltonian H0. That needs to be subtracted in perturbation theory. And from this subtraction you get these one-electron terms. And they also contribute to the order. So, they also count in terms of the order. So, this diagram here essentially is the potential that you switch off, the kind of initial potential. And this potential also observes the rule that you have one outgoing and one incoming line. It could be something like this. Now the only thing you need to do is, you know, draw all conceivable diagrams that you can come up with using these kinds of rules. There's not much more. And then once you have done this, you can convert these diagrams into algebraic equations. So, this is really a way of bookkeeping, because it becomes rather tedious to do this Wick's theorem business algebraically. You rather do it using these Goldstone diagrams. Again, the order is given by the number of Coulomb lines. So, essentially draw all conceivable closed diagrams; to get the correlation energy the diagrams need to be closed. So, there must not be a dangling open line - that is not legal. Again, there are simple rules to convert this, specifically the rules for the prefactors. They are related to symmetry. There are simple rules for the denominators. The denominators come from the integration over time. Actually, you then need to integrate, for instance, over all time differences, t1 minus t0. So, you integrate over tau, for instance, from the starting point up to t1 minus t0. So, this is the initial time, and you need to integrate over all possible time spans between the initial time and the final time.
This yields a denominator, as you will see. And finally, there are also rules for the sign. The prefactors - I'm not going to discuss this in much detail. Essentially, the Coulomb potential has a factor of one half usually attached to it. If you look back at second quantization, you know that in the Coulomb potential you usually have a factor of one half, yes. And essentially this factor of one half is usually cancelled, or killed, by symmetry, except if the diagram has a left-right symmetry. So, if the diagram has a left-right symmetry, one factor of one half prevails. That's about the only rule about symmetry. Then you see here, you have an integral over different times, t1, t2, t3, and so on up to tn, here in this term, starting with t1. t1 is the last time, the one with the largest time attached. And they are strictly ordered here. And this - I was asked yesterday what's the difference to Feynman diagrams. In Goldstone diagrams, and this is for experts, really, you don't introduce the time-ordering operator. You actually leave this ordering here as it is initially in perturbation theory. You can rewrite this and get rid of this time order here and introduce the time-ordering operator, but then you have to add a factor of one over n factorial, and then you transform these integrals to unconstrained integrals. In that case, you come up with Feynman diagrams, but here we talk about these Goldstone diagrams. So, in the Goldstone diagrams, you put in explicitly a certain order for the integrals. So, the times need to be ordered the way they are here. The next trick is then to transform to the time differences. So, t0 is actually time zero. So, you transform to the time difference between time zero and t1.
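For reference, the Goldstone-to-Feynman transition just mentioned rests on the standard identity (sketch):

```latex
\int_{t_0}^{t}\! dt_1 \int_{t_0}^{t_1}\! dt_2 \cdots \int_{t_0}^{t_{n-1}}\! dt_n\;
  H_1(t_1) \cdots H_1(t_n)
\;=\;
\frac{1}{n!} \int_{t_0}^{t}\! dt_1 \cdots \int_{t_0}^{t}\! dt_n\;
  T\!\left[ H_1(t_1) \cdots H_1(t_n) \right].
```

The factor 1/n! compensates for the n! orderings counted by the unconstrained integrals under the time-ordering operator T.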
Then you introduce the time difference between t1 and t2, and because these are ordered according to this rule here, these are always positive numbers, right? So, you transform to the time differences. And now for all time differences, you need to do an integral between zero and infinity, yeah? So, any of these time differences needs to be integrated from zero to infinity. That's what I told you before here: for second order, you need to integrate the time difference from zero to infinity. For instance, if you have a third order diagram, and I draw one of the many third order diagrams, something like this, you need to integrate over this time difference, t0 minus t1. And you need to integrate over this time difference, t1 minus t2. I hope I got this right. No, yes. t1 minus t2. And for each of those, the time integral runs from zero to infinity. Oops. So, for each of these intervals, you allow the time to vary from zero to infinity. And here as well, from zero to infinity. And what happens then with these exponents? Yes. Actually, each of these lines also has an exponential attached to it. And I've already touched on this. It comes essentially from the interaction picture. So, this guy here describes how an electron will kind of oscillate in the Schrodinger picture in this state a. Nothing really special about this. So, if an electron propagates in this state here, it will have attached this time evolution factor, which is nothing but how a state will oscillate if you solve the time-dependent Schrodinger equation. So, you put in an additional electron into this state a. And that gives you this kind of oscillating factor in time. Well, there should be an i here. I don't know why I removed the i yesterday. That was not very wise. Here as well, there should be e to the i epsilon i t, e to the minus i epsilon a t, e to the i epsilon j t. So, I did this yesterday evening.
And obviously, I was already a little bit tired at the end of this - I was probably already a little bit drunk when I removed the i. So, these are just the time evolution factors, the typical factors you would have in the Schrodinger picture. And then you need to integrate the product of these factors from zero to infinity. And there is a closed equation for this, and it gives you simply this term here. So, if you integrate this term here from zero to infinity - you can look that up in Bronstein or in any mathematical formula collection - you will see this is exactly this kind of denominator, okay? Integrate this, and you get this here. So, this is really nice. So, for any time interval - let's see, a third order diagram here. And actually, this was correct. I just didn't look at it carefully enough. This is a third order diagram. So, for this time difference here, you integrate from zero to infinity. And then you integrate this time difference from zero to infinity. You can attach these exponentials to the lines. And then you will see that for this time interval from zero to infinity, if you do the integral - for the second interval, you have epsilon a, this is this here. Epsilon c, then you have k, epsilon k here. And you have i, this is this line here coming in here. So, you just look at what lines are there: a, i, c, k. And this gives you the denominator. Here, you have lines a, b, j, i. And this gives you this denominator. So, the denominator comes from the integration of the phase factors of the Schrodinger equation from zero to infinity. And that gives you this beautiful and simple expression here. The vertices: any of the Coulomb lines has attached these so-called two-electron four-orbital integrals. And there are exact equations for those, which are very simple. Essentially, this V_{ac,ik} is exactly the Coulomb potential V, evaluated between a and i.
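The elementary integral behind these denominators is the following (with epsilon_a, epsilon_b unoccupied and epsilon_i, epsilon_j occupied one-electron energies, so the exponent decays):

```latex
\int_0^{\infty} e^{-(\epsilon_a + \epsilon_b - \epsilon_i - \epsilon_j)\,\tau}\, d\tau
\;=\;
\frac{1}{\epsilon_a + \epsilon_b - \epsilon_i - \epsilon_j},
\qquad \epsilon_a + \epsilon_b - \epsilon_i - \epsilon_j > 0 .
```

Every independent time difference contributes one such integral, hence one energy denominator per time interval of the diagram.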
So, these two indices are at the same position in space. a and i need to be at the same position in space. So, this gives you, in the direct notation, this term here. And then you have c, k, which will come up here as k, c, in the complex conjugated fashion. So, this is essentially the orbital phi a at position r, complex conjugated. This is the orbital phi i at the position r. And it's not complex conjugated, because of how it comes up in the graph. Divided by r minus r prime. And then you have phi k complex conjugated at the position r prime. And the orbital phi c at the position r prime. And that's, again, not complex conjugated. And you integrate this over r and r prime. Yeah? So, these are your orbitals. And this here is the standard notation for the two-electron four-orbital integrals that is typically used in the quantum chemistry community. So, this is the linked-cluster theorem. I will not go through this, I will just tell you: you're not allowed to include disconnected diagrams. For instance, in fourth order, you might have diagrams like this that are not connected to each other. This is obviously a fourth order diagram. It's completely closed. So, there's one Coulomb line, another Coulomb line, another Coulomb line. But the two pieces fall apart. Yeah? So, they are disconnected. And you are not allowed to include those in the perturbation series. That's what the linked-cluster theorem tells you. Actually, you can show that you would otherwise get an infinite correlation energy, but it drops out against the denominator here. And these two guys cancel exactly. This one I will skip. I give you one example, and I've already shown this yesterday. These are all second order diagrams. So, they have either two blobs - this is the change of the one-electron potential, the one that we switch off. This is essentially our default potential that we switch off.
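Written out, the two-electron four-orbital integral described in words above reads (a sketch matching the index placement given in the lecture):

```latex
V_{ac,ik} \;=\;
\iint
\frac{\phi_a^{*}(\mathbf r)\,\phi_i(\mathbf r)\;
      \phi_k^{*}(\mathbf r')\,\phi_c(\mathbf r')}
     {\lvert \mathbf r - \mathbf r' \rvert}
\, d\mathbf r \, d\mathbf r' .
```

The pair (a, i) shares the coordinate r and the pair (k, c) shares r prime, exactly as stated: one conjugated and one unconjugated orbital at each end of the Coulomb line.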
And then you switch on the exact many-electron potential. This is here. Second order, second order because of one Coulomb line and one blob, and this is also second order. I will come back to this and give you a kind of more intuitive derivation even for those terms - today, in the lecture that starts in a minute. Actually, the quantum chemists tend to evaluate the perturbation theory using these closed rules for the denominators. So, what quantum chemists usually implement are these kinds of perturbational equations. And there's one important thing to remember. In principle, it might often be more elegant, and you will see this later in the talk, to work differently. Here, for instance, this second order diagram can also be evaluated this way. And this here is essentially by the rules, but I haven't yet done the time integral. So, here the time is still there: e to the i epsilon i t, e to the minus i epsilon a t, and so on. So, I have left the time in there, and I haven't yet done the time integral. I've left here the integral from minus infinity to zero - or from zero to infinity, it wouldn't matter. Well, the point here is, in some cases, it's better and computationally more efficient to use this original expression, and it can make the evaluation and the calculation much, much more efficient. This is something quantum chemists hardly ever actually use. And actually, it turns out that these guys here, as I've written them down, are actually the Green's functions of the one-electron Hamiltonian. And I will come back to this a little bit later. And one trick, one good thing to do, is to define these Green's functions, or one-electron propagators. So, these guys, you just define them essentially as Green's functions, and you store them in your computer. And it can possibly be much more efficient than doing it the quantum chemistry way. But I will come to this again a little bit later.
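Schematically, the "keep the time integral" trick amounts to defining imaginary-time one-electron propagators and doing a single tau integral at the end (a sketch; sign and phase conventions vary):

```latex
G^{\mathrm{occ}}(\mathbf r, \mathbf r'; \tau) \;=\; \sum_i \phi_i(\mathbf r)\,\phi_i^{*}(\mathbf r')\, e^{\epsilon_i \tau},
\qquad
G^{\mathrm{unocc}}(\mathbf r, \mathbf r'; \tau) \;=\; \sum_a \phi_a(\mathbf r)\,\phi_a^{*}(\mathbf r')\, e^{-\epsilon_a \tau} .
```

The second-order energy then becomes one integral over products of these stored propagators, rather than an explicit sum over orbital quadruples with energy denominators.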
Again, I was asked yesterday what's the difference between Feynman diagrams and Goldstone diagrams, and I have really no time to go into this. In Goldstone diagrams - I've already mentioned this - you have the time order here in the integrals. So the times that are involved here are strictly ordered. And if you want to go to Feynman diagrams, you give up this particular time order. So you actually have two time integrals, t1 and t3, without any specific time order. And then you need to introduce a time-ordering operator. And if you do that, the rules are a little bit different. Only this one slide I want to show; it will be on the PDFs that you can download. In Feynman diagrams, this is one single diagram. And the point here is that in this case, there's no strict time order. So here you have a strict time order: t3 is first, then t2, and then t1. So there's a strict time order between these. And these three diagrams are distinct in Goldstone theory. If you use Feynman diagrams, only the topology of the diagram matters. And this diagram maps onto this, that, and this diagram. So you have one Feynman diagram to describe three Goldstone diagrams. This is really for the experts and really too much for this lecture. So, any questions? Of course, again, I want to mention you are allowed and welcome to ask any questions during the lecture. So now we move on to the second topic. And I've already told you that I will actually switch the two talks. I will first talk about total energies from many-body perturbation theory. I give you the framework, and I will try, even now, to use the framework in such a manner that I will describe almost everything again. So whenever we encounter something, I will try to give you a kind of intuitive, gut-feeling derivation as well, without invoking the big theorems. But in some cases, that's just very hard. Actually, the Goldstone diagrams are pretty easy to construct.
So, total energies from many-body perturbation theory. Why would you like to do this? Density functional theory, I've told you last time, is kind of the workhorse theory in condensed matter physics and materials science. It describes, however, many properties not correctly. Van der Waals bonding, even covalent bonding, is hardly very precisely described, and strong correlation is out of reach. I want to mention here that the methods I'm talking about are not suitable for strong correlation. That must be very clear. They're totally useless for strong correlation. The reason being that in this case there is no adiabatic connection between the mean-field ground state and the true many-electron ground state. So you cannot use perturbation theory anyway. It's just totally useless. That is actually a strong restriction. It implies you often cannot do bond dissociation. So bond dissociation using these kinds of methods is extremely hard, because in bond dissociation you often change from one Slater determinant to the other. Well, at the transition state, you often switch from one Slater determinant to the other. So you have two mean-field states, but at the transition state, where the barrier is, you often have a mixture of two Slater determinants contributing to the many-electron wave function. That's where, tentatively, these methods often fail. Yes, I will leave that out. Just to give you an idea of how inaccurate density functional theory is for materials modeling, here is an example. We have calculated aluminum metal and combined it with the nitrogen dimer, N2, to form aluminum nitride. And it's well appreciated that this is the experimental value, and with the theory, what you predict with a functional that many people use, the PBE functional, you get a dreadful description. This is not a 5% error. It's like a 20% error in the formation energy.
This generally happens if you take a metal and gas-phase molecules and combine them to form an insulator. So this error is very, very common for almost all these kinds of functionals. Here's another one that is a little bit simpler: magnesium metal plus hydrogen, forming magnesium hydride. Again, the same 20% error. Even for silicon and carbon, if you combine them to form silicon carbide, there's a large error in the predictions. Now, there are functionals that are a little bit better, but there are hardly any fun... Well, you can always find a functional that gives you the right value in density functional theory. That's the rule of thumb. And then you publish in Nature and you are happy because it compares well with the experiment. Here's another issue. If you adsorb carbon monoxide on the rhodium surface, you predict - most likely; I mean, the experimental values are not so certain - a much too large adsorption energy. So there are many properties that most density functionals do not describe well: van der Waals interactions are far from chemical accuracy. Strong correlation we are not going to deal with. Another problem is that in solid state physics, we almost exclusively compare to experiment and we rarely rely on more accurate methods. The problem being that there are no more accurate methods. So in solids, there is no way to get a very highly precise solution of the Schrodinger equation. I know you have had the talks of Ali Alavi here, and he presented to you a method that potentially can be used. And as you will see here, we have used it together with Ali on solids to essentially solve the exact Schrodinger equation. Despite all that, density functional theory is just amazingly successful, and this is from a review article of Kieron Burke. And what you see here is the number of papers that is published. And I think he actually was here. The total number of DFT papers published is probably some four times larger.
But he specifically looked for papers that have in the abstract either the keyword PBE - this is the functional of Perdew, Burke, and Ernzerhof, a functional he himself co-created - or B3LYP, which is a functional that goes back to Becke. And what you see here is that in chemistry there's a kind of saturation, but in materials science it still picks up. And this is also from his review. There's no simple rule for how to improve the functionals. There's no simple rule whether DFT will give you an accurate result, because we cannot compare to anything but experiments. There are too many functionals to choose from, so you can choose the right functional and get the right materials properties. And it can be learned only from DFT gurus, supposedly. So the point here is: there's no DFT functional in sight that will serve all needs, yeah. There's a huge number of functionals, but there's not one functional that will always work. So we have been working on this issue since 2005, to find something simpler, or something that can be used as a black box, that really goes beyond DFT. So what I will try now to explain to you is again the quantum chemistry approach, which I have in principle already introduced to you in the last two hours. But I will give you a slightly different view on quantum chemistry, the way quantum chemists usually discuss it. So quantum chemists don't tend to use this Gell-Mann-Low theorem. They tend to use a different route, and that's often how it is in physics and chemistry: there are many different ways to get to the same result, yeah. So there's not one unique way to derive the perturbation series. In fact, Møller and Plesset already derived the perturbation theory without this many-body language, without second quantization. So they did everything without the second quantization. It's just that the second quantization, I believe, is more accurate. No, it's not more accurate, sorry. It's more convenient to use.
It has a nicer interpretation. In particular, if you link it with Goldstone diagrams, it's really nice. And then, after talking about highly accurate methods, I will talk about my favorite approximate method that can be used for materials modeling. This slide I've already shown you. Correlation essentially means that electrons move in a correlated way. So if electron one is over here, electron two will be over here. I will again come back to the van der Waals interaction. I told you, if you have two electrons and two nuclei, the electrons will move in this concerted fashion. And I will give you a little bit more precise view on that, because that was really rather loosely speaking. And all these properties are really tough to describe with classical DFT functionals. And really the core of it is how to describe the many-electron wave function as a function of the difference between two positions, electron coordinate one and electron coordinate two. And there are two important properties. Again, this is the cusp condition. It's kind of the short range correlation effect: two electrons will avoid each other even if there is no exchange. So this is for non-equal spin. If electrons have non-equal spin, you don't have an exchange interaction, because they are already antisymmetric in spin. So they can be symmetric in space. But even then you have a peculiar behavior at short distances, just because of the Coulomb repulsion. Because electrons repel each other through the Coulomb potential. And this, in combination with the kinetic energy, causes the cusp discussed here. So this is the short range correlation effect. And long range correlation means that if one electron sits here, the other one is more likely to be found at a certain specific distance. This is the same as I have here. So if these two electrons are here, this is likely, this is less likely. This gives you this feature here.
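The cusp condition referred to here is, for two electrons of opposite spin, Kato's condition on the pair wave function (standard form):

```latex
\left.\frac{\partial \Psi}{\partial r_{12}}\right|_{r_{12}=0}
\;=\; \frac{1}{2}\,\Psi(r_{12}=0),
\qquad
\Psi \;\sim\; \Psi(0)\left(1 + \tfrac{1}{2}\, r_{12} + \dots\right),
```

so the wave function is linear, not smooth, in the interelectronic distance r12 at coalescence.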
So the correlation energy in quantum chemistry is defined as the difference between the exact energy and the Hartree-Fock energy. It's often obtained by order-by-order perturbation theory. And we are not going to do order-by-order perturbation theory. One main reason is that order-by-order perturbation theory often diverges. What we are instead going to do is sum a certain subclass of diagrams and hope that this will converge. And this is rather intriguing, but it's well illustrated by this equation here. Imagine that you sum this Taylor expansion here for different values of x. This Taylor expansion has a convergence radius of one. So if x exceeds one, this is just going to diverge, obviously. Interestingly, you still know an analytic equation that will actually give you a good value. This is the logarithm. The logarithm of 1 plus x is exactly this Taylor expansion. And you all know you can type it into your calculator. The Taylor series at x equal to 1.2 will not converge. But the logarithm of 1 plus 1.2 is obviously perfectly well defined. And this one here will not even converge if you put in minus 1.2 squared - at least naively. And that's what happens actually: standard perturbation theory would evaluate those terms order by order, and this would give you a divergent series, whereas the logarithm is a closed equation that converges. And you can use these tricks even for perturbation theory to re-sum the diagrams. So how do quantum chemists do this? What they do usually is they calculate the ground state orbitals. These are the so-called Hartree-Fock orbitals. Here we have eight electrons, and the corresponding one-electron orbitals are indicated by these red lines. So we have eight electrons which occupy the eight lowest states.
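This divergence-versus-resummation point is easy to check numerically. A minimal sketch using the Taylor series of ln(1+x) (the helper name `log1p_partial_sum` is made up for illustration):

```python
import math

def log1p_partial_sum(x, n_terms):
    """Partial sum of the Taylor series ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

# Inside the convergence radius |x| < 1, the partial sums converge nicely ...
converged = abs(log1p_partial_sum(0.5, 50) - math.log(1.5)) < 1e-12

# ... but at x = 1.2 they wander ever further away, while the closed form
# log(1 + 1.2) is perfectly well defined.
exact = math.log(1 + 1.2)
err_10 = abs(log1p_partial_sum(1.2, 10) - exact)
err_50 = abs(log1p_partial_sum(1.2, 50) - exact)
diverging = err_50 > err_10
```

Summing term by term is the analogue of order-by-order perturbation theory; evaluating the logarithm directly is the analogue of resumming a class of diagrams into a closed expression.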
Usually, and this is what we hope for, we have a small band gap, and these are now our unoccupied orbitals. And again, I will use i for occupied orbitals and a for unoccupied orbitals. And I've already told you what the quantum chemists usually do. They expand the many-electron wave function into these Slater determinants. The ground state determinant, which is the Hartree-Fock determinant, which is our vacuum state again. This is a single excitation, because you have removed one electron here and put it up here. And this is a double excitation: you've removed two electrons here and put them up here. The ground state orbitals can be obtained either from Kohn-Sham calculations or from Hartree-Fock calculations. As a rule of thumb, quantum chemists start from the Hartree-Fock description, which is excellent for molecules. In solids, this is a little bit questionable. In particular, for small band gap systems, Hartree-Fock is probably not always a good starting point. So we would rather start from Kohn-Sham. You know that Kohn-Sham DFT is very good. You've seen, I think, in some talks, for instance the one by Michele Parrinello, how successful Kohn-Sham DFT has been. So there's good reason to start from Kohn-Sham instead of from Hartree-Fock. Actually, Hartree-Fock really fails for many solids. Now, what is the issue? I mean, there are many issues, but one issue I want to mention immediately: there's extremely slow convergence. And this is important if you ever come across a paper done by quantum chemists. It's extremely difficult to converge the correlation energy with respect to the number of unoccupied orbitals. So the number of unoccupied orbitals that you include - this is an added difficulty that you have - needs to be gigantic. And the reason is, and that can be proven on paper, that the correlation energy converges strictly like one over the basis set size, or one over the number of unoccupied orbitals.
That can be proven on paper. So this is important to keep in mind. Whenever you read a paper by quantum chemists, you have to ask: how good was the basis set? Was it good enough to get you actually reasonable numbers? And that makes it so difficult. Nowadays, there are many tricks to deal with this, but the rule - and this also concerns quasi-particle energies, which I'm not talking about now - the rule that you need to keep in mind is that the correlation energy (the correlation energy, not the exchange energy) converges like one over the total number of basis functions, regardless of the basis set. Plane waves, atomic orbitals - it's a strict rule that you can prove on paper. The reason for this is actually this stupid little cusp here. This cusp, in particular for non-equal spin, is difficult to converge. The cusp obviously makes the derivative of the wave function discontinuous. And that is almost impossible to converge, and you can prove on paper that this causes this slow basis set convergence. This has been known for a very long time; Kutzelnigg, I think, was one of the first. We also proved it for some very approximate correlation functionals, and there are a couple of papers by ourselves, but you can probably count hundreds of papers where this issue is discussed. Actually, we had a nice algebraic equation that involves just the charge density. That's quite remarkable. If you work out the equations a little bit, you can show that it is related to the charge density, and this equation needs to be integrated, and then you see again it's one over the basis set size. It's not so important. So, first things first. Is there a way to exactly solve the Schrodinger equation without further approximations? Let's go back here. The problem: so, first of all, basis set convergence is slow.
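As an illustration of how such 1/N behavior is exploited in practice, here is a minimal two-point extrapolation sketch, assuming the model E(N) = E_inf + A/N (the helper name and the synthetic numbers are invented for illustration):

```python
def extrapolate_one_over_n(n1, e1, n2, e2):
    """Two-point extrapolation assuming E(N) = E_inf + A / N (hypothetical helper)."""
    a = (e1 - e2) / (1.0 / n1 - 1.0 / n2)   # solve for the 1/N coefficient A
    return e1 - a / n1                       # E_inf = E(N1) - A / N1

# Synthetic correlation energies that follow the model exactly:
# E_inf = -1.0, A = 3.0 (made-up values)
e_100 = -1.0 + 3.0 / 100
e_200 = -1.0 + 3.0 / 200
e_inf = extrapolate_one_over_n(100, e_100, 200, e_200)   # recovers -1.0
```

Two calculations at different basis-set sizes suffice to eliminate the leading 1/N error, which is why such extrapolations are routine in correlated calculations.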
Second, and equally troublesome, is that the number of excitations that you can have - single excitations, double excitations, triple excitations, quadruple excitations - in this case you need to allow up to eight electrons to be excited from the occupied to the unoccupied orbitals. And if you count the number of coefficients in the wave function that you need to calculate, it comes out, for this simple case: if you have 32 orbitals here, 32 bars, how can you distribute eight red lines onto 32 black bars? Well, this is 32 choose 8, so this gives you these 10 to the 26 coefficients. That's crazy, right? That's not going to work. You cannot store these coefficients. This is again what I told you already. The many-electron Schrodinger equation is essentially unsolvable on any computer - maybe on a quantum computer. But, in practice, you simply cannot store the wave function. So, this is captured here again. There's one method to do it. That's obviously Monte Carlo. And there's one Monte Carlo method that even largely avoids the so-called fermion sign problem. And the idea of this method is extremely simple. So, imagine that you have - I mean, you have been told about this method, so I will try to keep it short, but I will give you my kind of explanation for the method. So, you have 32 bars here. Each of the bars represents one one-electron orbital. And all you need to do is distribute eight ones, or eight red bars, onto these 32 black bars. You can do this by taking a bit string with 32 bits and just putting in eight one-digits into this bit string at any places you want. This guy is the walker that you will use in the Monte Carlo procedure. So, these bit strings are the quantities that you are now allowed to move in the Monte Carlo procedure.
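The combinatorial growth of the coefficient count can be illustrated with Python's `math.comb`. The numbers below are purely illustrative (the toy 32-orbital/8-electron case is still modest; realistic orbital and electron counts push the number far beyond anything storable):

```python
from math import comb

# Coefficient count = number of ways to distribute the electrons over the orbitals.
small = comb(32, 8)      # the 32-bar / 8-electron toy case
large = comb(128, 32)    # a still-small problem: already beyond 10**26 coefficients
```

The binomial coefficient grows roughly exponentially with system size, which is the precise sense in which full configuration interaction is unstorable on any classical computer.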
Or, more precisely, you are allowed to move the ones, the digit one, from one place to a place where there was previously a digit zero. Why do we need only the digits zero and one? Because we have fermions. The only occupations you can have are zero electrons or one electron, right? This is according to the rules for fermions; we discussed this for second quantization, and this is exactly what it implies. So, the only thing you are now allowed to do is move one-digits to places that were previously zero-digits. These are the walkers. You will have many of those walkers, trillions of them, trillions of these 32-bit strings, and you will move the one-digits around until you find some kind of equilibrium population. Actually, on some determinants, like the Hartree-Fock determinant, if you have, let's say, 10 to the 12 walkers, half of the walkers will be sitting on the Hartree-Fock determinant, because it is so important, right? That would be inefficient, therefore you also introduce counters to count the number of walkers. So, one walker consists of a bit string and a counter for how many walkers are present on this particular determinant. Again, because if you have 10 to the 12 walkers, one half of these walkers sit on the Hartree-Fock determinant. So, how do you move the walkers? Well, this is the Schrödinger equation in imaginary time, and it is well known that if you propagate your population density according to the Schrödinger equation in imaginary time, you will finally obtain the ground-state wave function. This scheme, propagation in imaginary time, is used in many Monte Carlo procedures, and it takes you to the ground-state wave function. Actually, this guy here is a control parameter. It turns out at the end of the day to be either the sum of the exchange energy and the correlation energy, or just the correlation energy.
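The bit-string walkers can be sketched in a few lines. A toy illustration, where the move rule is simply "relocate a random one onto a random zero", not the full FCIQMC spawning and death rules:

```python
# A walker is a bit string with exactly k ones (occupied spin-orbitals).
# Moving a walker means relocating a 1 onto a position that held a 0.
import random

N_ORB, N_ELEC = 32, 8

def random_determinant(rng):
    """Pick 8 occupied orbitals out of 32 and encode them as a bit mask."""
    det = 0
    for p in rng.sample(range(N_ORB), N_ELEC):
        det |= 1 << p
    return det

def move(det, rng):
    """Annihilate a random occupied orbital i, create a random empty one a."""
    occ = [p for p in range(N_ORB) if det >> p & 1]
    vir = [p for p in range(N_ORB) if not det >> p & 1]
    i, a = rng.choice(occ), rng.choice(vir)
    return det & ~(1 << i) | (1 << a)

rng = random.Random(1)
det = random_determinant(rng)
det2 = move(det, rng)
```

The electron number (the count of ones) is conserved by construction, which is the fermionic occupation rule the lecturer describes.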
That depends a little bit on definitions, on how you define this here. Anyway, this guy is a control parameter. If it is above the true ground-state energy, the number of walkers will increase; if it is below the true ground-state energy, the number of walkers will decrease. So, you have to adjust it until the walker population is stable, and then you have your correlation energy. This is very simplistic, nothing special. The only thing you need to do is propagate the ones and zeros forward in time. And that you do by evaluating these guys here, the matrix elements of the Hamilton operator between different determinants, essentially. Each of these walkers obviously represents one possible determinant, and there are then simple rules for how to propagate the ones and the zeros from one time step to the next. That is a little bit technical, but not very difficult. Essentially, you just need to evaluate the matrix elements of the Hamiltonian between two possible Slater determinants. It turns out, because the Hamiltonian contains only one- and two-electron operators... well, again, what kind of operators does the Hamiltonian contain? It contains this one, this is a two-electron operator, and this here is a one-electron operator, at least in the usual form. It turns out it is very easy to propagate the ones and the zeros. Good. So, this is our reference method, because it solves the Schrödinger equation exactly, right? The problem with the method, and that is the catch, at least according to Ali Alavi, though some people do disagree with him, I should mention: obviously, the Schrödinger equation, H psi equals E psi, has two solutions, either a positively signed solution or a negatively signed solution. Any eigenvalue equation always has two solutions for the orbitals. Well, actually infinitely many solutions: one positively signed manifold and one negatively signed manifold, right?
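The imaginary-time propagation itself can be demonstrated on a toy problem. A minimal sketch on an invented 2x2 "Hamiltonian" matrix, with a fixed shift and explicit renormalization instead of the walker-population dynamics of the real method:

```python
# Minimal sketch of imaginary-time propagation on a toy 2x2 Hamiltonian.
# Repeatedly applying  psi <- psi - dtau * (H - S) psi  damps out the
# excited state, so the energy estimate converges to the lowest eigenvalue.
# All numbers are illustrative, not from the lecture.

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(H))]

H = [[0.0, -0.2], [-0.2, 1.0]]
psi = [1.0, 0.3]            # arbitrary start with ground-state overlap
dtau, shift = 0.1, 0.0
for _ in range(2000):
    Hpsi = matvec(H, psi)
    psi = [p - dtau * (hp - shift * p) for p, hp in zip(psi, Hpsi)]
    norm = sum(p * p for p in psi) ** 0.5
    psi = [p / norm for p in psi]     # renormalize each step

energy = sum(p * hp for p, hp in zip(psi, matvec(H, psi)))
```

In the real algorithm the shift S is adjusted on the fly to keep the walker population stable, exactly as described above; here we simply renormalize by hand.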
So, if you invert the sign, you have another solution; actually, you can scale your wave function by any factor, but that is not important. There are certainly two possible signs. And what can happen in your simulations is that you have two basins, kind of a phase transition between the phase that is the positively signed solution and another phase that is the negatively signed solution. Then your correlation energy is garbage, then the method does not work. You can imagine this as a graph: you have a basin with the positive-sign solution and a basin with the negative-sign solution, a kind of phase separation. In this case, the quantum Monte Carlo procedure will give you nonsense for the correlation energy, or inaccurate values. And the only way to avoid that is to use many, many walkers. So, the number of walkers still needs to grow, more slowly, but still combinatorially, with the number of electrons and the number of orbitals you have. And that is the problem: you have not solved the fermion sign problem, you have just mitigated it. So, we have an exact method. The catch is what you can actually do with it: nowadays maybe something like 100 orbitals, maybe 50 electrons, so 50 electrons in 100 orbitals. This is the maximum you can do. It is not great, and it is not good enough for solids, that should be clear. Actually, for the model-Hamiltonian community this is pretty large, but for first principles it is totally inadequate, in particular because, as I told you, the basis-set convergence is very slow. So this, in combination with these maximum system sizes, really limits the usefulness. But anyway, it is a means to calculate exact, full-CI correlation energies in solids, if you want, for things that were previously impossible.
So, the alternative we have is perturbation theory, the thing I told you about before, right? Wick's theorem and everything. And let's build up perturbation theory more on the basis of some gut feeling, without too much difficult algebra. What is first-order perturbation theory? Everyone should know this. First-order perturbation theory tells us: you take, in this case, a set of orbitals, for instance DFT orbitals, and then you evaluate your energy with these orbitals. That is what you always do in first-order perturbation theory: you have a wave function, and you plug that wave function into the new functional for which you want to evaluate the energy. That is first-order perturbation theory. It is often written down in this manner here: for an unperturbed wave function psi 0, you just evaluate the expectation value of H0 plus H1. So, this is our perturbation; we keep the wave function fixed and just evaluate. This, well, let's call it delta H1, this is the full Hamiltonian when we have switched on the perturbation, H0 plus H1. So, this is the full Hamiltonian, and if you evaluate the expectation value of this guy, you have essentially first-order perturbation theory. This is really simple. So, if we start from DFT orbitals, for instance, the only thing we need to do is plug the DFT wave function into the many-body Hamiltonian to get the first-order term. And now, I am not going through this in detail, but I am using diagrams here, because Hartree-Fock should be familiar to you. What you get, obviously, is the Hartree-Fock energy, but evaluated with DFT orbitals, right? So, this is exactly the Hartree-Fock energy, with one difference from the usual Hartree-Fock energy: you have to use your DFT orbitals. And the only orbitals that come up here are the occupied orbitals, actually.
So, the sum can be restricted to the occupied orbitals, because only those matter for densities and density matrices. Actually, this involves these two diagrams. Again, I assume that you are familiar with Hartree-Fock, but I will briefly show you the diagrams anyway. This here is the Hartree energy. I have already shown you that these closed lines, these closed loops, involve only sums over occupied states; we come back to this in a few slides. This here actually involves the orbital at position r. Well, I will write it down now. So, this closed line is a sum, and you can't read that, right? So, I need to clean it up a little. This line here, let's think about the rule: it has an incoming vertex here, and it needs to be summed over all occupied states. The incoming vertex means phi i; the outgoing vertex, which is the same as the incoming vertex, gives the complex conjugate. The position here is r. And if you sum this over all occupied states, it is just the charge density n of r. The same on the left side gives you the charge density at r prime, and then you have the Coulomb potential between those two guys. So, this is exactly the Hartree energy; this comes up when you do many-body perturbation theory. And then you also have the exchange diagram, this one here. This involves the density matrix, the Coulomb potential, and again the density matrix. I will come back to this in a minute. So, this is the simplest case. Let's now do second-order perturbation theory and allow for single excitations. We expand the wave function beyond a single Slater determinant, and we allow for a single excitation, where we create a hole here and put the electron up here. Okay, second-order perturbation theory; linear response theory is also quite simple.
Linear response theory: essentially you switch off the mean-field Hamiltonian and switch on the many-body Hamiltonian, right? This is our ground-state wave function, and this is our perturbed wave function, where we have allowed the excitation to occur. This is the ground-state energy, and this is the energy of this Slater determinant. Well, this Slater determinant has a higher energy, because we have created a hole here and put the electron up here, so this costs energy. The energy it costs is the energy of this state, epsilon a minus epsilon i. And this is nothing but second-order perturbation theory, rather straightforward, which you have probably seen; I hope you have seen second-order perturbation theory before. Now, the only thing that is not so trivial is: what is the mean-field Hamiltonian, sorry, what is the many-body Hamiltonian in this case? In principle, we need Wick's theorem and a lot of thinking. But intuitively, this should be something like the Hartree-Fock potential. Why is it the Hartree-Fock potential? Because we keep the orbitals fixed for the time being, and if you do first-order perturbation theory, it suffices to assume that the orbitals are still fixed; then the exact many-body Hamiltonian, evaluated for a single Slater determinant, is just the Hartree-Fock Hamiltonian. So, this guy here, the exact many-body Hamiltonian, just becomes the Hartree-Fock potential. Again, this stems from the fact that for the time being we keep the orbitals fixed and then evaluate the Hamiltonian; for a single Slater determinant, the many-body Hamiltonian is nothing but the Hartree-Fock Hamiltonian. And what we switch off is the Kohn-Sham potential. So, we switch on the Hartree-Fock potential and switch off the Kohn-Sham potential; this is what we subtract. These are the blobs I had before. This is our term.
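Written out, the second-order energy from these singly excited determinants has the familiar Rayleigh-Schrödinger form. Schematically, with the perturbation being the difference between the Hartree-Fock and Kohn-Sham potentials as just described:

```latex
E^{(2)}_{\text{singles}}
  = \sum_{i}^{\text{occ}} \sum_{a}^{\text{unocc}}
    \frac{\bigl|\langle \Psi_i^{a} \,|\, \hat H_1 \,|\, \Psi_0 \rangle\bigr|^{2}}{E_0 - E_i^{a}}
  = -\sum_{i}^{\text{occ}} \sum_{a}^{\text{unocc}}
    \frac{\bigl|\langle \phi_a \,|\, \hat V_{\mathrm{HF}} - \hat V_{\mathrm{KS}} \,|\, \phi_i \rangle\bigr|^{2}}
         {\varepsilon_a - \varepsilon_i}
```

The denominator is exactly the excitation energy epsilon a minus epsilon i discussed above, since the excited determinant lies that much higher than the ground state.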
So, let's look at this diagrammatically, using Goldstone diagrams, and give it some more feeling. These are the diagrams I have drawn before, and this is now my slightly revised drawing, which is a little more precise. So, we switch off the DFT exchange-correlation potential, and I draw this diagrammatically like this. This creates an electron-hole pair; it is exactly this diagram I have drawn here. So, we switch off the Kohn-Sham potential, that is why I put the minus here. We switch it off, and that creates this electron-hole pair, which then propagates in time. And this guy here is what we switch on: the exact exchange potential. Actually, the Hartree term cancels out, because the Hartree term is present both in the local exchange-correlation potential and in Hartree-Fock, so the Hartree term does not pop up. What pops up is: you switch off the DFT exchange, and you switch on the exact exchange potential. Then these holes propagate, and then they are annihilated; this is the second process. This here comes from the integration from zero to infinity, nothing new. If you integrate over all possible time differences from zero to infinity, and you put in your propagators: this electron-hole pair propagates and gives you a phase factor e to the minus (epsilon a minus epsilon i) times t, and if you integrate that from zero to infinity, you just get this denominator, one over epsilon a minus epsilon i. If you integrate over all possible time differences. Now, down here, we also have to put in the difference between the exact exchange potential, this is the exact exchange potential graphically, and the DFT potential, the Kohn-Sham exchange-correlation potential.
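The time integration just described, in one line:

```latex
\int_0^{\infty} e^{-(\varepsilon_a - \varepsilon_i)\,t}\, dt
  \;=\; \frac{1}{\varepsilon_a - \varepsilon_i}
  \qquad (\varepsilon_a > \varepsilon_i)
```

This is where the energy denominators of perturbation theory come from: summing over all possible propagation times of the electron-hole pair.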
And if you do this, it obviously gives four terms: you can have a combination of these two guys, which is this diagram here; a combination of these two guys, which gives you that diagram there; and you can have the cross terms, so it is a total of four terms. So, what happens here is that we replace the DFT potential by the exact exchange potential. That creates particle-hole pairs that propagate and are then annihilated. This is something like a vacuum fluctuation, a little bit; not quite, it is usually not called a vacuum fluctuation. And what it really describes, in terms of physics, is how the orbitals change when we switch from DFT to Hartree-Fock. That is why this is linear response: we switch off DFT, we switch on Hartree-Fock, and it describes exactly how the orbitals change when we switch from DFT to Hartree-Fock. A linear response equation, very trivial. So, these terms describe nothing but how the exchange energy changes when we switch from DFT orbitals to Hartree-Fock orbitals. That is exactly what this term captures. And this term, again, is called the singles in quantum chemistry. Okay, the next term. These are exactly those diagrams we had before; we derived them with Wick's theorem, which is much more rigorous, whereas this was a little bit of hand-waving. Wick's theorem gives you these terms. Actually, the point here is that this blob here contains both the exact exchange and the Hartree term, and this here is the Hartree term, while this term here contains the Hartree term as well, and that drops out in this perturbation series. So not all diagrams that I have drawn here will actually survive. As a brief recap, I want to talk about these lines here again. I know I have done it often enough by now, but I want to repeat it again. What are these lines? These lines are really particle propagators and hole propagators in the independent-particle picture, for the experts.
This is really all in the independent-particle picture, and whenever you see diagrams in quantum chemistry, they always involve independent-particle propagators, not interacting propagators as in many field-theoretical Feynman diagrams. So, these are always independent-particle propagators, and they are really extremely simple objects. What you do is project your many-body wave function onto a one-electron orbital a. So, this is a projection in Hilbert space, right? You project onto a, then you propagate the orbital in time. If you put an additional electron into the orbital with energy epsilon a, this is the phase factor you get in the Schrödinger picture. This is the time difference t2 minus t1, because you propagate from time t1 to time t2; so this is the phase factor. Here the Fermi energy is subtracted, which is often convenient; mid-gap. This makes your signs a little easier to cope with. It is not really important, you do not strictly need to do it, but it is convenient, so that all occupied states are below the Fermi level, giving you negative numbers in the exponent, and all unoccupied states are above the Fermi level, giving positive numbers. It is convenient to do this. And the other guy projects onto occupied states, then propagates the state, and then pops the state back in. Okay? So: projection, propagation, and then you pop your state back in. That is essentially what these lines do. So, this is essentially the so-called one-electron Green's function. You project onto the orbital a, you propagate it, and you pop your orbital a back in at position r2. You project onto state i, an occupied state, this is a hole propagator, you propagate it, and you pop it back in. Now, briefly, I want to recap these two diagrams here. This is, again, the Hartree diagram, and I have already done this on the blackboard a couple of minutes ago, so it was not necessary to copy this.
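In formulas, the two lines are the independent-particle propagators just described. Schematically, up to the usual sign and prefactor conventions, which vary between textbooks:

```latex
G_0^{\text{particle}}(t_2 - t_1) \;\propto\; \theta(t_2 - t_1)\,
  \phi_a(\mathbf r_2)\, \phi_a^{*}(\mathbf r_1)\,
  e^{-i(\varepsilon_a - \varepsilon_F)(t_2 - t_1)}, \qquad \varepsilon_a > \varepsilon_F,
\qquad
G_0^{\text{hole}}(t_2 - t_1) \;\propto\; \theta(t_1 - t_2)\,
  \phi_i(\mathbf r_2)\, \phi_i^{*}(\mathbf r_1)\,
  e^{-i(\varepsilon_i - \varepsilon_F)(t_2 - t_1)}, \qquad \varepsilon_i < \varepsilon_F
```

One can read off the three steps directly: projection (the conjugated orbital at r1), propagation (the phase factor in the time difference), and popping the state back in (the orbital at r2).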
Now here, the only rule we need to observe is that for all closed loops like this, you only sum over the occupied states. That, again, follows from Wick's theorem. Okay? If you did it by gut feeling, you would not know that it has to be restricted to the occupied states; if you do it with Wick's theorem, you see that it must be. So, this is the guy that propagates from here back to the same point, phi i of r1. We just copy this equation here, and then we set t1 equal to t2, so we have equal times here. At equal times, the exponent is zero, the exponential is one, and what remains is obviously the density, right? So, this is the charge density, and the Coulomb line creates your potential; V of r1 is exactly the Coulomb potential. Now, this line, the exchange: I want to show you that this really is the Hartree-Fock exchange you know. Let's go through this line here. Again, Coulomb lines are always at equal times, so time one equals time two; the exponential is one here as well. This is the orbital at position r1, phi at r1; this is how you translate the diagram, the 1 here becomes an r1. This here is the complex conjugate at position r2, and you sum over all occupied states. And this quantity is your density matrix, gamma of r1, r2. Now you contract with the Coulomb potential, one over r1 minus r2, and this here is exactly the exact exchange potential. The density matrix gamma of r1, r2, divided by r1 minus r2, is exactly the Fock potential. So, this hopefully now makes sense: graphically, this really is exactly the exact exchange potential, and here in these diagrams, these things are the Hartree-Fock.
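Putting the translation rules of the last two paragraphs into formulas: the closed loop at equal times gives the density, the open equal-time line gives the density matrix, and contracting with the Coulomb line yields the two first-order potentials:

```latex
n(\mathbf r) = \sum_i^{\text{occ}} |\phi_i(\mathbf r)|^2, \qquad
\gamma(\mathbf r_1, \mathbf r_2) = \sum_i^{\text{occ}} \phi_i(\mathbf r_1)\, \phi_i^{*}(\mathbf r_2),

V_{\mathrm H}(\mathbf r_1) = \int \frac{n(\mathbf r_2)}{|\mathbf r_1 - \mathbf r_2|}\, d\mathbf r_2,
\qquad
V_{\mathrm x}(\mathbf r_1, \mathbf r_2) = -\,\frac{\gamma(\mathbf r_1, \mathbf r_2)}{|\mathbf r_1 - \mathbf r_2|}
```

The local potential V_H is the Hartree potential, and the nonlocal kernel V_x is exactly the Fock exchange.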
These are the Fock potential, and this here is the Hartree potential. So, this part here creates the Hartree potential, and this guy creates the Fock potential. So, those were the single excitations. Double excitations, let's move quickly forward. Double excitations are those here in Wick's theorem, and they are also second order, the same order as the single excitations. So, there is a different way of subdividing your diagrams into different classes; there are different ways to count your diagrams. Quantum chemists usually distinguish between singles and doubles. In standard many-body perturbation theory, they have the same order: they pop up in Wick's theorem at the same place, in second order. But quantum chemists like this division into singles and doubles. So, what are doubles? Double excitations are excitations where you excite an electron from here to here, and another electron from here to there. Now, this gets cumbersome. This is already a point where, if you do not use Wick's theorem, you are pretty lost. It is pretty cumbersome to derive the diagrams; it probably takes two or three pages to do it. But I have told you how to do it. Actually, these diagrams just involve two Coulomb lines, with their vertices linked. So you have to connect the vertices: you allow electron and hole lines to be attached here, and attached here, electron and hole lines here, and then you make all possible connections. And this gives only these two diagrams; there is nothing more you can do. So, these are the two important diagrams. And I have talked about this so often already, not here, but in front of other audiences. So, what do these diagrams mean in terms of physics? These are fluctuations, actually the simplest class of fluctuations. Okay, let's think about what happens here. So, actually, you put an electron...
So, you create a hole in a previously occupied orbital. You create a hole here; my time axis actually runs from top to bottom. So, this here is a hole line, and this is a particle line. You create a hole, this one here, and put your electron into an orbital a, this orbital here. Electron, hole. So, this describes, let's say we have a nucleus here, this describes, for instance, a process where you create an electronic state above the molecule and the hole below the molecule. These kinds of excitation processes are included: an electron here, and you create a hole here. And this second one is very similar: you now have a hole in the state j, and you put the corresponding electron into the orbital b. So, you have another electron-hole pair. And let's imagine that this other electron-hole pair is on the second atom. You now have two electron-hole pairs, and this is exactly what I told you is important for the van der Waals interaction. So, by including these diagrams, we actually take care of the van der Waals interaction. And one other thing: it also takes care of the cusp condition that I drew before. No, I will not go back that far. So, this is really the most important correlation effect. The second one is the screened exchange, where particle and hole lines are crossed. The particle created here is crossed with the particle created here, and here the annihilation occurs crossed. This has no classical analogue; this is not a dipole-dipole fluctuation, whereas the first diagram really is a dipole-dipole fluctuation. This is something rather cumbersome, something that you have for fermions; it is really related to the antisymmetry of the wave function. Second-order screened exchange. So, we have now seen all second-order terms: these are the singles, these are the double excitations. It is time to look at simple perturbation theory, MP2, for solids.
This is the simplest way you can do perturbation theory, lowest-order perturbation theory. Now we have a way to compare it to our exact full CI, and that is what we essentially do here. So, we compare with the exact full CI, that is the Ali Alavi method, and compare second-order perturbation theory for a solid with the exact result. That was previously not possible, because there was no reference; there was simply no way to calculate exact correlation energies for solids, even in very small basis sets it was impossible. What you see immediately is that MP2 is not great. Actually, MP2 works great for neon; it kind of works for argon, but then it goes downhill, it gets worse and worse. And the reason is: neon has a huge band gap, 13 electron volts, and that is a case where you can apply low-order perturbation theory. As a rule of thumb, these methods work great for small molecules with large band gaps. It also works great for water; water has a huge band gap, 8 electron volts, if I remember correctly. Well, it depends, six, it is a little bit complicated. Anyway, by and large, water has a large band gap. So MP2, a low-order perturbation theory that goes from Hartree-Fock towards the many-body solution using perturbation theory, is supposed to be very precise there. So, ice, small molecules. If you ever encounter literature data: MP2 will do a great job if you have a large band gap. This includes also perovskites, SiO2; a lot of materials actually have amazingly large band gaps. However, as the band gap shrinks, the simple perturbation theory breaks down, and you get worse and worse results. Actually, you can prove that the correlation energy starts to diverge as the band gap shrinks: in second order, you get an infinite correlation energy from this term here. This term really, indeed, becomes infinite in perturbation theory. That already tells you: forget about it for anything like a metal.
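The divergence can be read off from the standard closed-shell MP2 energy expression for the double excitations, where the angle brackets denote two-electron Coulomb integrals:

```latex
E^{\mathrm{MP2}}_{\mathrm c}
 = \sum_{ij}^{\text{occ}} \sum_{ab}^{\text{unocc}}
   \frac{\langle ij|ab\rangle \bigl( 2\,\langle ab|ij\rangle - \langle ab|ji\rangle \bigr)}
        {\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}
```

As the band gap closes, the denominator can become arbitrarily small while the long-range Coulomb integrals stay finite, and in the metallic limit the sum diverges.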
Okay, I will skip this. So, what is important in solids, and why are solids really different from the small molecules where these quantum chemistry methods are often used? The reason is: if you have an electron here, imagine somewhere in space you have an electron, and at another point in your system you have a hole. How will they interact? Well, they will interact via a screened interaction, because all the other electrons will try to screen the potential created by this electron. So, this electron sits here, but all the other electrons immediately say, ah yes, I go there, and try to screen it like a cloud. And that means these kinds of processes need to be taken into account. So, the interaction is really mediated by the medium, by the other electrons; it is screened by the other electrons. The guys who actually realized this were Nozières and Pines, already in the 50s. I told you MP2 diverges for metals, and they came up with this method to calculate the jellium electron gas. So, they had the idea to resum the perturbation series. This is again exact perturbation theory; all you do is take not only the second-order diagram, but also a few of the third-order diagrams, fourth-order diagrams, fifth-order diagrams. And the rule for which ones you take is very simple: you take those bubble diagrams here. So, what does that mean? Here you create two electron-hole pairs, one here and one here; then they propagate in time, and then they are annihilated. At the annihilation, the energy is put into a photon, the photon travels a little bit, and then creates another electron-hole pair there. And only then do you have the final annihilation. This here is a fourth-order diagram: you create two electron-hole pairs, one propagates in time until it is finally annihilated, and the other one is annihilated here, photon, creates another electron-hole pair, photon, and so on and so on.
Of course, you can have all kinds of permutations of these diagrams. Any diagram that has this bubble structure is included. So, that was the idea of Nozières: let's try this approach for the jellium electron gas and restrict the diagrams to a very simple subclass. The weighting of these diagrams is exactly consistent with Wick's theorem; he did not do anything new, he just selected from the many diagrams that you can have. Actually, you can have so many diagrams that you easily get swamped by the sheer number of them. So, what he did is restrict the series to those bubble diagrams and take the coefficients, the weighting of those diagrams, exactly from many-body perturbation theory. And it turns out that this sum really involves a logarithm. Actually, the point here is that these diagrams, let's go back here, this diagram here has a different meaning: it is actually the polarizability. Imagine a photon comes in; the photon can excite an electron-hole pair. This is what happens in optical absorption: a photon comes in, it is absorbed, you create an electron-hole pair, you lift an electron from the occupied states to the unoccupied states. Then the electron-hole pair travels in space, and at a later point it re-emits the photon. This is exactly the polarizability. So, this diagram here really is a sketch of the polarizability in the time domain; nothing really special. It is exactly the polarizability, and that is the power of Goldstone diagrams or Feynman diagrams. So, this here is the polarizability, and these are the weights that you get from many-body perturbation theory; the weights come from the diagrammatic expansion, so they are not trivial. And then you sum this series and you get a logarithm. And here is the catch, here is one thing you have to keep in mind.
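Summing this bubble series with the perturbation-theory weights gives the closed RPA expression that this resummation corresponds to; in its common imaginary-frequency form:

```latex
E^{\mathrm{RPA}}_{\mathrm c}
 = \frac{1}{2\pi} \int_0^{\infty}
   \mathrm{Tr}\Bigl[ \ln\bigl(1 - \chi_0(i\omega)\, v\bigr) + \chi_0(i\omega)\, v \Bigr]\, d\omega
```

Here chi 0 is the independent-particle polarizability just described and v the bare Coulomb interaction; expanding the logarithm order by order reproduces exactly the ring (bubble) diagrams with their many-body perturbation-theory weights, which is where the logarithm in the summed series comes from.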
If you calculate this term in a metal, it diverges to a positive value; if you calculate that term, it diverges to a negative value; if you calculate that term, it diverges to a positive value. It diverges with alternating signs. So, order-by-order perturbation theory would fail. The resummation, you could call it a renormalization, although it is not really a renormalization, this guy here converges. And that is what Nozières realized: this alternating series can be algebraically summed and gives you a convergent contribution. You can do this now on the computer: you calculate the polarizability and, with this equation here, calculate correlation energies that are useful for materials, even metals. So, this is nice, and the RPA gives us a guideline for what you should do: you should sum an infinite class of diagrams, otherwise you are killed. And of course the quantum chemists were again way ahead of the physicists in adopting those methods. The most important method is the so-called coupled cluster method. Quantum chemists were smart enough, already in the 70s, to find a method that does this kind of resummation, but in a far broader sense than Nozières did in the 50s. Well, to tell you the truth, these coupled cluster methods were invented by Hermann Kümmel and Fritz Coester, and these were actually nuclear physicists. It seems they did not even publish much of a paper on it at first; they just came up with the method. So they had this idea, and I will try to give you the gist of it, the idea of how this works, but I cannot give you all the details. This is a method that turns out to scale algebraically with the number of electrons. Compare that with full CI: full CI is a combinatorially, that is exponentially, scaling method. This method scales algebraically in system size, and it therefore really kills this NP-hard problem. So what happened?
These guys invented this method for nuclear physics. And the bad news was that it did not work for nuclei; it gave no sensible results. And you know why? Because nuclei are strongly correlated. This method is still a resummation of the perturbation series, so it is strictly limited to weakly correlated systems. But that is exactly what solids mostly are, and exactly what molecules mostly are. So this guy here, Jiří Čížek, actually started in quantum field theory and thought, well, this is a little bit useless; he realized that the series as such was not going to get him anywhere. But he still read the papers. Now, I cannot find the original reference, but somehow he became aware of the paper of Kümmel and Coester, read it, found it interesting, and thought, well, we should try it for molecules. And that is what he did. So let's look at what this method does. The method is actually not standard perturbation theory; it is really a different idea. But again, it gives you energies exactly consistent with standard perturbation theory. It does not look like perturbation theory at first sight, but one can prove that the weighting of the diagrams is exactly consistent with Wick's theorem. The way the diagrams are summed is different, and the class of diagrams you sum is different; it is a very special way to sum a certain class of diagrams. But the weights that you have, everything, is entirely consistent with standard perturbation theory. The ansatz looks very different. This is our CI ansatz. What does it tell you? This here is the Hartree-Fock determinant. And another way to write the singly excited state is by putting in a hole: this is the annihilation operator, it puts a hole into the previously occupied state i and puts an electron into state a. This is the singly excited state, the singly excited determinant that I have drawn here.
So this state you can also represent and write down as the ground-state Hartree-Fock determinant, psi zero, acted on by annihilating an electron in state i and adding an electron in state a. So you can rewrite it in this manner. This is the singly excited state, and this is the doubly excited state: you create a hole in state j and put the electron into orbital b, then you create a hole in state i and put an electron into orbital a. Now the trick they did is to put all of this into the exponent. They made the same expansion here, but placed the coefficients as well as the excitation operators in the exponent. Okay. Now if you expand this for very small t_i^a and t_ij^ab, so if these amplitudes are small, it's equivalent to the CI expression. Right? Just do the Taylor expansion of the exponential; you see the equivalence immediately. But of course, if the amplitudes become sizable, it's different. And the ingenious thing about this ansatz is that it allows you to sum an infinite class of Goldstone diagrams. Essentially it sums, for instance, all these diagrams here by using this particular ansatz. It's really very, very smart. Similar ideas have been used in the Green's function community much later, I believe; the so-called cumulant expansion is something related. Maybe you've heard about this, maybe not. So how does it work in practice? At the end of the day, you do a lot of algebra. It probably takes twenty pages to walk through it, so it's useless to do that here. One thing I should say: you still need to truncate this expansion, either at the singles, or at the doubles, or at the triples. So you still need to truncate this expansion somehow, and that's what you do. The most common approximation is to truncate at the doubles, so you go no further than these two terms.
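A quick numerical way to see the small-amplitude equivalence just claimed: a sketch with a random matrix standing in for the excitation operator T and a unit vector for the reference determinant (all values invented for illustration). The gap between the linear (CI-like) and exponential (CC-like) ansatz shrinks quadratically as the amplitudes get small:

```python
import numpy as np

def expm_taylor(A, terms=20):
    """Matrix exponential via its Taylor series, exp(A) = sum_k A^k / k!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 6))    # stand-in "excitation operator"
psi0 = np.zeros(6)
psi0[0] = 1.0                      # stand-in "reference determinant"

for eps in (1e-1, 1e-3):
    linear = (np.eye(6) + eps * T) @ psi0        # CI-like ansatz
    exponential = expm_taylor(eps * T) @ psi0    # CC-like ansatz
    # difference is dominated by the (eps*T)^2/2 term, so it shrinks ~ eps^2
    print(eps, np.linalg.norm(exponential - linear))
```

For sizable amplitudes the two ansätze genuinely differ, which is exactly where the exponential form starts summing extra diagrams.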
These are the double excitations, as I already told you. We first discussed the single excitations, then the double excitations. So you take those double excitations into account, but often you terminate after the doubles. And if you then walk through the algebra, you obtain a closed expression for these amplitudes here. For these coefficients you get a closed expression, and I've used a compact tensor notation. This is essentially a quadratic equation that you need to solve to obtain the coefficients. Actually, for educational purposes, I will just remove those two terms and look only at this part and what it does. So this is again the same equation as I have here: B + AT = 0. And what I will do is take the diagonal part of this A matrix and put it on the other side. So I take A-diagonal times T to the left side; the remainder is B plus A-prime times T, where A-prime is obviously the matrix A minus its diagonal part. This is an equation that you are going to iterate: a simple Jacobi iteration. So you take the inverse of the diagonal, and T equals minus that inverse diagonal times (B plus A-prime T). That's essentially the equation we are going to solve, and we start with T = 0. This is the starting point. Once we have a T, we plug it back in here; then in the next step we get... Yes, we start with 0, that's okay. In the first step we get this term here, because with T = 0 this part drops out, right? Then we plug that term back in here and make another iteration, and so we build the solution up iteratively. This equation is really how the coupled cluster equations are solved in practice. And it turns out that in the first step you get the second-order energy, the MP2 energy. These here are the T matrix elements that we calculate.
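The iteration just described can be sketched in a few lines. The matrix sizes and values below are invented purely for illustration; the real amplitude equations are tensor equations with four indices and also carry the quadratic term in T, which is dropped here just as in the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# A stands in for the (diagonally dominant) amplitude-equation matrix,
# B for the bare integrals. Values are made up for this sketch.
A = np.diag(np.arange(2.0, 2.0 + n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal(n)

D = np.diag(A)              # diagonal part (think: orbital-energy differences)
A_prime = A - np.diag(D)    # off-diagonal remainder

T = np.zeros(n)             # start from T = 0
for step in range(50):
    T = -(B + A_prime @ T) / D   # Jacobi update: T <- -D^{-1} (B + A' T)
    if step == 0:
        T_first = T.copy()       # first step gives -D^{-1} B, the "MP2-like" amplitudes

print(np.linalg.norm(B + A @ T))  # residual of B + A T = 0, tiny at convergence
```

The first iterate is exactly the zeroth-order amplitudes that yield the MP2 energy; further iterations mix in the off-diagonal couplings, which is where the higher-order diagrams come from.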
This is this tensor here, the tensor with coefficients T_ij^ab; it has four indices. It's essentially this quantity, T_ij^ab, that we calculate, and this is what we iterate. It has indices for occupied as well as unoccupied states: i and j for the occupied states, a and b for the unoccupied ones. In the first iteration, it turns out, you get just MP2; it gives you the MP2 energy. Now you plug it into the next iteration. In this iteration you have to calculate A-prime times T, and it turns out this creates this set of diagrams here. So this is the first-order diagram, and here it creates a large number of second-order diagrams. Then you do another iteration and it creates even more: these are the third-order diagrams, an even larger set. Well, it's a little bit complicated, but I will give you the final message of what it does. It sums all Goldstone diagrams that can be characterized by having at most two electron-hole pairs at any point in time. Okay, I will repeat this: the method is capable of summing all Goldstone diagrams that at any point in time have at most two electron-hole pairs. Look here, this is one of the diagrams it does sum. You create two electron-hole pairs that propagate through your electronic system; then they are annihilated, and at the same time you create a new electron-hole pair here, which is then allowed to propagate and is annihilated. So here, if you draw a red line at any point in time, it cuts at most two particle lines and two hole lines. The method also includes other diagrams. Let's look at which other diagrams are included. This diagram here is quite a nice one. What does it describe? Well, it describes the following.
We have an electron propagating, and then the electron emits a photon, and the photon is re-absorbed here by this other electron. This is a so-called particle-particle ladder diagram, one of the many diagrams we can draw in third order. Obviously it fulfills all our rules, because an arrow (I've forgotten to draw the arrow) comes into this vertex and an arrow runs out; here an arrow comes in, an arrow runs out. So this is really an electrostatic interaction between two particles: two particles are flying along and experience the Coulomb repulsion. This is what this diagram does. The same, or a related, diagram is here: two holes interacting via the Coulomb interaction. A hole is flying around, so this is a hole in a previously occupied state; it emits a photon, and the photon is re-absorbed by another hole. Hole-hole ladder diagrams. We also have particle-hole ladder diagrams included here: an electron-hole pair flies along, exchanges energy via this photon here, and then continues its travel. So all these diagrams fulfill the criterion that if you draw a horizontal line, you cut at most two particle-hole lines. It's an ingenious method to sum an infinite class of Goldstone diagrams. The number of particle-hole lines that you cut is called the rank, and the rank is limited to two for this CCSD. Actually, I've made a small approximation here, but this linearized coupled cluster doubles is a method that essentially sums all diagrams that cut at most two particle-hole lines. This includes the RPA, and there is another RPA variant, the so-called particle-hole RPA, which is also included in coupled cluster singles and doubles. The T-matrix approach is also included in there, yes? [Question from the audience about whether this class of diagrams is justified a priori.] No, there is no a priori justification. You have to compare with full CI at the end of the day. That's the only way to do it.
You cannot tell a priori whether it's suitable. The problem of... yeah, we'll come back to this in a minute; you will see evaluations anyway. So there is no a priori guarantee that it will be a good method, because you restrict yourself to a subclass of diagrams. But it's a very smartly chosen subclass, and you can go further; let me just put this here. You can go up to CCSD(T), or CCSDT, which then includes three particle-hole pairs propagating in time. And you can go to four particle-hole pairs propagating in time. And it seems this is a very systematic way to get extremely accurate correlation energies; it seems to capture the important physics. But it will fail for strongly correlated systems. I don't want to be misunderstood: it has its limitations. And what happens in strongly correlated systems? That seems to be pretty well known, because people have applied coupled cluster methods to the Hubbard model. And it turns out that in the Hubbard model you probably need to include something like 15 electron-hole pairs propagating at the same time, though only locally. So that's quite interesting. It seems the more correlated the system is, the higher the rank of the method must be; the rank is again given by the number of particle-hole pairs that you propagate. So if you go to strongly correlated systems, the rank needs to go up. At the same time you are throwing away terms of higher order, so the question is whether the terms you keep are the right ones. Again, I can only answer that there is no rigorous proof that this is going to work. It didn't work for nuclear physics; it doesn't work in all cases. So you have to compare to higher-level methods. That's the only way you can proceed. Some things are important here. Actually, it turns out that all diagrams are properly antisymmetrized by this method.
So you have a particle-hole pair coming in, emitting a photon, and a particle-hole pair going out. In a fermionic system you should then also include the exchange counterpart. This is the same diagram here, but there is a second way to connect the lines: you can run in here and continue out here. The electron that comes in here can continue out here, and the hole coming in can run out there. That gives this diagram here. So whenever you include this diagram, you should also include that one. This is the very nature of fermions. So actually, all diagrams should be properly antisymmetrized, and this method properly antisymmetrizes all of them. For instance, here, at this point, the related antisymmetrized diagram is that one there. I haven't drawn the antisymmetrized partners of the particle-particle ladders and the hole-hole ladders, but they are also included in the method. So the method really is a proper method for fermions. This is another bonus. You can increase the complexity of the method: CCSD is only rank two, two particle-hole pairs; you can go up to the triples, which contain three particle-hole pairs. And the question, maybe I've forgotten to tell you one thing, yes. You're completely right to ask why we truncate it like that. The reason is again that it yields a method that is tractable on the computer, right? The only reason you do it is that you get a tractable computational scheme that scales, in this case, with the sixth power of the system size. And if you go to CCSD(T), it scales with the seventh power of the system size. And we have to evaluate it; there is no other means but to benchmark the method, because we have thrown away an infinite class of diagrams. So what we did here, and this was a paper with Ali Alavi, is compare it with full CI. And what you see: MP2 was not so great.
Actually, MP2 tends to over-correlate, giving too-negative correlation energies. And this term here is CCSD, this method here; it scales like the sixth power of the system size. It's good, but not great either. And if you now include the triples, in this case only perturbatively, you finally get an answer that is pretty much on top of full CI. And again, there is no other means of evaluation but to compare to more precise methods. That must be understood, yes? Again, I've told you that there is a lot of evidence that this method definitely does not work for strongly correlated systems. As systems become more strongly correlated, you need to go to higher and higher rank, and that's computationally simply not feasible. So we can compare with experiment, and this is very quick again. The comparison of CCSD(T) with experiment shows that we get values pretty much on top of the experimental ones, agreeing to within about one to two kilojoules per mole. These are atomization energies. So if you take the lithium hydride system and atomize it to lithium atoms and hydrogen atoms, or take carbon and atomize it to carbon atoms, boron nitride to boron and nitrogen atoms, and aluminum phosphide to aluminum and phosphorus atoms, then you get almost always nearly perfect agreement with the experimental cohesive energies. Now, this full CI calculation, you should be aware, was done with an extremely limited basis set: we included two by two by two k-points and just eight orbitals in our expansion. This is a ridiculously small basis set. It will not allow you to calculate thermochemistry, so it will not allow you to calculate atomization energies, whereas the basis sets we use here include 200 to 400 plane waves. Modeling reality is always about compromises; you need to make compromises to get computationally feasible schemes. So this is a slide that I've taken from Nemitz Tower and Neitz.
And what you see here is a comparison with experiment for a large class of molecules using exactly the same method, CCSD(T); this is this line here. This is much older work than our own. They have done a lot of molecules, and as you can see, for all these systems the error compared to experiment is very, very tiny. So this really is an impressive method that seems capable of describing correlation energies exceedingly accurately, for condensed matter systems as well as for small molecules. And again, it will not work otherwise. Whenever you have a strongly correlated system, with strong correlation effects, it's not going to work. What's the criterion for whether it works? There actually is one: if you take the final wave function and project it onto the Hartree-Fock determinant, the overlap must be large. Only if the many-electron wave function has a similarity to the Hartree-Fock wave function can this method work and yield reliable results. So keep in mind, this is still some kind of perturbation theory, despite the exponential many-body ansatz. Recall that we derived the theory by adiabatically switching from the non-interacting Hartree-Fock system to the interacting many-body system. This perturbation series will very often diverge, and it will only work if there is some similarity between the ground-state wave function and the Hartree-Fock wave function. And that's not the case for a strongly correlated system: there is hardly any similarity between the Hartree-Fock and the true ground-state wave function. So now, why is this a new thing? The point here is that, obviously, this is the first time we can do this, because no other method is presently available that can do solids with that kind of precision. Full CI is beautiful, I mean, you have heard all the talks, but I want to caution you: full CI still scales exponentially.
So it's still an NP-hard problem; the NP-hard problem completely prevails. And this is really the point. Full CI is extremely powerful, but it's really limited to small system sizes, like 50 electrons in 100 orbitals. It's tiny. You cannot do materials modeling, and that's what I'm really interested in. It's really exponential in the number of electrons, so n equals 50 is probably the maximum size you can currently do. Now, these quantum chemistry methods, CCSD and CCSD(T), the best we can do, look extremely promising. They can be accurate, I think to two kilojoules per mole, and this is really without any parameters. Right? We start from the many-electron Schrödinger equation and essentially solve it. The scaling is only algebraic, compared to exponential here, but there is a price to pay: these methods are still exceedingly expensive. These calculations took something like 20,000 CPU hours for a single data point, for each of these materials with two atoms in the unit cell. That's not materials science; at best it's a little bit of condensed matter physics. Think about what Michele Parrinello has probably shown you about what you can do with DFT nowadays. It's ridiculously little what you can do with these kinds of methods: 20,000 hours for two atoms in the unit cell. Ridiculous. Well, these methods will be improved. [Question: why are these two diagrams singled out?] Ah, yes. Well, one can improve it. The killer in this method is actually this class of diagrams here, these diagrams where you have a particle coming in, a particle running out, and a Coulomb interaction describing the interaction with another particle coming in and another particle running out. This single, seemingly innocent diagram kills your method, because it's so expensive. Actually, this is very easy to see.
It obviously involves a sum over unoccupied states, and another sum over unoccupied states, and another, and another: four sums over unoccupied states in total. Okay. And that's the killer that makes the method very expensive. The dominant scaling is then N to the fourth in the number of unoccupied states, and you need a lot of them. I told you before, the basis-set convergence of any correlated method is strictly one over the basis-set size. This diagram actually accounts for about 90% of the compute time, just to evaluate this single diagram. So you have to work on this, and that's possible; there are a lot of exciting things to do. There's a lot of work ahead of us to make these methods routinely suitable for solids, and I think it's worthwhile. This is the first time that we can get highly accurate correlation energies in solids. Don't be cheated: if people tell you that with DFT you get highly accurate correlation energies, it's just not true. I mean, if I'm telling you this, I've written the VASP code, right? So you can trust me: if I tell you DFT is not great, you can believe me. There is always a functional that will give you the right correlation energy, always a functional that gives the right answer, but you may have to search a long time until you find the one that does. I'm still a big fan of DFT, but we absolutely need these more precise methods, if only for benchmarking. And I think this is a field of enormous potential. We are just at the start of this really emerging field, quantum chemistry for solids. But it's also clear we need something cheaper; 20,000 hours for two atoms in the unit cell is just ridiculous. So we really need lower-cost methods. Since we're close to the coffee break, I'll just give you an idea of what these methods will involve. The central idea is really to restrict yourself to the most important diagrams.
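On the one-over-basis-set-size convergence just mentioned: in practice one exploits it by extrapolating, fitting computed energies against 1/N. A sketch with invented numbers, generated here from an assumed model E(N) = E_inf + a/N purely to illustrate the procedure:

```python
import numpy as np

# Hypothetical correlation energies at increasing numbers of unoccupied
# orbitals N, fabricated from the assumed 1/N model for this sketch.
E_inf, a = -1.500, 3.0
N = np.array([100, 200, 400, 800])
E = E_inf + a / N

# A linear fit of E against 1/N recovers the complete-basis-set limit
# as the intercept at 1/N -> 0.
slope, intercept = np.polyfit(1.0 / N, E, 1)
print(intercept)  # extrapolated E_inf
```

With real data the 1/N behavior only sets in asymptotically, so one fits the largest few basis sets; but the idea is exactly this.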
Such a restricted method can never be as accurate as CCSD(T), because it includes even fewer diagrams. So anyone might say: well, if you take even less, is this useful? Yes, it is useful, if you need to find a compromise between computational efficiency and accuracy, and that's always what we seek. It's nice to have a method that is exact, but if it's intractable on present-day computers, what's the point of having it? So I will try to convince you that this other method, the so-called random phase approximation, which as I told you goes back to Nozières and Pines, is worth pursuing and gives you very nice correlation energies for many materials. So anyway, since the coffee break is essentially due, I will stop here and finish the last part of my talk after the break. Thank you.