So I welcome you to the final talk. I still have a few remaining things to talk about. Obviously, it's nice to have a method that takes 20,000 hours — but for two atoms? That is really ridiculous; you don't want something like that. So there is still a need for simplified methods. Essentially, I will tell you about one method that we have been trying to promote over the last years, which is reasonably good for cohesive energies. The idea is very simple and a posteriori. The way I present it, the scheme has emerged over the last 10 years as something feasible, and a lot of people contributed: Filipp Furche, our own group, the group of Matthias Scheffler — we have all tried to come up with schemes that do a reasonable job. They will not compete with very accurate methods, but they should give you some insight and, let's say, a less ambiguous description than DFT. The idea is very simple. What you have here are essentially all second-order diagrams. This is the direct MP2 term: electron-hole pairs are created and annihilated. This is the crossed diagram. And these are the singles contributions. So essentially, the idea is very simple. I've told you that in a material, you have to do something about the Coulomb line: you have to replace the bare Coulomb line by the screened Coulomb line. This is what I told you before — if you have a particle here, then all the other electrons will screen the interaction. So we are not allowed to use the standard second-order perturbation series. This here would be MP2; these would be the singles contributions, which, by the way, turn out to be zero if you start from the Hartree-Fock reference. This is just second-order perturbation theory, and it is not going to work. So we have to replace this bare interaction by the screened interaction.
It is a little bit easier to draw it like this. We replace essentially one of the Coulomb lines by a screened Coulomb line, and we do this for all second-order diagrams. That's the central idea. We need to be a little bit more precise, of course: the weights need to be entirely consistent with many-body perturbation theory. This is the second-order term, the third-order term is this one here, and this is the fourth-order term. The weights that we use need to be exactly those that you would get from Wick's theorem and from many-body perturbation theory. Only then can we expect a sensible answer. Why do we select those diagrams? Of course, this is ambiguous, but it makes sense to include all second-order diagrams. I think if you throw away any of the second-order diagrams, the method will not work. And replacing the bare Coulomb interaction by a screened Coulomb interaction is exactly what Nozières already suggested for the jellium system. It takes care of the strong screening in materials. It's not so important for small molecules, because there is very little screening, but for materials you actually need to take care of these higher-order terms. So let me go through these contributions. It turns out, if you evaluate them quantitatively, that the most important contribution is the random phase approximation — this class of diagrams, more precisely that class of diagrams. It actually accounts for almost 70% of the correlation energy. Or, to be more precise, it accounts for 130% of the correlation energy. That is a funny way to put it: it actually overcorrelates, it yields too much correlation energy. This contribution always gives you too much correlation energy. Say the correlation energy should be minus 200 millielectron volts or minus 2 electron volts — remember, the correlation energy is always negative. You set out from Hartree-Fock, which is an upper bound, and then you include correlation.
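The weights just mentioned can be written down explicitly. The RPA ring summation, with exactly the weights dictated by many-body perturbation theory, is the expansion of a trace-log (standard notation, not taken from the slide: χ⁰ is the independent-particle polarizability, v the bare Coulomb interaction):

```latex
E_c^{\mathrm{RPA}}
  = \frac{1}{2\pi}\int_0^\infty \mathrm{d}\omega\,
    \operatorname{Tr}\!\left[\ln\!\bigl(1-\chi^0(\mathrm{i}\omega)v\bigr)
    + \chi^0(\mathrm{i}\omega)v\right]
  = -\frac{1}{2\pi}\int_0^\infty \mathrm{d}\omega
    \sum_{n\ge 2}\frac{1}{n}\,
    \operatorname{Tr}\!\left[\bigl(\chi^0(\mathrm{i}\omega)v\bigr)^{n}\right].
```

The 1/n prefactor of the n-th-order ring diagram is exactly the weight that Wick's theorem produces; the second term inside the trace removes the first-order piece, which is already contained in the Hartree-Fock energy.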
You get lower in energy, so the correlation energy is negative. This guy overcorrelates massively, typically by about 30%. So instead of minus 2 electron volts, you get minus 3 electron volts. And this is not nice. But it seems that this error is fairly constant, more or less the same for each atom. So if you put atoms into a solid, the error is roughly constant. This contribution takes care of covalent bonding, metallic bonding, and, as I will show you, also van der Waals bonding. The second term — there are various ways to include it — takes care of the antisymmetry. This term is related to turning this Coulomb line around. This is what I talked about before: you have to antisymmetrize every Coulomb interaction. So there's a second way to link the diagram here. An electron goes out here, and you can reconnect it to this line here — you reconnect this to this line and this line to that line — and that gives you this diagram. So it's really related to the antisymmetry of the many-electron wave function. This term, or this class of diagrams, to be more precise, corrects for the overcorrelation. So you have minus 3 electron volts, and now you add one electron volt — this is a positive value — which corrects for the overcorrelation. It also turns out that this term reduces the spin-polarization energy. The spin-polarization energy — the energy difference between a non-magnetic atom and a magnetic atom, or a non-magnetic solid and a magnetic solid — is way too large in the RPA, and this term corrects for that error. So it improves cohesive energies. Here are errors for small and large molecules, and the atomization energies are much improved by including that term. It stabilizes, in other words, the non-magnetic solution. And the last term is the so-called singles contribution.
I've already talked about this term as well. It captures the way the orbitals and the wave function change when you switch from the DFT to the many-body description. This was the singles contribution I talked about: the change of the orbitals as we switch from the DFT to the Hartree-Fock Hamiltonian. I spent quite some slides on this. This contribution turns out to be pretty irrelevant in most cases, except in van der Waals bonded systems. What does it do? Hartree-Fock contracts the orbitals; DFT usually has somewhat too large orbitals. Why? Anyone have a clue? DFT doesn't have the right many-body potential. If you move an electron away from the nucleus, it should experience exactly a one-over-r potential — that is the Coulomb interaction with the remaining core. But the problem is that in DFT you have self-interaction, so the electron doesn't see a one-over-r potential as you move it away from the core. No DFT functional, including gradient-corrected functionals, gets this right. In Hartree-Fock, that is described correctly. In DFT, the potential is different: it's not sufficiently attractive. I don't know precisely how it looks — it's roughly exponential in this region, and only further out does it approach the right behavior. So the basin is not deep enough, which means the charge is too spread out; the orbitals are too spread out. And the singles actually correct for this mistake: they contract the charge density. So these are the three contributions we are going to evaluate, and essentially this is what I will talk about in the next half hour or so. RPA is very easy to do.
Well, one thing is important here: you need to evaluate the RPA on top of DFT orbitals. In our case, we always use the PBE functional, which is the standard functional for solids. So we first do the PBE calculation, and then on top we calculate the RPA. People will ask: why do you do this? Well, you have to evaluate it on something, that's clear. And it is, again, a compromise. It's not going to be as precise as coupled cluster, and certainly not as precise as full CI. But it might give us a reasonable answer for a wide class of systems. That's why we want to do it. RPA is actually very fast nowadays. The standard implementation scales like the fourth power of system size, and in our code we now have a new implementation which scales only with the third power — exactly the same scaling as density functional theory. The prefactors are larger, probably by a factor of 10 to 100, but the compute time is typically only one, sometimes two, orders of magnitude larger than for a DFT calculation. And that's nice. So I have a lot of results for the RPA — results that include only this diagram. I've told you before, it overcorrelates massively. By the way, it will also never do bond dissociation; this is a method for weakly correlated systems. People love to show tests for bond dissociation, and I can tell them in advance: the RPA will not do that. It will do a lot of good things for solids otherwise. The second contribution I will mostly leave out. The singles are these terms here — the charge contraction that you get when you go from the DFT description to a more correct many-body description. We just finished our implementation; the standard implementation is N to the fourth, but you can do the singles in N to the third as well, the same scaling as DFT. It's more expensive than the standard RPA, by about a factor of four.
But anyway, we can do this now in N to the third, at least for total energies. The last guy is really the culprit: these crossed diagrams. Whenever you have something like that, where you cross the lines, you're in deep trouble computationally. It's much, much more expensive — it currently scales like N to the fifth in any code I know of, including our own. That means it's really tough to do this term, and I have very few, if any, results for this additional contribution. So typically I will show you results for the RPA, and sometimes results including the singles. So how does the RPA work? Let me give you some feeling for it. It's now also possible in other major DFT codes, like CP2K. I don't know about the status in Quantum ESPRESSO; I think they have an implementation, but it's not yet broadly available and used. So how does it work? You start with a DFT calculation with your favorite functional. The trick of the RPA is then to calculate the polarizability — this term here. This is standard frequency-dependent, time-dependent perturbation theory. Once you have calculated this polarizability, you plug it into an equation that was already written down by Nozières and Pines, and you calculate the correlation energy — you get an estimate for the correlation energy. You add that to the Hartree-Fock energy evaluated with the DFT orbitals. Depending on your level of approximation, you then also calculate the singles, and sometimes also the second-order screened exchange energy. It's pretty straightforward to do these calculations in VASP. Currently it's still a three-step procedure, but with the next release it will be a single button press, so everyone can use it, and it will be very simple. So: you calculate your DFT ground state with standard flags.
Then, because this equation also involves the unoccupied orbitals, you need to calculate all orbitals, including the unoccupied ones. You do that by one exact diagonalization — in our case, ALGO = Exact. It just exactly diagonalizes the DFT Hamiltonian to get the unoccupied as well as the occupied orbitals. Then you calculate the Hartree-Fock energy, and then the RPA energy. That's documented in the manual; it's quite straightforward. Now, I told you before that this is not the best way to do it. If you use this equation here, it's pretty unpleasant, because you have a sum over occupied states and a sum over unoccupied states — that is the computational complexity resulting from that — and then you have a position index r and another position index r′, which gives a factor of the number of grid points squared. So this scales like N to the fourth. There's a far more elegant way to calculate the polarizability, and that idea was already presented by Rojas, Needs, and Godby in 1995. The view is very simple — you have already heard a lot about many-body perturbation theory by now. You come in with a photon. The photon excites an electron-hole pair; that's exactly our polarizability. Then the hole propagates with this factor, and the electron propagates with this factor. Here I have dropped the imaginary i: it drops out because we now work in imaginary frequency and, respectively, imaginary time. So instead of a factor e^(iε_a t), we use imaginary time, t = iτ. This is called, by the way, the Wick rotation, but I'm not going into details — you can reformulate all these many-body theories in imaginary frequency and imaginary time. So if you insert this up here, you see immediately that you get a factor e^(−ε_a τ). That's exactly what you do here.
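As a sketch, the three-step procedure just described corresponds roughly to the following INCAR settings in VASP (the tag names ALGO, NBANDS, and LOPTICS are real VASP tags, but the exact workflow details change between releases, so treat this as an illustration and check the current manual):

```
# Step 1: standard DFT (e.g. PBE) ground state
#   default settings; this produces the converged orbitals (WAVECAR)

# Step 2: one exact diagonalization of the DFT Hamiltonian
#   to obtain ALL orbitals, occupied and unoccupied
ALGO    = Exact
NBANDS  = <as many bands as the basis set allows>
LOPTICS = .TRUE.

# Step 3: Hartree-Fock energy with DFT orbitals + RPA (ACFDT) correlation
ALGO    = ACFDT
```

The placeholder for NBANDS is deliberately left generic; the appropriate value depends on the plane-wave basis of the system at hand.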
So essentially, in the RPA equation that you have here, you actually invoke the polarizability in imaginary frequency. The advantage of this Wick rotation is that your quantities are smooth in the frequency regime — you don't have all these spikes — so they are far easier to handle. If you Fourier transform this guy, you get exactly this equation. This is just a standard Fourier transformation; you can look up in a mathematics book how to Fourier transform this coefficient, and you see that you get this equation here. This is exactly what I told you. So this guy here is a projection onto state a. Remember, I introduced this way of reading Green's functions: projection onto state i, propagation of the state, now in imaginary time, and putting the state back in at position r′. Projection onto state i is here; you propagate the state with this energy, and you put it back in at position r′. So it's nothing new — we have seen these kinds of things before — and this is exactly the polarizability in imaginary time. Well, if you actually put an i here, it would be the polarizability in real time. So this is the polarizability, and here's the trick. You can also write this — and this has already been written down, for instance, by Lars Hedin — as a point-wise product: here r and r′, at time iτ and at time −iτ. So you can use your Green's function here if you store this quantity. If you store the Green's function for the occupied states as well as for the unoccupied states — the advanced and the retarded Green's function, so to speak — then this step can be done with a computational complexity of N squared, the number of grid points squared. Previously, we had the number of grid points squared times the number of virtual times the number of occupied orbitals.
But now we can do this as number of grid points times number of grid points — much, much simpler. For each r and each r′, we simply multiply two numbers at every time point that we need to consider. Much, much faster. Actually, the calculation of the Green's function here is a little bit more time consuming, but it scales like N cubed. For this guy, you have to evaluate the orbitals at position r and position r′, store these two vectors, and then you need a matrix-matrix multiplication — a DGEMM call — to evaluate this; that is the cubic-scaling part. It scales like the number of occupied plus the number of virtual orbitals, times the number of grid points squared. So we have reduced from the number of occupied times the number of virtual orbitals to the number of occupied plus the number of virtual orbitals — instead of a multiplication sign, a plus sign — times the number of grid points squared. The dominant step is now the calculation of the Green's function. So we now store two Green's functions inside the code; that is the trick. How does it scale? It has other advantages: it also scales linearly in the number of k points, as does DFT. Hybrid functionals scale quadratically in the number of k points; this here scales linearly, as does DFT. And it scales cubically in system size. What you see here is the number of k points. Well, you don't quite see that it is perfectly linear scaling. The reason is that we had to use more cores when we had very many k points, just because the code needs a lot of memory. So we have slightly sublinear scaling in the number of k points, and then it goes up, essentially because we do not have super parallel efficiency yet. But essentially, you see that it scales roughly linearly with the number of k points, and if you look carefully, it really scales cubically with system size.
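To make the point-wise trick concrete, here is a small numerical sketch (toy random data, all variable names mine, not from any real code): the orbital-pair sum, which costs order N_occ·N_vir·N_grid², and the point-wise product of two imaginary-time Green's functions, which costs only N_grid² once the Green's functions are built, give identical polarizabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_occ, n_vir = 40, 3, 5

# toy real orbitals on a grid and a sorted set of orbital energies
orbs = rng.standard_normal((n_grid, n_occ + n_vir))
eps = np.sort(rng.uniform(-2.0, 2.0, n_occ + n_vir))
phi_i, phi_a = orbs[:, :n_occ], orbs[:, n_occ:]
eps_i, eps_a = eps[:n_occ], eps[n_occ:]
tau = 0.3  # one imaginary-time point

# quartic-scaling sum over occupied-virtual pairs:
# chi(r, r', tau) = sum_{i,a} phi_i(r) phi_a(r) phi_a(r') phi_i(r') e^{-(eps_a - eps_i) tau}
boltz = np.exp(-(eps_a[:, None] - eps_i[None, :]) * tau)        # shape (n_vir, n_occ)
chi_quartic = np.einsum('ri,ra,sa,si,ai->rs',
                        phi_i, phi_a, phi_a, phi_i, boltz)

# cubic-scaling construction of the two Green's functions (one DGEMM each),
# then a quadratic-in-grid point-wise product
G_vir = (phi_a * np.exp(-eps_a * tau)) @ phi_a.T   # "electron" propagator G(r, r', tau)
G_occ = (phi_i * np.exp(+eps_i * tau)) @ phi_i.T   # "hole" propagator G(r', r, -tau)
chi_pointwise = G_vir * G_occ.T

assert np.allclose(chi_quartic, chi_pointwise)
```

The assert at the end verifies the identity χ(r, r′, iτ) = G(r, r′, iτ)·G(r′, r, −iτ) term by term; in a real code one would of course never form the quartic sum, which is exactly the point.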
The prefactors are much larger than DFT, of course. But RPA calculations for 200 atoms now take less than one hour on 128 cores. You really need parallel computers, because storing this object, the Green's function, takes a lot of memory internally. But if you have large-memory machines or many cores, it is computationally feasible. This is just to give you an idea of where we are going. How good is the RPA now? I will go through this very quickly; when you have the slides, you can read up on the papers yourself. First, a disappointing result: the RPA is not so great for atomization energies. I've told you before, the RPA overcorrelates, and it overcorrelates the spin-polarized atom more strongly than the solid. That is the reason why you get too weak cohesive energies. These are atomization energies where we actually atomize the solid, but then our atoms are spin polarized — the atom usually has a spin polarization — and that energy is just too low in the RPA. That's why we don't get nice results for the atomization energies. What is shown here are the DFT atomization energies for the standard functional that most people in solid-state physics use — that's the blue bar — which is not so great either, but quite nice. Hartree-Fock is dreadful: it gives you dreadful atomization energies. And Hartree-Fock plus RPA — I've told you before, you need to calculate the Hartree-Fock energy as well, this step here, then you add the RPA correlation energy — then indeed you get reasonable, but not great, results. So hardly an improvement. But this is my favorite and far more interesting result: what happens if you combine lithium metal with the fluorine molecule? This is the reaction I showed you at the very beginning today, and that was not great with DFT — it was actually dreadful. The RPA definitely improves upon that quantity, quite sizably. This is really a massive improvement.
And here we have the CO adsorption energy on rhodium, which is also now in very nice agreement with experiment. Lattice constants. This is the standard functional, PBE, that most people use for solids. It has a common problem: as you move up in mass — as the atoms become heavier — the error in the lattice constant increases. This is the standard problem of the PBE functional, and there is virtually no DFT functional that cures it entirely. The RPA gives you a more or less massive improvement in the lattice constants, with errors of about 0.5%. I should say there are some functionals that do better — hybrid functionals like HSE, which we suggested ourselves — but it's kind of a tweak; it almost feels unpleasant. There is a functional that does the job right, but that functional will not work universally. This here seems to work quite universally. This is one of my favorite slides: the lattice constants of the transition metal series. The red line is the RPA. As you can see, this is a very difficult series to reproduce. We set out with the alkali metals, move over the alkaline earth metals, and finish with the coinage metals: copper, silver, and gold. This is really, really difficult to get right with DFT, and we tried many functionals. The PBE functional is shown here in blue, and it just doesn't do a great job. In particular, it doesn't get the alkali metals right, and it doesn't get the coinage metals right. That's because there is van der Waals interaction between the d shells. In gold, for instance — look at the error in the volume, it's 6% — the 5d shell is very polarizable. These are closed shells, and there is van der Waals interaction between them. By the way, somebody came up with an example.
Mercury would have similar troubles. So if you have something like Hg₂²⁺, you should include van der Waals corrections. Switch them on — who was it? Yes, switch on van der Waals corrections, Grimme's for example, OK. So this is really related to the neglect of the polarizability. And this error here is also related to the polarizability of the core states — the polarizability of, I think, the 5p and 5s states. And of course the RPA captures that, as you will see in a minute. Now, there are van der Waals corrected functionals, like optPBE or optB88 — functionals that include van der Waals contributions but are still strictly DFT. They don't do a great job here either. I've told you the RPA includes the van der Waals interaction; that's because it includes this diagram here: electron-hole pair, electron-hole pair. That's included. And that's why, in argon and krypton, you see the proper behavior. This tail here is the RPA — these are the RPA results — and this tail really behaves like one over r to the sixth. You can look at it analytically: it's one over r to the sixth. You can fit it: it's the right behavior. So the RPA does that right. Here, the singles are important. We use DFT orbitals to start with, and I've told you the singles contract the density; if you don't include that change, you don't get nice results. So we need to include the singles. Just look at the argon results. This is straight RPA: too large a lattice constant compared to experiment. This is with the singles included: a great result. This here is the cohesive energy. Again, look at argon: it is underbound, because the charge is too spread out in DFT. If you allow the charge density to contract, by means of the singles, you get a spot-on result. That also works for krypton. For neon, it goes in the right direction but overcompensates somewhat. So neon is a problem.
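The statement that you can fit the tail and recover the one-over-r-to-the-sixth behavior can be sketched in a few lines. The data below are synthetic (a pure −C6/r⁶ curve with an arbitrary, made-up C6), so this only illustrates the fitting procedure, not any actual RPA result: the slope of log|E| versus log r identifies the power law.

```python
import numpy as np

# synthetic long-range correlation energies mimicking a van der Waals tail
C6 = 64.3                          # hypothetical coefficient, arbitrary units
r = np.linspace(8.0, 20.0, 30)     # separations well outside density overlap
E = -C6 / r**6

# a -C6/r^6 tail is a straight line in log-log: log|E| = log C6 - 6 log r
slope, intercept = np.polyfit(np.log(r), np.log(-E), 1)
print(round(slope, 6))   # prints -6.0 for a pure van der Waals tail
```

For real RPA data the fitted slope would approach −6 only asymptotically; deviations at shorter range signal the onset of overlap and higher-order effects.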
And I think I know why, but I will not discuss it. Next, we did graphite versus diamond. What you see here are the diamond RPA results, and these are the graphite RPA results. For graphite, you have a layered material. The van der Waals interaction yields roughly a one over d to the fourth behavior, at short distances at least. And indeed, you can plot your correlation energy versus the distance to the fourth power — this axis is the distance to the fourth power — and you get the right behavior. Graphene on nickel also seems to work, but there are no reference data. Ice. We looked at different ice phases: low-pressure phases and high-pressure phases. Here in this graph, these are low-pressure phases at ambient conditions, and these are high-pressure phases. If you compress ice, there are structural changes to much more complex structures, like those here. The interesting thing is the energy difference to the ground state — the energy as you compress it; this obviously costs energy. The RPA results are here, the green lines. And these are — no, this is experiment. You see immediately that the RPA is pretty much on top of the experimental values. There are some diffusion Monte Carlo results, namely here and here. Diffusion Monte Carlo is a very high-level method; it's actually a real-space method. And you can see that it is also pretty much on top of our RPA results. Here, again, the singles are important — these are the ones that contract the density. If you want quantitative agreement, for instance for the energy, 620 millielectron volts, you need to include the singles, because straight RPA has the same problem as in argon. The lattice constants are again great — somewhat overestimated if you don't include the singles. Then we did a larger system: the energetics of an additional silicon atom in silicon. So you add an additional atom in silicon.
And then what happens? The most stable configuration is the so-called dumbbell configuration, where the additional silicon atom kicks out another silicon atom and goes into a symmetric site at this defect here. You can also place the additional silicon atom in a hexagonal hole or in a tetrahedral site — this is here. Or you can create a silicon vacancy. Again, this is a case where we can't do coupled cluster right now. We would love to do coupled cluster, but it's just not possible. So the best we can compare to is diffusion Monte Carlo again. This was done for 16 atoms — ridiculously small cells — so we don't know the error bars of these calculations. The RPA has been done for, I think, up to 200 atoms. And you can see, even with the very tiny unit cell that the diffusion Monte Carlo people used, I'm pretty satisfied with that kind of agreement. Now, I told you before, DFT can give you the right answer. Here is PBE: it's really not great, off by one electron volt or so. You can switch to a hybrid functional, HSE, and that seems to be in pretty good agreement with those data. Or you can add van der Waals corrections — in this case, the Tkatchenko-Scheffler van der Waals corrections. What you see in those data, for instance, is that they have no corrugation: the energy is almost the same for any of these sites. I think this is poor — Alexandre is not here, right? He wouldn't like my statement. OK, I take it back: I find it suspicious. There is no corrugation in the energy surface. And now you can choose what the right result is. Is PBE correct? Is HSE, or the van der Waals result? It seems like we have a lot of corrugation, and our energies more or less closely follow the HSE results.
But they are in better agreement with diffusion Monte Carlo — actually, our data are about 200 millielectron volts smaller than the diffusion Monte Carlo result. So I think ours are best and closest to diffusion Monte Carlo. This one, I think — I don't believe it. It's a typical case where van der Waals corrections have a problem, and the problem stems from the fact that you have very short bonds here. This is exactly where these van der Waals corrections can mess up. So this really finalizes what I wanted to tell you. Of course, we want to do full CI. Everyone wants to do full CI or super-accurate DMRG algorithms. But these methods are limited: they can do at most 50 electrons in 100 orbitals, and with that you can do no modeling of solids. Just forget it — it's impossible to do any serious materials science and condensed matter physics. That's why DFT was so successful. But in DFT, I would say, for 25 years there has been no really great advance in the methods themselves. Well, the codes have become better — much faster, more reliable, more materials properties can be predicted. I don't want to be misunderstood: if you want to do materials, do DFT 90% of the time. That's what we do — well, not my group right now, but one does DFT 90% of the time. We pick the functional such that we believe it should work. But we want a second way of checking our results. And right now, we feel that the RPA is a little bit better than DFT — not greatly better, but it at least gives you a second opinion. If nothing else, it gives you a second view on the problem. And it is based on many-body perturbation theory, albeit a very simple flavor of it. That's quite obvious: we have neglected so many diagrams that we cannot be sure where it works. But it's worthwhile doing. It's coming very soon in VASP. Forces: last Friday a PhD student just finished forces for the RPA. And you will have forces, at first only for norm-conserving potentials.
I must say — but soon for the PAW method. That means you can relax structures with the RPA. It's really a great step forward. So: full CI is really limited. These are the methods to watch. If you ask me what is an exciting field where we can expect developments in the next years, this here is really something I would do. Coupled cluster — there's still so much to do. We have done some calculations for tiny systems, but there's little hope of taking these methods directly to larger systems, so one has to tweak a lot of knobs. I know that Andreas is really looking for bright students to contribute to his developments. He's now at the Max Planck Institute in Stuttgart, with a kind of junior research professorship, and he will be staying there for at least four years. Andreas Grüneis has really developed these methods, and I think he's looking for very bright students who have an idea about quantum chemistry methods or quantum field theoretical methods and a knack for computation and implementation. This is really a fast-moving field, and a field where a lot of people can contribute. So if you feel up to these challenges, this is really somewhere we can move something. DFT is kind of routine now — everyone can do DFT calculations — so I wouldn't do that. Model Hamiltonians are beautiful, but detached from the real world, if you ask me. So ab initio descriptions for materials are really going to be one of the fields that will be exciting and thriving in the years to come. And we really need people to work on the code. Ideally, you should have knowledge of programming as well as quantum field theory when you start this kind of work. So what are the ideas we are exploring to make, in particular, coupled cluster faster? There are many. I told you before that the number of virtual orbitals enters like the fourth power.
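To see why the virtual orbitals are the bottleneck, it helps to write the canonical cost estimates down (standard complexity counts for the iterative CCSD equations and the perturbative triples correction, with N_occ occupied and N_virt virtual orbitals):

```latex
t_{\mathrm{CCSD}} \;\propto\; N_{\mathrm{occ}}^{2}\,N_{\mathrm{virt}}^{4},
\qquad
t_{\mathrm{(T)}} \;\propto\; N_{\mathrm{occ}}^{3}\,N_{\mathrm{virt}}^{4}.
```

Since converging the correlation energy to the basis-set limit requires many virtual orbitals per occupied one, the fourth power of N_virt dominates in practice, which is exactly what the ideas below try to attack.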
And to cope with that, one can introduce explicitly correlated methods that account for the cusp condition. The cusp condition is what causes the slow basis-set convergence: if two electrons approach each other, there is a strong Coulomb repulsion, and the one-over-r singularity in the Coulomb potential causes an abrupt change — a cusp — in the wave function. This is something we are working on; many of these ideas have already been implemented in quantum chemistry codes, though not yet in our plane-wave code. Of course, you can also rely on the locality principle. Correlation is indeed very local, and there's a lot of evidence for that. The RPA itself is actually non-local — far more long-ranged than DFT. The RPA, as you see here, yields this one-over-r-to-the-sixth behavior even if the densities are not overlapping. Two quantum harmonic oscillators coupled via the Coulomb potential will yield the van der Waals interaction even if there is no overlap whatsoever in their densities — still one over r to the sixth. So this kind of behavior is there, and it's already captured by these diagrams. Fortunately, these diagrams can be done very quickly; they are easy to evaluate. The other correlation effects are far shorter-ranged. Actually, the unpleasant diagrams I was talking about — those with crossings — are very short-ranged. If there is a crossing here, they are more difficult to evaluate and the complexity is higher, but there's a lot of evidence that they are short-ranged, and one can do them much faster by relying on locality principles. So these are the things we are exploiting.
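The cusp condition mentioned above can be stated precisely. For two electrons coalescing (Kato's electron-electron cusp, in Hartree atomic units, with Ψ̄ the spherical average of the wave function around the coalescence point), it reads:

```latex
\left.\frac{\partial \bar{\Psi}}{\partial r_{12}}\right|_{r_{12}=0}
 \;=\; \frac{1}{2}\,\Psi(r_{12}=0).
```

A smooth orbital-product expansion can only build up this kink in r₁₂ very slowly, which is the origin of the slow basis-set convergence; explicitly correlated (F12-type) methods put a factor with the correct linear r₁₂ behavior into the wave function by hand.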
And people, again, in the quantum chemistry community have done this; maybe the most famous examples are Molpro, and Frank Neese recently with ORCA. He's able to do coupled cluster calculations for 200 or 300 atoms. And this is a free code, you can download it, so if you do molecules I think it's worthwhile trying that code. Coupled cluster, CCSD(T), for 300 atoms. That's amazing, and in very, very little compute time. So I feel if we don't do something in the solid-state community to compete with those guys, we're out of the game, at least for accurate predictions. The RPA is currently, well, it's not great, but it's, I hope, often better than DFT; it's ready for prime-time use. In our code, we will have forces very soon. Actually, it's already working; we did the first tests, and it's pretty fast, also for real systems. It takes two hours on 200 cores, which is nothing. You can really afford these kinds of methods nowadays. OK. This here is my group. This is also the group of Cesare Franchini, who is an associate professor now in Vienna, so it's kind of our joint group. I need to thank the group for the work. And I'm not finished with the talk yet, so you don't need to clap now. But yes, you have to clap for my group, who did all the great work here. And of course ViCom is an FWF-funded project where we develop these kinds of methods. OK. So last, not least, and I will try to be very brief on the last part: excited states. Up to this point, I've really talked only about ground-state properties, right? Total energies are strictly ground-state properties. Now, what about excited-state properties? I will not give you the typical GW talk; I will do this very differently, because I'm not a big fan of these GW equations. I think they are nice, and one should understand them.
But I think, who knows the GW equations before I start to dwell on this? Who has heard about GW? OK. Then I will only add something here and there for the experts. And of course GW is a fantastic method, but I'm not so happy with the way it is usually presented. I think the original equations that come from quantum many-body theory, from the quantum field theoretic equations that I've discussed in the first two hours, Wick's theorem and so on, are as concise and as powerful as the GW equations. I often have the impression people only read the GW papers and then they stop; they don't look into the original derivation. So I will give you a different view of this. What is the motivation? One of the really big issues with DFT is certainly the band-gap errors, which are huge. And this actually has serious consequences, even for ground-state predictions. One being that if you predict defect properties and your band gaps are much too small, you have a really hard time getting good energetics. So essentially, the electronic properties are strongly intertwined with defect properties; these are closely related quantities that you need to get right. Optical properties: of course, you all know that if you do standard DFT, you don't get great absorption spectra, so you need to go beyond that. I don't have time to talk about optical properties here. Again, this is really about the random phase approximation, because that's what is used most often nowadays. So it's the same theme as we just had in the previous talk. Essentially, we are now talking about the self-energy. But you will see that exactly the same approximations that are used for the total energy, that people now start to use for the total energy up here, can also be used for the self-energy. Actually, this one was done first; that is exactly what GW is about.
So GW is really the random phase approximation, now not for the correlation energy but for the self-energy. So I will come back to the quasi-particle equations and the Green's function, and talk a little about Feynman diagrams. Actually, for the purpose of this talk, you don't need to distinguish between Goldstone and Feynman diagrams. I will try to convince you that a lot of things are missing in the random phase approximation, and I'll show you how well GW works nowadays. GW is now a routine thing; there are many codes out there that can do GW. To name a few: there's Yambo of Andrea Marini, Lucia Reining has a code, of course ABINIT can do GW, Quantum ESPRESSO can do GW, VASP can do GW. So practically, it's now a routine thing. I want to give you a flavor of the errors that you have in GW, and of course this will be related to the things I've already told you. So what is the RPA really about? What we capture in this case is actually screened exchange, and from what I've told you before, it should be clear this is never ever going to work for strongly correlated systems. The RPA is such a pedestrian approximation that at best it will work for weakly correlated or, as people nowadays call them, moderately correlated systems; I would rather say weakly correlated and not moderately correlated systems. So the first idea of what you do in GW is given here. This is the Kohn-Sham equation: the kinetic energy term; the potential caused by the ions, previously this was the nuclear-electron term; the Hartree term, which is the electrostatic potential of the electrons; and then this exchange-correlation potential, which is local in the local density approximation, and actually in any standard DFT it's a local potential.
Now, typically you solve this equation and calculate the eigenvalues, and then, in the most naive way, you just compare the eigenvalues you obtain with measured spectroscopic quantities. You do this, you look at the density of states, and then you compare this to experiment. That's not a very good approximation; it gives you strongly underestimated band gaps. So now the object of desire is this here, the quasi-particle energy. The more sophisticated, well, not really, the somewhat more sophisticated approximation is Hartree-Fock theory, where you have this exchange potential. I already discussed this and showed you the diagrams it corresponds to; this will be briefly touched upon again in this talk. Now, the most common approach, the most widely adopted one if you want to do quick calculations, is to use hybrid functionals. So you switch off a little bit of the DFT exchange: you keep 75% of the DFT exchange and then you add like 25% of the exact exchange. Maybe I can touch on why this is a good approximation towards the end of the talk. So you actually take a hybrid, a mixture between those two approaches, and that seems to massively improve the band gaps for many materials. It might look, how do you say, accidental, but it's not accidental; there's a good reason for it. Now, the proper equation that you need to solve is really this quasi-particle equation. About the objects that come in here: here you have a local potential; the exchange potential is non-local. Maybe I can write it down just so that you recall what this exchange potential is: it's the density matrix gamma of r, r-prime divided by r minus r-prime. Or diagrammatically, and this comes up again, it's this class of diagrams here. So the proper equation is actually this one, and I will not derive it, I don't have time to do this. It's funny, because one almost never derives this equation.
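Written out, the non-local exchange potential mentioned here, and the hybrid mixing, look as follows (the mixing parameter α and its value 1/4 are the common PBE0-type choice that matches the 25% quoted above; conventions differ between functionals):

```latex
\begin{align*}
\Sigma_x(\mathbf r,\mathbf r')
  &= -\,\frac{\gamma(\mathbf r,\mathbf r')}{|\mathbf r-\mathbf r'|},
  \qquad
  \gamma(\mathbf r,\mathbf r') = \sum_{i}^{\mathrm{occ}} \phi_i(\mathbf r)\,\phi_i^{*}(\mathbf r'),\\[4pt]
v_{xc}^{\mathrm{hyb}}
  &= \alpha\,\Sigma_x \;+\; (1-\alpha)\,v_x^{\mathrm{DFT}} \;+\; v_c^{\mathrm{DFT}},
  \qquad \alpha \approx \tfrac14 .
\end{align*}
```

Because Σx acts on the orbital through an integral over r', it is genuinely non-local, in contrast to the local exchange-correlation potential of standard DFT.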
This comes from the equation of motion, essentially, where in principle two-particle Green's functions enter. So you just introduce this object, which is called the self-energy. And this object is not only non-local, it is also energy dependent. I will try to give you some idea of what this equation means in terms of physics. I will not derive it, but try to give you an impression of what it does. So let's go back to our Green's function. The single-particle Green's function, and I've never really defined this, is the resolvent of a Hamiltonian. So if you have a Hamiltonian, in this case an energy-independent Hamiltonian, a one-particle Hamiltonian; this here is also a one-particle Hamiltonian with a one-body potential, so we don't have any dependence on r1, r2, r3. So it's a one-particle Hamiltonian. The Green's function, the one-particle Green's function to be precise, is defined as the resolvent of this Hamiltonian. What's the resolvent? Essentially, G to the minus one equals omega, this is the frequency, minus H. Or, if you take G, G is defined as the frequency, that's the frequency of the particle coming in, minus H, to the power of minus one. So you can bring this minus one to the other side as well. OK, what is this useful for? Here, I've written this in the Lehmann representation using a complete basis set, the basis of one-electron states of this Hamiltonian. In this way, you can write the Hamiltonian, or the frequency minus the Hamiltonian, as just a sum over all states. These are the one-electron energies of the Hamiltonian, and these are the eigenvectors of the Hamiltonian. And since they form a complete basis set, you can expand the delta function in this complete basis set. So this is essentially just an equivalent representation of the left side. So these states are the eigenstates of this Hamiltonian.
These energies are the energies corresponding to the respective eigenstates. Now, in this case, the single-particle Green's function is extremely simple to write down. You essentially invert this operator in the spectral representation, so this term moves down into the denominator. It's easy to see that this G fulfills the defining equation: omega minus the Hamiltonian, times the Green's function, gives the identity. So the Green's function: we have come across this Green's function often now. I've always done this in the time domain, right? This is the same Green's function as we had before. And the key point here is that in this case perturbation theory works along the following lines: you start from a one-electron Hamiltonian and then add many-body effects. So we start from this one-particle Green's function and then add many-body effects. And it turns out that you can actually use Feynman diagrams to construct the self-energy. This object here is itself a functional of the Green's function. So this is the main point in many-body perturbation theory: we can write sigma, the self-energy, as a functional of the Green's function, of the independent-particle Green's function. And of course the Coulomb integrals we had before, and possibly the effective one-electron potential we had before, will also be involved. That's the trick. So, to give you an idea of how this works, I will first briefly discuss the Dyson equation. Assume that we can describe the propagation of particles by this equation, by this kind of frequency-dependent "Hamiltonian"; it's not really a Hamiltonian any more, that's why I put quotes here. So here we now have the self-energy, this more complex object that I introduced on a pragmatic basis. The self-energy now also depends on r, r-prime, and on the frequency omega.
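The resolvent definition and the Lehmann (spectral) representation are easy to verify numerically. A small sketch with an arbitrary Hermitian matrix playing the role of the one-particle Hamiltonian (the matrix and frequency are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A small Hermitian matrix standing in for the one-particle Hamiltonian
H = rng.normal(size=(4, 4))
H = 0.5 * (H + H.T)

eps, C = np.linalg.eigh(H)       # one-electron energies and eigenvectors
omega = 1.7 + 0.1j               # complex frequency (small imaginary part off the poles)

# Direct resolvent: G(omega) = (omega*1 - H)^(-1)
G_direct = np.linalg.inv(omega * np.eye(4) - H)

# Lehmann representation: sum_n |n><n| / (omega - eps_n)
G_lehmann = sum(np.outer(C[:, n], C[:, n]) / (omega - eps[n]) for n in range(4))

print(np.allclose(G_direct, G_lehmann))  # True
```

The eigenstate projectors expand the identity, so the eigenvalue simply "moves down into the denominator", exactly as described above.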
And if you walk through the algebra, you will see that there is a relation between the Green's function for this object and the original Green's function for a simple, non-frequency-dependent Hamiltonian. It's essentially given by this equation. So this is our unperturbed Hamiltonian; for the time being we only include the ionic potential, the other terms can be added later. This is our unperturbed Hamiltonian, H-zero. This here is the term that we add, the exchange-correlation part; I shouldn't write H, I've used the symbol H although it's not a Hamiltonian, it's obviously frequency dependent. Then you can show that in this case the Green's function is given by this equation, which is the so-called Dyson equation. So the Green's function is given by G-zero, this is the original propagator, the original Green's function, plus G-zero times the self-energy times the interacting Green's function. So, where is this? Yes, here. The Green's function is exactly the same object we had before; it's nothing new. You can Fourier transform this one-particle Green's function to time, and then you get two terms. Well, note that there's a shift of the poles here, but essentially you get two terms: the particle propagator and the hole propagator. And this is again a slide we already had. So if you Fourier transform this to time, nothing more than that, you get this representation for the Green's function, and this one you have already seen. So this independent-particle Green's function is a projection onto a previously unoccupied state, a state that was unoccupied in the ground state of H-zero. Then we have the propagation; in the Schrödinger picture you would have e to the i epsilon-a times the time difference as the phase factor that enters, and then we pop the orbital a back in at the second position, r2. And this is the hole propagator; it's exactly the same object we had before.
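The Dyson equation stated here can also be checked directly. In this sketch the self-energy is modeled as a static Hermitian matrix (a deliberate simplification; the real self-energy is frequency dependent), which is enough to see that the dressed resolvent satisfies G = G0 + G0 Σ G:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H0 = rng.normal(size=(n, n)); H0 = 0.5 * (H0 + H0.T)          # unperturbed Hamiltonian
Sigma = rng.normal(size=(n, n)); Sigma = 0.5 * (Sigma + Sigma.T)  # model (static) self-energy
omega = 0.3 + 0.05j

G0 = np.linalg.inv(omega * np.eye(n) - H0)           # bare propagator
G = np.linalg.inv(omega * np.eye(n) - H0 - Sigma)    # dressed propagator

# Dyson equation: G = G0 + G0 @ Sigma @ G
print(np.allclose(G, G0 + G0 @ Sigma @ G))  # True
```

Multiplying the identity (ω − H0 − Σ)G = 1 from the left by G0 reproduces the Dyson form, which is all the algebra referred to above.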
So, what I'm saying is that this definition of the Green's function that I've given you here is completely consistent with what we had before when I talked about Green's functions. Here we project onto a previously occupied state in the ground state, then we propagate the hole with the Schrödinger phase factor and pop the hole back in. So, it's needless to say that you can actually derive the self-energy exactly by drawing, in this case, all diagrams that have one incoming line and one outgoing line. Very similar to what we had before; the only difference now is that you need one open incoming line and one open outgoing line. So, if you draw all possible Feynman diagrams, in this case, that have one incoming line and one outgoing line, you have the same ingredients to play with: again, Coulomb interactions and particle-hole pairs. Any vertex needs to have one incoming and one outgoing line. Now, if you literally construct all possible permutations, of course, this is a pretty hard combinatorial problem, so the number of diagrams rapidly grows intractable. But if you draw all possible diagrams that you can come up with, you're essentially done. So, it's very similar to before. The only difference to before is that the diagrams needed to be closed to get an energy; now the diagrams need to have one incoming and one outgoing line. An electron comes in, it is scattered by all the other processes, I will talk about what happens, and then it comes out again. So, OK, let's look at what these diagrams mean in terms of physics. What happens here is: an electron comes in, and if the electron has enough energy, it can excite something and give away some energy. Maybe I do this on the blackboard. So, an electron comes in. The nice thing about the diagrams is that they are easy to interpret. An electron comes in, and then it actually transfers energy omega to create an electron-hole pair.
And later in time, this is re-absorbed. The point here is that this self-energy is really the sum of all these diagrams. This is maybe not intuitive, but again, be assured, you can derive these results essentially by defining the propagator and then applying Wick's theorem and so on. This is a rigorous theory; it stands on firm footing. So you can actually go through the algebra and see that it works exactly like that. I draw these pictures anyway because they are intuitively easier to grasp. So, an electron comes in and actually transfers energy to an electron-hole pair, and then the electron-hole pair propagates and is annihilated later, transferring its energy back. Well, then obviously the energy of the particle that originally came in is restored, so the particle that comes out has exactly the same energy as it had when it came in. This is actually an Auger process. An electron comes in and creates an electron-hole pair; now we actually have two electrons and one hole. This is the so-called Auger process, exactly the process you encounter here. If we consider time ordering, there is another process that can happen. It's a little bit more complicated: it's the same Feynman diagram, but in Goldstone diagrams this would be a different diagram. What happens here is quite interesting. You have a vacuum fluctuation where you create two electron-hole pairs. It's a vacuum fluctuation: you have the ground state, and then suddenly, as a result of a vacuum fluctuation, and we had this in the correlation energy, these were the MP2 diagrams, you create two electron-hole pairs. Then this incoming electron annihilates against this hole, the energy, here's an energy omega, is transferred, this one annihilates against that hole, and that's it. It's another process that is possible. So, in Feynman diagrams, both processes are included here. This is a second-order process.
So, the electron flies around and then it loses energy to the electron-hole system: it excites an electron-hole pair. The two electron-hole pairs actually propagate, annihilate and create another electron-hole pair, we had exactly the same kind of diagrams just before for the correlation energy, and then they are re-absorbed by the original electron. A third-order process is also possible. Now, there are many more diagrams we can draw. This here is a tricky one. Here the electron flies in, emits a photon, the electron continues, emits another photon, and then the photons are re-absorbed in a crossed fashion. That's also possible. This is another simple diagram: you excite an electron-hole pair, and the electron and the hole actually interact with each other. This is about excitons. You have an electron and a hole, they travel around, and suddenly they experience the Coulomb attraction; they are attracted to each other. So one emits a photon here and the other one re-absorbs the photon, which means it changes its state and takes up the energy. These diagrams here are important because, as we will see in a minute, they are not there in the RPA. The RPA, again, as before, is this very simple approximation where you only include the bubble diagrams. So we include exactly the same diagrams we did before for the correlation energy, and we neglect all these diagrams, which contain a lot of important physics. But that's what the GW approximation does. OK, these two diagrams I have already had in my previous talk, just to remind you what they mean. This here: an electron comes in, emits a photon, is actually scattered by the density, and then it continues. Remember, this here is the Green's function, exactly the same object, nothing new, evaluated at equal times. The sum here, for closed diagrams, goes only over occupied states, and the exponent of this phase factor is zero.
This gives a one here, and what remains is just the density. So this term here is really nothing but the Hartree term. It means particles are scattered by the Hartree potential; essentially, this is what it means. And this term is exactly the Hartree term that you usually have in density functional theory and in Hartree-Fock. This term here I've also drawn before: a particle comes in, emits a photon and re-absorbs the photon immediately. This is the exchange term. Again, this here is our Green's function, the independent-particle Green's function, which I've written down here; I've just copied the formula over. The time is the same; Coulomb lines are always horizontal, so there is no change in time, this is zero, the phase factor is one. This is the density matrix divided by the difference in position, r minus r2. So these two simple diagrams are the lowest order; the first-order terms are really Hartree and exchange. Now, in Hartree-Fock, you only account for those diagrams. OK, here's the crux of the GW approximation, and I've told you much of this before, so there's a lot I've already talked about. A good theory should antisymmetrize all diagrams; at least that's my opinion. So, what does that mean? Actually, I will draw this again carefully. We have here an arrow moving out and coming back in, we have here the Coulomb line, and here we have an electron coming in and continuing. Now, there's a second way to reconnect the diagram, and it's this one: it flows out there, comes back in, comes back there, and then you put in the Coulomb line like this. This is the antisymmetrization, because we deal with fermions. It's also sketched here, yeah? The arrows can be connected differently: this arrow can be connected with that one here and comes back in. So, if you antisymmetrize the Hartree term, you get the exchange term.
Now, if you antisymmetrize these RPA terms here, you actually get this term here. So, if you antisymmetrize this line here, if you connect things the other way, connect this arrow with that one here, this arrow with that one here, and draw the Coulomb line here, then antisymmetrization would require you to include that diagram, which is not done in the GW approximation. This was related to the SOSEX term I was talking about before; not quite the same, but related. So, in principle, a good fermionic theory should antisymmetrize all Coulomb interactions. If you use Wick's theorem to derive the propagator, this automatically happens, because the fermionic algebra takes care of everything, and you get all those diagrams, yeah? So, you immediately get all the diagrams that need to be included. In first order, these are all the diagrams; in second order, these are also all the diagrams. So, once we have done this, this is second order, you're done; there are no other second-order diagrams. So, OK, Hartree-Fock theory therefore essentially describes scattering off the electrostatic potential of the other electrons, and this here is the exchange interaction, which has no classical analog. And what we do now in the RPA is that the exchange, this term here, is screened. So, the Hartree term doesn't change; there is nothing that changes the Hartree term. But this term here, where you emit the photon and re-absorb the photon later, is replaced, and in fact these processes are taken care of in the screened interaction as well. So, you have emission of a photon, this photon creates a particle-hole pair as an intermediate state, and this is later re-absorbed; that's first order. Second order is the inclusion of that process, and then the inclusion of this one. So the RPA essentially takes these diagrams into account. So, it changes the exchange to a screened exchange.
So, GW can also be called a screened exchange method, yeah? Essentially, it does nothing to the Hartree term; again, this is very important to remember. It only changes the bare exchange interaction to a screened exchange interaction. That's the core of the GW approximation. The other point is that these Coulomb lines need to be horizontal. So, I've drawn this here a little bit sloppily; I think I had it nicely drawn before. So, here an electron comes in, and the bare Coulomb interaction is instantaneous when it is emitted and re-absorbed. The other thing to consider is that this here is now time dependent. There can be a time difference between emission of the photon and re-absorption of the photon, because you can create this electron-hole pair, which then propagates. All these theories are, by the way, non-relativistic in the sense that we have only instantaneous photons, yeah? The photons themselves don't have any time dependence. This is a second-order process, and there can be a time difference between this point and that point in time. So, there is clearly some frequency or time dependence in the interaction. So, two things happen. We've replaced the bare Coulomb interaction V by a screened Coulomb interaction. And the second thing that happens is that, while the bare Coulomb interaction is instantaneous in time, it operates at equal times, and I already told you that Coulomb lines are always instantaneous, this screened interaction can depend on the time difference. These are retardation effects. I don't know, maybe you've heard a little bit about memory function techniques. No, probably not. Not here, at least. So, OK, diagrammatically you usually draw this in the following way. You replace the bare Coulomb interaction by a double wiggly line, and this double wiggly line is essentially a resummation of all these diagrams: the bare Coulomb interaction, the first-order process, the second-order process.
So, making W time dependent means that when we Fourier transform it to frequency space, it will necessarily be frequency dependent. If it depends on the time difference, you can Fourier transform with e to the i omega (t1 minus t2), integrating over t1 minus t2, and you get a frequency dependence. And so this is the final GW equation, which Hedin derived; well, Hedin didn't really derive it. Hedin writes in his abstract that it's just a recapitulation of existing literature. So, this is the bare exchange. This is the density matrix gamma, the density matrix between r and r-prime, and this here is the bare Coulomb interaction acting at equal times. Actually, you can insert the Green's function here into the frequency integration, and you will see this is what comes out. This here is essentially GV, the Green's function times the bare Coulomb interaction. In GW, you do only one thing: you replace the bare Coulomb interaction by a screened Coulomb interaction. And this here is our screened Coulomb interaction. So, you replace that by a screened Coulomb interaction, and this becomes frequency dependent. The reason why it's called GW is because you now have the Green's function at frequency omega-prime minus omega, times the screened Coulomb interaction at frequency omega. So, this is our self-energy: we approximate the self-energy by this very compact approximation, GW. Bare exchange is GV; this is GW. You could name it like that. It is, by the way, a real nasty thing that Hedin called this the GW approximation. For many papers, if you write GW in your abstract, the editors, nowadays it's a little bit better, will flag it and say: please spell out your approximation, what does GW stand for? Which you can't really do, because it's just the Green's function G times the screened potential W.
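In symbols, and in one common convention (signs, prefactors, and the positive infinitesimal η vary between texts), the GW self-energy is the frequency convolution just described:

```latex
\Sigma^{GW}(\mathbf r,\mathbf r';\omega)
  \;=\; \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega'\;
        e^{i\omega'\eta}\;
        G(\mathbf r,\mathbf r';\omega+\omega')\;
        W(\mathbf r,\mathbf r';\omega') .
```

Replacing the frequency-dependent W by the bare, instantaneous Coulomb interaction v collapses the ω' integral onto the occupied states and returns the bare exchange, Σx = −γ(r,r′)/|r−r′|, i.e. "GV"; so the only change from Hartree-Fock exchange to GW really is v → W.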
You can write in parentheses "Green's function times screened interaction", but then they are not happy either. So, you have to insist that it is OK to write GW. OK, let's recap. In Hartree-Fock theory, you have a bare exchange interaction between electrons and other electrons, or electrons and holes. And the very core of solid-state physics is that the bare interaction is screened by the other electrons. So, we have a sea of electrons that screens the interaction. That's really what happens: essentially, this electron is screened by all the other electrons. OK, this is my recap. This screened interaction is nothing but: you have a Coulomb line, a photon coming in; the photon excites an electron-hole pair and annihilates the electron-hole pair again. This is the first-order process. Second-order process: the photon comes in and creates a little electron-hole pair. This is the polarizability; we walked through that in the previous talk. This bubble is the polarizability. Then you emit the photon again, you can create another electron-hole pair and re-absorb it, and that's the next-order process. So, this is the bare Coulomb interaction, this here is the first-order process, this the second-order process. And this is a geometric sum, right? This factor here is what we always add. It's a geometric sum, and this is really the physics behind it. There's no mystery: it's a geometric sum, and geometric sums can be done. So, this is our geometric sum, and when we sum up the geometric series: W equals V times (one minus chi V) to the power of minus one, where chi is the polarizability and V the bare Coulomb interaction. Or we can write it in three different ways, and confusingly, we find all of them in the literature. They are all equivalent; they are just primitive algebraic rearrangements. One of them is this one: W equals V plus V chi V. This looks like a Dyson equation.
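That the term-by-term bubble sum really is a geometric series with this closed form can be checked with matrices. A small sketch (arbitrary symmetric matrices stand in for V and the polarizability; the rescaling of chi just guarantees the series converges):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
V = rng.normal(size=(n, n)); V = 0.5 * (V + V.T)        # model bare interaction
chi = rng.normal(size=(n, n)); chi = 0.5 * (chi + chi.T)  # model polarizability
chi *= 0.25 / np.linalg.norm(chi @ V, 2)                # ensure the series converges

# Closed form of the geometric series: W = V (1 - chi V)^(-1)
W_closed = V @ np.linalg.inv(np.eye(n) - chi @ V)

# Partial sums: W = V + V chi V + V chi V chi V + ...
W_sum, term = np.zeros((n, n)), V.copy()
for _ in range(200):
    W_sum += term
    term = term @ chi @ V

print(np.allclose(W_sum, W_closed))  # True
```

The same few lines also verify the rearranged form W = V + V chi W, which is the Dyson-like way of writing it.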
It's the one we had for the electrons before: G equals G-naught plus G-naught sigma G. And actually, this is very closely related to the particle-hole propagator, the polarization propagator; it's the same kind of thing, only for bosons. Remember the Dyson equation for the Green's function, which I wrote down before: G equals G-naught, the independent-particle one, plus G-naught sigma G. That's electron propagation; it describes how your electrons propagate. This here describes how the bosons propagate. Essentially, this is the equation for W: the bare Coulomb interaction plus V chi V. So this is how the bosons propagate in the presence of the fermionic sea. These are ultimately almost the same equations; well, they are algebraically the same structure, and they mean the same thing. This is the interacting fermionic propagator, and this is the interacting bosonic propagator. So this just follows from the previous slide: geometric series done, and we are done. OK, a few remarks I want to make here. This may look a little bit like hand-waving physics, and that's the way I presented it, but you can use Wick's theorem and everything we discussed before to calculate these things properly, and you get the same result. So this might look like cartoon physics, but of course there's sound mathematics behind it. The weights here are different than in perturbation theory; that's the other thing to note. The weight of this diagram is one, the weight of this diagram is one, and the weight of this diagram is one. Previously, for the correlation energy, we also had first-order and second-order terms, and there the weights were one half, one third; so the weights are different, but that's also what you get from Wick's theorem and the proper manipulations. It's sometimes confusing, but you can actually make connections between both theories by taking derivatives, and then you see that these factors go away. But that's another story.
So now we are really ready to do practical calculations. Just remember that our new code can do the independent-particle polarizability very efficiently. This is an equation we already had; now the time axis is turned around, so time propagates from here to here. A photon comes in, creates an electron-hole pair, and these propagators you can describe by the independent-particle Green's functions. And this is the point-wise contraction I was talking about before; this is a way to accelerate the calculation of the independent-particle polarizability. So we can use exactly the same tricks we used before for the correlation energy to speed up the calculation. We now have a GW code that scales with the cubic power of the system size and linearly in the number of k-points, because all the tricks from the RPA correlation energy can be reapplied here. All the ingredients are the same; there's really nothing mysterious about it. But we have neglected a lot of important physics. Second-order effects like this one have been neglected; I've already told you about this process: an electron comes in, emits a photon, continues its propagation, emits another photon, and then first the first photon is re-absorbed and then the other one. We have omitted these kinds of diagrams, and there's no way to reintroduce them other than really calculating them. These effects have also been neglected: we create an electron-hole pair, and the electron and hole exchange energy via a photon. This is what you need to describe excitons. So we have an electron here and a hole here, and the densities of these two interact via the Coulomb interaction, which is actually attractive. We have neglected this; that is what you usually do in the GW approximation. So these terms we have neglected, and these terms we have neglected. Now, confusingly, there are a lot of names for what we have neglected.
The GW community talks about something called the vertex correction. If you ever come across this, you might read that there is a vertex in the self-energy that needs to be taken care of. Actually, there are two vertex corrections: one is in the self-energy and one is in the polarizability. One takes care of these guys — that's the one in the self-energy — and one takes care of that guy. So, GW band gaps. As you will see in a minute, GW is the very same theory we applied to the correlation energy, and the entire strategy is exactly the same. The only property we need is the screening properties of the material — we need this chi naught. So from where do we get the screening? We take DFT. In the very same manner we did the RPA before to get the correlation energy, we now use a density functional theory calculation as the reference. So we set out from the DFT calculation and calculate our orbitals. This is our DFT equation. We again need occupied as well as unoccupied states, because the polarizability involves sums over occupied as well as unoccupied states — or, alternatively, the Green's function: we need to propagate holes and electrons, so we need the unoccupied states. So we do a DFT calculation, then we exactly diagonalize the Hamiltonian to get all states, occupied as well as unoccupied. Then in our code — well, this depends a little bit — we can determine the Green's function. From the Green's function, G times G gives the independent-particle polarizability, and from the polarizability we get W: W is equal to V plus V chi V. We solve this equation, we get the polarizability, and essentially we are done. Now, there's one more thing one usually does: one uses perturbation theory to estimate the quasi-particle energies.
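That pipeline — one-electron energies and orbitals in, chi naught and W out — can be sketched in a few lines. Everything below is a made-up stand-in for an actual DFT calculation (the names eps_occ, M, and the toy Coulomb kernel are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
n_occ, n_unocc, n_basis = 3, 5, 8
eps_occ = np.sort(rng.uniform(-10.0, -5.0, n_occ))     # occupied levels (made up)
eps_unocc = np.sort(rng.uniform(-2.0, 5.0, n_unocc))   # unoccupied levels (made up)
M = rng.standard_normal((n_basis, n_occ, n_unocc))     # toy transition matrix elements

# static independent-particle polarizability: a sum over occupied i and
# unoccupied a -- this is why GW needs *all* unoccupied states
chi0 = np.zeros((n_basis, n_basis))
for i in range(n_occ):
    for a in range(n_unocc):
        denom = eps_occ[i] - eps_unocc[a]              # always negative here
        chi0 += 2.0 * np.outer(M[:, i, a], M[:, i, a]) / denom

v = np.diag(1.0 / (1.0 + np.arange(n_basis)))          # toy diagonal Coulomb kernel
W = np.linalg.solve(np.eye(n_basis) - v @ chi0, v)     # screened interaction
assert np.allclose(W, v + v @ chi0 @ W)                # Dyson equation for W holds
```

The double sum over occupied and unoccupied states is exactly why the exact diagonalization step has to produce the full spectrum, not just the occupied orbitals.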
So starting from DFT, we now replace the exchange-correlation potential by the self-energy — by the more accurately determined self-energy. To build the self-energy we need all these ingredients: G times W gives us this guy. To save computational cost, this is usually solved perturbatively: we use just first-order perturbation theory to estimate the quasi-particle energies. So the zeros of this expression — where the left-hand side equals the right-hand side — determine essentially where your quasi-particle energies are. Alternatively — this is an equivalent formulation — you can determine the poles of the Green's function. So this is the Green's function, G naught is the non-interacting Green's function, and this is our Dyson equation. In G naught W naught, you replace G in the Dyson equation by G naught. And alternatively you can determine the poles of this guy here, which is exactly equivalent to determining the zeros of that guy there — you can walk through the algebra, it's very easy. So the steps again: first a DFT calculation, where you calculate the occupied and unoccupied states; then VASP calculates the polarizability; then we calculate W from this polarizability; then we build the self-energy, G times W; and then we are essentially done. There's a slightly more refined version: we can update the one-electron energies in the Green's function, and that approximation is commonly called GW naught. So we only iterate the poles in the Green's function. You don't update W — W is still from DFT — but we iterate the poles in the Green's function. Now somebody might ask why we do this. It's just experience that this is slightly better. So you iterate the poles in the Green's function until you are self-consistent. What does that mean?
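The first-order perturbation step can be written in three lines. This is the standard linearized quasi-particle equation; the numbers below are made-up stand-ins, not results for any real material:

```python
# Linearized quasi-particle equation as used in G0W0:
#   E_qp = eps_dft + Z * (Re Sigma(eps_dft) - Vxc),  Z = 1 / (1 - dSigma/domega)
# All inputs here are hypothetical illustration values.

def qp_energy(eps_dft, sigma, dsigma_domega, vxc):
    Z = 1.0 / (1.0 - dsigma_domega)   # renormalization factor, typically ~0.7-0.8
    return eps_dft + Z * (sigma - vxc)

# hypothetical valence state: DFT level at -1.0 eV, self-energy slightly more
# attractive than the exchange-correlation potential it replaces
e_qp = qp_energy(eps_dft=-1.00, sigma=-12.50, dsigma_domega=-0.25, vxc=-12.00)
print(round(e_qp, 3))   # prints -1.4
```

The Z factor is what the "first-order perturbation theory" buys you: instead of searching for the zero of the full frequency-dependent equation, you Taylor-expand the self-energy around the DFT eigenvalue.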
You keep W fixed — W is from DFT — but you iterate this quantity here, or rather the positions of the poles, until you are self-consistent. So you go back: the chi naught remains the same, it's DFT; W remains the same, it's DFT; we only update this guy here, the yellow guy — we update the one-electron energies in this guy. This is essentially a slightly better, empirically slightly more accurate approach. So these are the two things one does. For G naught W naught, in a code like VASP you do three steps: you calculate the DFT ground state, then you do an exact diagonalization to calculate all orbitals, including the unoccupied orbitals, and then you just tell the code to do the GW calculation. You have to determine how many frequency points you use, because — here on the blackboard you see it — W depends on the frequency, and you have to choose how you discretize your frequency grid. So you have to choose how many frequency points you use; the default, I think, is 50, but usually you tell the code NOMEGA, the number of frequency points you want to use. This step then gives you the quasi-particle energies you want. So there are some parameters you need to control, like the number of bands (I recommend just an exact diagonalization), the number of frequency points, and some cutoffs, but in principle the calculations have now become, I think, very straightforward in most codes. Okay, why do you do that? Well, I told you before: you want to improve the band gaps. The DFT band gaps are often dreadful. This is a fairly old slide — I haven't updated it in a long time, but by and large what I show here is still quite okay. So these are the band gaps for silicon, gallium arsenide... actually, I think this is my updated slide. Let me check. Why is this not updated? I thought I updated it recently. No, it's an old slide.
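For orientation, the three-step VASP workflow corresponds roughly to INCAR fragments like the following. I am quoting the tag names (ALGO, NBANDS, LOPTICS, NOMEGA) from memory, so treat this as a sketch and check the current VASP manual before using it:

```
! step 1: standard DFT ground state
ISMEAR = 0 ; SIGMA = 0.05

! step 2: exact diagonalization, including many unoccupied bands
ALGO    = Exact
NBANDS  = 512          ! plenty of empty states for the polarizability
LOPTICS = .TRUE.

! step 3: the actual GW step
ALGO   = GW0           ! or G0W0 for the single-shot variant
NOMEGA = 50            ! number of frequency points (the default mentioned above)
```

Each step is a separate run; the second one supplies the occupied and unoccupied orbitals that the GW step reads back in.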
Anyway, you see these are the DFT band gaps. This is a double-logarithmic scale, but you see that zinc oxide should have an experimental band gap of 3.6 electron volts, and you get something like 0.7 electron volts. It's really dreadful, DFT. If you do GW, you get a much improved description. Actually this is not quite up to date: current calculations are hitting something like 3 electron volts with GW, pretty close to experiment. But by and large you immediately recognize this is a dramatic improvement. If you do a single shot, G naught W naught, you actually get errors in the band gap of only 8.5%. If you also iterate the Green's function, the agreement generally gets better. Good. Well, to confuse you, there are a lot of recipes to improve the accuracy — hopefully improve it. Why is that so? First, we don't want to rely on the DFT starting point, right? We would like a scheme that doesn't depend on the input orbitals and the input one-electron energies. That dependence is not nice. So there are recipes to get rid of it, and these are called quasi-particle self-consistent GW schemes. They were first pursued by van Schilfgaarde and Kotani. Essentially it's a recipe for how to update the one-electron energies as well as the orbitals. They are much more expensive; in terms of practical applicability they are currently limited, with most codes, to something like eight atoms. I mean, with our new cubic-scaling GW code, we can easily do 100 atoms for this kind of system. Anyway, it's quite simple again: you start with DFT, you calculate all orbitals including the unoccupied orbitals, and then you do a GW step — and the only thing you change compared to the previous case is that you replace the GW by quasi-particle GW, and the code takes care of it. So it's pretty straightforward to do these quasi-particle GW calculations. Is it necessary? That depends.
The orbitals that you get from DFT are sometimes completely wrong, okay? And this is an example: barium titanate. Black is the DFT result. So this is the density of states — these are the occupied states, this is the Fermi level, these are the unoccupied states. This is just the calculation for barium titanate, and the black line is the DFT density of states for the unoccupied states. If you now switch to GW, you see a slight change, but essentially what happens is that the band gap just opens. The black line is just shifted to yield essentially the red line. So it's pretty much the original DFT result except for a rigid gap correction — a kind of scissor correction, where you cut it off with a knife here, right, and shift the black line over there. So there's really not a lot happening. And in this case, the self-consistent quasi-particle GW, where you allow the orbitals to be self-consistently updated, also gives very similar results. Qualitatively, I would say, this is more or less the same result. But there are cases where DFT is crazily wrong. This is, for instance, lanthanum aluminate. Lanthanum is an f-electron system. The first thing to note — this is lanthanum aluminate — this is the DFT result. Then you switch to the GW approximation; this is the red line. And the f states — these are the f states here — are still pinned very close to the Fermi level. That turns out to be completely wrong. If you now do self-consistent quasi-particle GW, the f states shift way above the conduction-band onset. So here the orbitals that we have from DFT are just wrong, and we therefore need to do a self-consistency on the orbitals and on the one-electron energies. But by and large, this is not super important — you usually get only small quantitative changes. So this is one case where we noticed it. Actually, if you ask an expert — we are back to this point, yeah?
If you ask an expert, he will tell you: lanthanum aluminate — no, no, this is going to be screwed up by DFT. It's actually not drastically bad for the total energies, but clearly for the excitation spectrum there's a very large difference between the red and the blue curves. Here they are really very similar; here they are really substantially different, because the f electrons are not nicely described by DFT. I'll skip this. Actually, we did this quasi-particle GW carefully with basis-set-converged data — we converged with respect to the basis-set size. And the first time we used it, we were very disappointed, because the gaps for typical materials were actually far worse than with the G naught W naught or GW naught approximation. That was a big disappointment. It turns out that if you put the right quasi-particle energies into your Green's function, then your polarizabilities come out wrong. Okay, this is here: if you do a fully self-consistent calculation, where you update the one-electron energies and the orbitals, these are your polarizabilities, the static dielectric constants. This here is the experiment, and you see these are substantial errors: your static screening is just way off. DFT without self-consistency does better. So this is a large error. And then we argued that in this case you have to go beyond the GW approximation and include these effects — these are the excitonic binding energies between the electron and the hole. So if you put in the right quasi-particle energies to start with — if you have very good quasi-particle energies, very good band gaps — you need to account for these excitonic effects. This is obviously real physics: you should include this diagram here. An electron propagates and interacts electrostatically with the hole, so in principle you should include this interaction.
We have just thrown it away, and it turns out that for DFT orbitals this is accidentally allowed. So if you use DFT orbitals, you can safely disregard this diagram, but if you use something like self-consistent quasi-particle energies, you need to include it. And that was our result here: once we included the excitonic effects, the band gaps were actually pretty precise — probably as good as it gets with simple approximations. So I've told you before that GW is really a kind of stupid approximation. It only includes these bubble diagrams; it neglects a lot of things. One should include this here, the electron-hole interaction. We tested this back in 2007 — yes, 2007 — and it turned out to be really important if you do self-consistency. And only recently we also looked at this class of diagrams. I already showed you that in the standard RPA this is also not included. This is the crossed diagram, where an electron comes in, emits a photon, then another photon is emitted; the first one is reabsorbed first and the second one is reabsorbed later. This second-order diagram is exactly the second-order screened exchange I was talking about at the level of the correlation energies: if you close up this diagram here, it's exactly the same diagram we had before. I'm not going to draw this now because it takes too much time. But essentially this is the second-order screened exchange, which the usual GW approximation does not take into account. Okay, to wrap up: what is important here? Well, if you ever come across a GW paper, be super careful about what people have really done. Most people do DFT and then G naught W naught; sometimes they do GW naught. This neglects a lot of effects. It neglects this here, but that's okay — that only needs to be included if you become self-consistent. So if you do single-shot G naught W naught or GW naught, you're fairly safe that this can be neglected.
That has to do with some kind of error compensation: if you put in the DFT band gaps and DFT orbitals, the neglect of this diagram is justified. Well, can you even show on paper that there is a cancellation between this diagram and the gap-opening effect that you have in GW? Yes, you can show this. The derivation was actually shown to me once, so I cannot give it to you here. But it can be shown that when GW opens the gap and moves it towards the correct value — the DFT gap is much too small, right — then you should also include this diagram, because it almost cancels the gap opening. This is another diagram, and this one is worse, because it's a second-order diagram that we have neglected from the outset. I've told you before that for the correlation energy, this diagram reduces the absolute correlation energy by 30%. RPA overcorrelates: it gives you 130% of the correlation energy, so instead of minus two electron volts you get something like minus 2.6 electron volts. So this guy should massively reduce the correlation. And we did this only recently: we looked at this for the ionization potential of different materials. What you see on this slide: the red line shows the results where we have included only this diagram, and then we have also included the vertex in the self-energy — the vertex in the self-energy is exactly the effect of this diagram. The black bars are the experimental ionization potentials, and by and large this term massively improves the ionization potential. So it's important for ionization energies — in particular for something like small molecules. This term really shifts the ionization potential. It also shifts the electron affinities by roughly the same value, and therefore brings better agreement with experiment.
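To make the quoted percentages concrete, here is the toy arithmetic — a hypothetical −2 eV correlation energy, not data for any specific system:

```python
# RPA alone gives ~130% of the correlation energy; the second-order
# screened exchange diagram is positive and takes back roughly 30%.
E_true = -2.0                      # eV, hypothetical correlation energy
E_rpa = 1.30 * E_true              # -2.6 eV: RPA overcorrelates
E_sosex = -0.30 * E_true           # +0.6 eV: reduces |E_c| by 30% of the true value
print(round(E_rpa + E_sosex, 6))   # prints -2.0
```

The sign matters: the correlation energy is negative, so the screened-exchange correction is a positive contribution that pulls the RPA result back towards the correct value.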
It also improves the d binding energies, the binding energies of the d electrons. So this is the standard GW approximation, where we have neglected this vertex correction, and the blue one is the one where we have included the vertex correction. So I think I will skip this. Let me finish off with one final slide that just highlights again how difficult it is to do first-principles calculations for materials with this many-body perturbation theory. Okay, what's this slide about? This is something we published only recently, in 2014, and it was a little bit of a shock for us, but it comes back to something I already told you at the very beginning: the huge basis-set dependence. Correlation energies converge very slowly with respect to the basis set — remember, I told you that the correlation energy always converges exactly like one over the basis-set size. And this is actually something Steven Louie pointed out already some time ago: that this causes very slow convergence of the quasi-particle energies, in particular of the quasi-particle energies of tightly bound d electrons. Well, he didn't look at exactly this system, but he looked at this for zinc oxide. And here's how the quasi-particle energies converge with respect to the inverse of the number of orbitals that you include in your calculation. These are plane-wave orbitals, but this also applies to LCAO-type orbitals. What you see here: this here is something like 100, 200 plane waves, and here we are approaching something like 2000 plane waves. What you see is the quasi-particle binding energy predicted for the gallium 3d state. The red one is for the valence-band maximum at the gamma point, this is for the conduction band at X, and this is for the conduction band at gamma. If you look in the literature, most calculations are done at this basis-set size, something like 100 plane waves; some are done at 200 plane waves per atom. So we are kind of here.
And that's actually your basis-set-converged value — or even that is not converged; you still need to extrapolate linearly. So with 200 plane waves, you are nowhere close to convergence. That means if you read any GW paper, you have to be very careful about what people have actually used; or if you use the method yourself, you have to be super careful and understand that the basis-set convergence is very slow. Ridiculously slow. That also applies to absolute correlation energies, but there it's often not so bad, because you typically subtract correlation energies in a solid from correlation energies in an atom, and there is some cancellation to some extent. For quasi-particle energies this is not the case, because the correlation energies of the 3d electrons and of an s electron behave very differently. So GW is intriguingly difficult to converge, and that explains why you find such a wide variety of different results for different materials in the literature. DFT is easy to converge — people have asked me about this. DFT converges exponentially with the basis-set size: once your basis set is good enough, your DFT energies and your DFT one-electron energies converge exponentially with increasing basis-set size. This is in particular true for plane waves, and to some extent also true for LCAO basis sets. This is not so for a correlated method — and that includes any method that uses the RPA, any method that uses MP2, any coupled-cluster method. So this is really a big issue: basis-set convergence is hilariously slow. Again, I think this is related to the cusp condition I was alluding to half an hour or an hour ago. It's related to electrons expelling each other via the Coulomb interaction, which has a singularity at r equal to r prime — this one over |r minus r prime| singularity. That is exactly the origin of the slow basis-set convergence, and for quasi-particle calculations it's probably most dramatic.
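The "extrapolate linearly" step is literally a straight-line fit in 1/N. A sketch with synthetic data that follows the 1/N law discussed above (these are not the zinc oxide numbers from the slide):

```python
import numpy as np

# synthetic quasi-particle energies obeying E(N) = E_inf + c/N
n_pw = np.array([200.0, 400.0, 800.0, 1600.0])   # plane waves per atom
e_qp = -17.0 + 600.0 / n_pw                      # made-up data, eV

# fit E against 1/N and read off the complete-basis-set (N -> infinity) limit
slope, e_inf = np.polyfit(1.0 / n_pw, e_qp, 1)
print(round(e_inf, 3))    # prints -17.0
```

Note how far the raw 200-plane-wave value (−14.0 eV here) sits from the extrapolated limit — that is the kind of error you silently make if you stop at a "typical literature" basis-set size.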
And unfortunately, there's one more issue: the potentials that we distribute, even now, have quite large errors — well, you have to read this paper. I'm not going to discuss it; I've now talked for three hours and I'm getting really, really tired, you can believe me. So I'm not going to discuss this. In this paper, we discuss that you have to be super careful about which potentials you use, and that applies to almost any other code as well. So GW seems like a black box, and I've shown you that you can do it easily, but you have to be very careful to use appropriate potentials that handle these issues. So from a practitioner's point of view, GW is a powerful method to predict band gaps. It is an approximate method, however; specifically, it has two important diagrams missing. This second-order diagram, which you would encounter in second-order perturbation theory, is usually not present in standard GW calculations. We can calculate it, but it's much, much more expensive — it scales like n to the fifth, at least. So it has the same computational scaling as the SOSEX contribution I was alluding to before for the correlation energy. In practice, hardly anyone includes that correction. That's okay, because it doesn't matter so much for the band gap; it does, however, influence the position of the electronic states, and of localized states in particular. Anyway, most people neglect it. The other term is, unfortunately, equally expensive to calculate. It's a term that only pops up in third-order perturbation theory, so it shouldn't be as dramatic, but still it's very important in solids. This term is actually the excitonic interaction between the particle and the hole. It is usually neglected in GW, and that's okay if you don't do GW self-consistency. So from a practitioner's point of view, you can most likely rely on this approach, G naught W naught, which seems to be sensible for almost all materials, or GW naught.
This here also seems to be okay, but it's already so much more expensive that in practice you hardly ever use it. So this here and that here, on top of PBE, seem to be right now, from a practitioner's point of view, the best approaches to predict band gaps in solids. There are some other recipes around. You can try to do it on top of the hybrid functionals, HSE, but I'm not a great fan of this. Actually, I suspect that if you do a careful evaluation, it's less accurate than this one on top of PBE. So this is a little bit of a practical recipe for how you do the calculation, and my favorite one is really this one. So what should you read if you want to get into this? At the beginning I already gave you three books; these are another three books you might want to read, so there will be a lot to read. Fetter and Walecka. The strange thing is that there's a kind of schism between the Green's function community and the quantum chemistry community, so there's not one book that will teach you all of it. That's really a problem, because in reality all of this is the same, yes? At least I haven't come across such a book yet. I think there is a very smart book by Friedrich and Bechstedt, but that's a Green's function book: in principle, he introduces all the algebra of second quantization, but then immediately goes towards Green's functions — many-body Green's function approaches. Fetter and Walecka is a standard book, and you see these are different books than the ones I recommended for the quantum chemistry methods. The last one is really a fun book to read, "A Guide to Feynman Diagrams in the Many-Body Problem". It's a book that comes from many-body theory and then thinks about its application in solid-state physics, and it's kind of fun. It's maybe not the most rigorous, so to say — it uses a lot of gut feeling, if you want — which is okay. So it might actually be okay to read that.
So these are books that deal mostly with Green's function approaches, and probably there are many more, but I haven't read them — I must admit I haven't even read all of those; I'm usually too lazy. And with that, I really thank you for your attention, and we are ready to go.