Okay, so thank you very much for the introduction, and thank you all for the invitation as well. Let me say a few quick words about where I work before I start. I moved recently and am currently working at imec. imec is a world-leading R&D lab for nanoelectronics and digital technologies. It's quite big; it's in Leuven, close to the university. It was a spin-off of this university 35 years ago, and by now there are about 4,000 researchers working there, of almost a hundred different nationalities. We have research groups affiliated with basically every Flemish university, and then we have a lot of sites all over Europe, quite a few in the US and quite a few in Asia. So basically we work together with all the big electronics companies, and we build new technology for all the CPUs that sit in your laptops and in your mobile phones, and also for the memory. But that's not all that imec does: imec works on a full list of all kinds of things, photovoltaics, power switches, life sciences, artificial intelligence. But what we are mostly working on is the CMOS part, basically the memory and CPUs that we use everywhere. All of that is coupled with a lot of infrastructure. We have some very advanced cleanrooms where we bake wafers and make new
samples, and then there is a very large set of spectroscopy tools. Very interesting is also the new AttoLab that we're building, where we will be doing pump-probe spectroscopy using beams of up to roughly a hundred electron volts, on time scales from tens of attoseconds up to hundreds of picoseconds. So that's where I'm working now; most of the work I'm going to talk about, though, is from before this. Basically we will be talking about GW, and we have the big advantage of the two nice talks just before mine, so you all understand exactly what GW is about. There are a lot of question marks still, so I'll just keep the theory slides in there as well. So let me start with my little historical perspective on GW. GW is quite old already: in 1965, Hedin wrote down the equations. Then it took about 20 years before the first actual calculations were done, first on some very small systems, silicon, diamond. And then it took another 20 years for GW to become available in actual codes that people could use, first in ABINIT and afterwards in the VASP code. At that point people also started working on molecules, finite-size systems, lower-dimensional systems. Then it took another 10 years, and there was a paper called "predictive GW calculations". So apparently everything before this paper was not predictive; so actually only this much later did people start to wonder: what is the accuracy, what is the precision, actually, of GW? There's this famous statement about solid-state theory that everything works for silicon. This is basically true, right? I've made a lot of pseudopotentials, and you can make a pseudopotential completely full of crap and it still works for silicon: you just get the right lattice constant. The same goes for GW. You can do GW in any approximation, any crappy or very good plasmon-pole model, whatever; it doesn't matter, it will work for silicon, right?
That does not mean that everything that works for silicon works for everything else, right? If you go away from silicon, everything breaks down. Okay, so at that point we thought this was maybe a little bit of a problem, and we started to do this GW100 set, at least for small systems. We did a big review, and I will be talking a bit about this, hopefully also answering quite a few of the questions that we have seen in the last lessons. And now, finally, at this point we arrive at an almost-published paper where we do something systematic. Well, it's not a benchmark; it's not even precision and accuracy. It's just reproducibility: making sure that if we take three different codes and try to do exactly the same thing, same basis set, same pseudopotential, same ground-state DFT calculation, same type of GW, do we actually manage to get the same results from all of this? This is very important, right? If we want to go to high throughput, we need to know that we are doing something right. So this is what I would like to call the GW propaganda slide. The first time this picture was shown, in the mid-1980s, the world was perfect: GW solved all the problems in the world, perfect agreement with experiment. Then about ten years later the agreement with experiment wasn't that perfect anymore; so there's a certain time dependence in GW, apparently. But obviously this was not a problem: we just go partially self-consistent, and the agreement with experiment again is perfect. But that was not the right kind of self-consistency, because you actually need to do something like quasiparticle self-consistency. So complexity started to build up and build up and build up, and we have a comparison with experiments where only a handful of systems were looked at. That was basically prompting the question: how precise are all these results?
Precise means the numerical precision of your calculation: all the numerical approximations that you make, the finite sizes of the integration grids, k-points, whatever. Then there is the question of self-consistency: G₀W₀, which starting point, which level of self-consistency, what kind of GW integration method (we'll get to that), what to compare to. Which basis set, plane waves or local orbitals: do we get the same results, and what's better? What core-valence partitioning, what kind of pseudopotentials? And this is not just a glitch; the list just goes on, I didn't want to show all of it. Maybe we don't care, right? We just converge and tune our parameters until we kind of hit the experimental value and call that a good result. Maybe it's also just a very hard problem. It's only a few years ago that we actually managed, with a very large group of people, to benchmark DFT, and that was not a very complicated benchmark: just elemental solids, basically calculating the lattice parameter of the ground state of the elemental solids. If that was already such a big effort, GW may be very much of a challenge. That's why we basically decided to take one step back and go for molecules, because that's slightly easier. So let me, just to remind you of the two previous lectures, run through the formalism again. This is basically where I usually start. In Kohn-Sham, we know what we're doing, right? We have a density, the density of electrons, which is parameterized by our Kohn-Sham orbitals, and we have an equation which couples this density to an exchange-correlation potential and a Hartree potential, and then we get our Kohn-Sham orbitals and Kohn-Sham energies. If you write down GW in the same way, it's not that different: instead of a density we now have a Green's function.
The Green's function is parameterized, or decomposed, into quasiparticle orbitals. It's a bit more complicated, and there's a sum over all states, but it looks quite similar. And now we have this new equation, the quasiparticle equation: again quasiparticle orbitals, quasiparticle energies. You see quite a lot of similarity between the two. The problem here is the self-energy: we replace this local Kohn-Sham exchange-correlation potential by a non-local, non-Hermitian and energy-dependent self-energy. But if we have that, we can, in all kinds of approximations, now calculate the corrections to the energies. This one we saw before. Now, the big advantage, in contrast to Kohn-Sham, is that instead of a vague unknown exchange-correlation functional, we have a set of equations that we can use to systematically generate approximations for the self-energy. That's what we do: we cross out the complicated term in the vertex function, and everything simplifies to four coupled equations, which give us the GW approximation for the self-energy. Okay. So now, that's where all the misery, or a lot of the misery, starts: this thing needs to be calculated. We need to calculate W, the screened Coulomb potential, and we have to convolve it with the Green's function. And there are like 500 different ways of doing this, right? You haven't seen a lot of them yet. The first one I would like to introduce to you is the exact analytic approach. You can do this for molecules; for solids it is a bit more complicated. What we actually do is calculate the exact spectral representation for W; you have the spectral representation for G, and now all of a sudden this complicated integral, after a bit of complex function theory, just becomes a sum, and we have an explicit expression. Well, not an approximation anymore.
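As a side note before we go through the frequency-integration zoo: the contrast just described, between the Kohn-Sham equation with its local potential and the quasiparticle equation with its non-local, energy-dependent self-energy, can be written in standard notation as:

```latex
% Kohn-Sham equation: local, energy-independent potential
\Big[-\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf r) + v_{\mathrm{H}}(\mathbf r)
   + v_{xc}(\mathbf r)\Big]\,\varphi_i(\mathbf r) = \varepsilon_i\,\varphi_i(\mathbf r)

% Quasiparticle equation: non-local, non-Hermitian, energy-dependent self-energy
\Big[-\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf r) + v_{\mathrm{H}}(\mathbf r)\Big]\,\psi_i(\mathbf r)
   + \int \mathrm{d}\mathbf r'\,\Sigma(\mathbf r,\mathbf r';E_i)\,\psi_i(\mathbf r')
   = E_i\,\psi_i(\mathbf r),
\qquad \Sigma^{GW} = \mathrm{i}\,G\,W
```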
The only approximation that sits there is the finiteness of the basis set. Then there is analytic continuation, plus the plasmon-pole or multipole expansions. Analytic continuation basically means that we do the integration not on the real axis but on the imaginary axis; again a bit of complex function theory, and we get a sum and an easy integral. So, going through them: this is the fully analytic approach, the sum I was talking about. The advantage is that it's exact except for the basis set and has the full analytic structure, but it is very expensive; still, we can do it also for relatively large systems. Analytic continuation: we do the integral on the imaginary axis and expand back onto the real axis to get the self-energy on the real axis again. Then there are the plasmon-pole models, which have been discussed quite extensively already. And finally contour deformation, which is another way: instead of first doing the expansion and then the integral, we first deform the integral in such a way that, again, the integral changes into a sum plus a remainder. This can be made exact, and it allows for more complicated self-energies, but the problem is that it introduces again a lot of integration parameters. And if we want to go high-throughput, to go automatic, that's another problem, because it's another thing we need to test for. So now, benchmarking GW. We started with molecules, and in the first project there were three groups involved. In the end we had five different ways to evaluate the self-energy, and after a project of about four years we managed to get very well-converged ionization potentials and electron affinities. That was the state of affairs at that point.
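To make the analytic-continuation idea concrete, here is a minimal Thiele continued-fraction (Padé) interpolation in the spirit of the Vidberg-Serene recursion: fit a function known only at points on the imaginary axis, then evaluate the continued fraction at a real frequency. The two-pole test function and all numbers are invented for illustration; production GW codes add many numerical safeguards on top of this.

```python
def thiele_pade(zs, fs):
    """Continued-fraction coefficients a_p from the Vidberg-Serene recursion."""
    n = len(zs)
    g = [list(fs)]  # g_1(z_i) = f_i
    for p in range(1, n):
        prev = g[-1]
        row = [0j] * n
        for i in range(p, n):
            # g_p(z_i) = [g_{p-1}(z_{p-1}) - g_{p-1}(z_i)] / [(z_i - z_{p-1}) g_{p-1}(z_i)]
            row[i] = (prev[p - 1] - prev[i]) / ((zs[i] - zs[p - 1]) * prev[i])
        g.append(row)
    return [g[p][p] for p in range(n)]  # a_p = g_p(z_p)

def pade_eval(zs, coeffs, z):
    """Evaluate C(z) = a_1 / (1 + a_2 (z - z_1) / (1 + ...)) bottom-up."""
    t = 0j
    for p in range(len(coeffs) - 1, 0, -1):
        t = coeffs[p] * (z - zs[p - 1]) / (1 + t)
    return coeffs[0] / (1 + t)

# model "self-energy" with two poles on the negative real axis (made-up numbers)
f = lambda z: 1.0 / (z + 1.0) + 1.0 / (z + 3.0)
zs = [1j, 2j, 3j, 4j]                    # imaginary-axis sample points
coeffs = thiele_pade(zs, [f(z) for z in zs])
sigma_real = pade_eval(zs, coeffs, 0.5)  # continued back to the real axis
```

With four sample points the continued fraction here reproduces the two-pole model exactly; for a true self-energy the quality depends strongly on the number of poles, which is exactly the 2-pole versus 128-pole Padé discussion that comes later in this talk.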
This is the state of affairs at present. There have been coupled-cluster calculations done for these molecules, plane-wave results by VASP and by WEST, and CP2K results (that's a local-orbital code with a plane-wave expansion for the density). Stochastic GW is running through the set right now; MOLGW ran it, as did Fiesta and ABINIT. Yambo is also currently preparing to start participating in this project, and in total we now have even more ways of doing the self-energy; there will be one more when Yambo joins. There were also other groups that didn't do GW calculations but used other kinds of methods to calculate ionization potentials; they also took up this set, and by now the total number of data sets is closing in on a hundred. At some point we will write another paper. So this is the current situation. We have a website where all the data is present; at the moment it's down, because I'm writing a new paper and there's too much fresh data to have everything online, but soon it will be back. And there are all kinds of tools to make comparisons between one set, another set, or a subset of sets, and I will show you quite a few of those right now, to see where we actually are and what we can learn from all of this. So this is a little overview of the molecules, just to have a quick look: that's what's included in GW100. It's not just your standard hydrocarbons; we tried to be as complicated as possible: nasty, very ionic systems, ionic hydrides, other types of things. Maybe chemists would not even call them molecules, but if you put these two atoms next to each other, they form a perfectly nice bonded dimer. These are the atoms that are present in GW100. Obviously, there are some big holes here on top.
There's a big hole also for the transition metals; they are still not very well represented. We're actually working on that, so with a bit of luck, in a couple of years we will have an extension of the GW100 set for transition metals. Now, I'm going to show you a lot of plots that look like this. This is what's called a violin plot; it shows you a kernel density estimate of the distribution of a large data set. I do GW calculations not for one system but for all hundred molecules, and I compare them to the results of another code. I have about 50 different data sets for the whole set of molecules, so I could make 50 times 50 of these combinations, but I won't show all of those. Just so you understand the plot: in the middle there is a box plot; the little white dot gives you the median value of the distribution, the box marks the 25% and 75% quartiles, so 50% of the systems sit inside the box; then there are the whiskers, and outside of those points sit the statistical outliers. That's just a quick way to visualize the comparison of two big data sets. So the first thing to do is reproducibility: when I do the same thing with two different codes, do I get the same results? We have two codes now that can do this fully analytic approach, the analytic expression for W, convolution with G in an analytic way, and then just the big sum, and you see we get very nice agreement: for 80% of the molecules the results are indistinguishable, the same HOMO energies. There are a few outliers, but we basically know what's happening.
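The box-plot ingredients just listed (median, quartile box, whiskers, outliers) can be computed directly; this sketch uses the common 1.5 × IQR whisker convention, and the quartile method is whatever Python's `statistics.quantiles` uses by default (the sample numbers are invented):

```python
import statistics

def box_stats(data):
    """Median, quartiles, whiskers (1.5*IQR rule) and outliers of a data set,
    i.e. the ingredients of the box drawn inside a violin plot."""
    xs = sorted(data)
    q1, med, q3 = statistics.quantiles(xs, n=4)     # 25%, 50%, 75% points
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = [x for x in xs if lo_fence <= x <= hi_fence]
    return {
        "median": med, "q1": q1, "q3": q3,
        "whisker_lo": min(inside), "whisker_hi": max(inside),
        "outliers": [x for x in xs if x < lo_fence or x > hi_fence],
    }
```

For example, a set of code-to-code HOMO deviations like `[-0.02, -0.01, 0.0, 0.01, 0.02, 0.5]` (in eV) gives a tight box around zero with the 0.5 eV case flagged as an outlier, which is exactly the picture on these slides.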
They're very complicated systems, and you see, this is my favorite system: if you want to calculate one molecule, try the beryllium oxide dimer; it will give you the most problems. Okay, so then the next bit of reproducibility. Let's take all the codes that can run local orbitals, use the quadruple-zeta-with-polarization basis set, exactly the same basis functions, and look at different ways of doing GW, pushing them all as hard as possible. You see that a large set of the molecules agrees very well; this box plot in the middle has almost shrunk to zero, so 50% of the molecules are all sitting inside that little box, and only some outliers poke outside. So these two codes basically do exactly the same thing. Then there's the comparison shown before with the new implementation of the analytic continuation in Fiesta; it's also doing very well at this point, with only one outlier here and a few on this side, but not much more. If we do the Padé analytic continuation with FHI-aims and manually check all the molecules for convergence, we get very close, and if we do a 128-pole Padé approximation in CP2K, without explicitly testing convergence for every molecule, there's still a very decent agreement. So what about this Padé approximation? You saw these two results already: if you use only a 12-pole Padé approximation for the analytic continuation, the box plot already starts to grow, so a significant number of molecules no longer agree that well. And if you take just a two-pole approximation for molecules, the distribution really gets off, and we also see a sliding of the center. Part of the outliers, in the cases where people did not yet manually check exactly what's going on, come from the Dyson equation solver, or rather the quasiparticle equation solver, picking up the wrong solution. So let's discuss solving the quasiparticle equation.
We have the Kohn-Sham orbital energies, the self-energy, and the exchange-correlation potential, but the self-energy is evaluated at the quasiparticle energy, so this is already an equation that needs to be solved self-consistently. Graphically, this means that I can collect all the linear and constant terms in the red line, and then the correlation part of the self-energy that sits in there is this nasty function. Obviously, if this first intersection is very close to one of those poles, a solver can easily hop to the other side by accident and then converge to the wrong solution. That's something you need to take care of, but we know now what to do. One important check is the slope at the solution that you found: if it's larger than one half, you are usually in a dangerous region. So, the next step is precision. Remember, this is about the error in my final results due to numerical and mathematical approximations. I'm not talking about basis sets here; well, maybe a bit, that comes next. And I'm not talking about which kind of self-consistency: I just decide I'm going to do this kind of GW with this exchange-correlation functional as a starting point. What is the effect of my decisions? The first thing to look at is basis sets, and in general, when we want to compare local-orbital codes and plane-wave codes, we need to do a basis-set extrapolation for both, to make sure that we actually compare the right thing. I cannot compare a quadruple-zeta basis set with a plane-wave basis directly, because that's something completely different, right?
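Coming back to the quasiparticle-equation solver for a moment: the fixed-point iteration and the slope check can be sketched in a few lines. Iterate E = ε_KS + Σ(E) − v_xc, then inspect dΣ/dω at the solution; a slope of magnitude above one half flags the dangerous region near a pole. The model self-energy and all numbers here are invented for illustration:

```python
def solve_qp(eps_ks, vxc, sigma, dw=1e-4, tol=1e-8, max_iter=200):
    """Fixed-point solve of E = eps_ks + sigma(E) - vxc, plus a slope check
    (|dSigma/dw| > 1/2 at the solution -> dangerous region near a pole)."""
    e = eps_ks                     # start from the Kohn-Sham energy
    for _ in range(max_iter):
        e_new = eps_ks + sigma(e) - vxc
        if abs(e_new - e) < tol:
            break
        e = e_new
    slope = (sigma(e + dw) - sigma(e - dw)) / (2 * dw)  # numerical dSigma/dw
    return e, slope, abs(slope) > 0.5

# toy self-energy with a single pole safely away from the solution (made-up numbers)
sigma = lambda w: -2.0 + 0.1 / (w - 5.0)
e_qp, slope, risky = solve_qp(eps_ks=-7.0, vxc=-1.5, sigma=sigma)
```

Here the iteration lands near −7.51 with a tiny slope, so the solution is safe; moving the pole close to the crossing would push |dΣ/dω| past one half and trip the warning, which is the scenario behind the mis-converged outliers on the previous slide.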
So for the local-orbital codes we use standard basis-set extrapolation; let's skip the details, but there are different ways of doing this for different types of basis functions, and we do get to the same extrapolated results. Now, if I look at these local-orbital basis sets, this is basically the error that I introduce with a finite basis set. A split-valence basis set with polarization usually already gives you a very good initial guess for your DFT calculations; this is generally good enough. Ground-state calculations are usually very good at triple-zeta for DFT: the error in DFT itself is usually larger than the error that you introduce with a triple-zeta basis set, so never think about going beyond triple-zeta for pure Kohn-Sham calculations. But you can see that the GW result at triple-zeta is about 0.4 electron volts off, on average, for my HOMO energies, and even quadruple-zeta is still, on average, 0.1 electron volts off from the extrapolated value. These are just numbers to keep in mind, and they also mean we have to do this extrapolation if we want to compare to plane-wave results. Now, we also have plane-wave results, and both WEST and VASP implemented convergence algorithms to extrapolate the results to the infinite-basis-set limit for the plane waves. The nice thing is that we see the center points of these distributions now all align; that's what we generally see: if there is an issue with basis-set convergence, the entire distribution is shifted. Between FHI-aims and TURBOMOLE we see very nice agreement, both with extrapolated data, and for the two plane-wave codes at least the medians are close together, and 50% of the systems fall within 0.1 electron volts if we do the extrapolation.
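One common way to do such an extrapolation is a linear fit of the quasiparticle energy against the inverse of a basis-set size measure, reading off the intercept as the complete-basis-set estimate. The 1/N form and the synthetic numbers below are just one simple choice for illustration; the actual GW100 extrapolations are more careful:

```python
def extrapolate_cbs(n_basis, energies):
    """Least-squares fit of E(N) = E_inf + a/N; returns the intercept E_inf,
    i.e. the estimated complete-basis-set value (1/N -> 0)."""
    xs = [1.0 / n for n in n_basis]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(energies) / m
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, energies))
         / sum((x - mx) ** 2 for x in xs))
    return my - a * mx

# synthetic HOMO energies converging like -10.0 + 30/N eV (made-up numbers)
e_inf = extrapolate_cbs([100, 200, 400], [-10.0 + 30.0 / n for n in (100, 200, 400)])
```

On this synthetic data the fit recovers −10.0 eV exactly, while even the largest finite basis (N = 400) is still 0.075 eV away, which mirrors the 0.1 eV quadruple-zeta residual quoted above.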
So that's a confirmation that, if we do the extrapolation correctly, we get the same results between local-orbital codes and plane-wave codes. Once we have this under control, we can start to compare the different flavors of GW. Here we compare a plasmon-pole model, the Hybertsen and Louie plasmon-pole model, against a numerical full-frequency integration on the real axis, which is extremely expensive and actually not so good. These are again the results from the previous graph, the extrapolated plane-wave results. Now, why is this one shifted to the right? Because it is not basis-set extrapolated. This one is also not basis-set extrapolated, so assuming the same shift would apply, this entire distribution would shift the same way. The main conclusion here is that a plasmon-pole model for something with as complicated a dielectric function as a molecule is not a very good idea. Now we can collect all of this for many codes; let me skip through it, because I have a lot of other things to cover, and just keep this slide that gives you some numbers you can look up later. So now we get to the interesting part. A lot of people already asked about starting points. Someone called GW parameter-free; well, obviously you have to pick a starting point and you have to decide how many levels of self-consistency you implement. In principle the fully self-consistent flavor is parameter-free, but everything before you get there still depends on what you decide to do. So this slide gives an overview of what happens if I change the amount of exact exchange in the starting point, compared to coupled-cluster total-energy differences, CCSD(T): singles, doubles and perturbative triples. I see a kind of linear, monotonous behavior: if I do G₀W₀ on top of Hartree-Fock, I overestimate my HOMO energies.
If I do G₀W₀ on top of PBE, sorry, I underestimate my HOMO energies. Basically this is a sliding line, and somewhere in the middle there's a sweet spot, around 50% exact exchange, which is what is in this BHLYP functional. So that is the cheapest, fastest way to get to the best results with GW. Another interesting thing is that if I include some self-consistency, here self-consistency in G while keeping W at the Kohn-Sham level, this functional dependence shrinks. If I do this on top of Hartree-Fock, that bit of self-consistency pushes me towards the center, and the same goes for PBE as a starting point: again I'm pushed towards the center with a bit of self-consistency. Finally, I already told you there's a zoo of different ways of doing self-consistency. We have quasiparticle self-consistency, with both G and W self-consistent but staying in the quasiparticle picture, quasiparticle orbitals and quasiparticle energies: this underestimates the HOMO energies a bit. Then there are different ways of doing self-consistency only in G. We can do self-consistency only in the energies, keeping the orbitals at the Kohn-Sham level; this was our G₀W₀ on top of BHLYP; and full self-consistency starts to overestimate a bit. So again, if you really want the answer, what's the cheapest, fastest way, at least for finite systems: start with quite a bit of exact exchange. This is basically what happens in the rest of the series; a lot of other systems and a lot of other approaches for calculating HOMO and LUMO energies are also now using this data set.
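The "sliding line with a sweet spot" picture can be turned into a tiny numerical exercise: given HOMO errors at a few exact-exchange fractions, interpolate linearly to where the error changes sign. The error values below are invented for illustration and are not the GW100 data; only the qualitative shape (one starting point errs low, the other high) comes from the talk:

```python
def sweet_spot(alphas, errors):
    """Linear interpolation for the exact-exchange fraction where the
    G0W0 HOMO error crosses zero (toy illustration, invented numbers)."""
    for (a0, e0), (a1, e1) in zip(zip(alphas, errors), zip(alphas[1:], errors[1:])):
        if e0 * e1 <= 0:                   # sign change between the two points
            return a0 - e0 * (a1 - a0) / (e1 - e0)
    return None                            # no crossing in the scanned range

# hypothetical errors (eV) vs. fraction of exact exchange in the starting point
alphas = [0.0, 0.25, 0.5, 0.75, 1.0]
errors = [-0.30, -0.15, 0.00, 0.12, 0.25]
alpha_best = sweet_spot(alphas, errors)
```

With these made-up numbers the crossing sits at 50% exact exchange, i.e. the BHLYP-like regime the talk points to.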
So now that we have the precision, accuracy and reproducibility under control, we can actually see whether it is possible to go to automatic GW. In this lecture series this week you will be learning how to do a single GW calculation, and by the end of the week you will appreciate the amount of knowledge and of understanding of the theory you need to actually get the right numbers out. So the next challenge is: can we wrap this up in such a way that I just provide some scripting and some fancy algorithms, I give it a structure, I push the button, and I get converged GW results? Wrap all the information that you're learning this entire week into a machinery that does this automatically. And by automatic I really mean starting from a structure, without any further human intervention (maybe I decide what precision I want; that would be a good thing), all the way to converged GW results. The advantage is that I don't need to spend all my time manually babysitting these calculations, and I can do screening for new compounds. The time when we would do one GW calculation and write a paper about it is basically running out, right? Nowadays we want screenings, we try to find trends over a lot of materials, and also with GW we need to become more automatic and do more systems. Then we can build databases, connect data on a higher level, and get more fundamental understanding. The other advantage is that we get more uniform results. If I let the computer do all the convergence testing, I cannot get to the point where I say: well, my computing resources are running out a little bit, so I will call this converged; or: I'm hitting the value that I want to see, so let's call this converged.
There are a lot of papers out there that effectively say: this is the converged result, because my computer could not do a heavier calculation. And finally, there's also no human bias, which is basically linked to the same point. So what is the problem with high-throughput GW? The first problem is pseudopotentials; I'll get to that in the next slide. Then, in general, a GW calculation is a four-step calculation: you do a ground-state calculation, then you calculate all the empty states in a non-self-consistent calculation, then you calculate the screening, and finally you calculate GW. And if you want optics, you have to do a BSE on top. So if you make an automatic scheme, you need all of these steps to be connected properly across the different calculations that you're going to run on your cluster. Scaling is worse than in DFT: the amount of computer resources that I need for a larger system grows much faster than with DFT. That's also a problem, so I need to put in some kind of heuristics to guess, given the size of my system, how heavy my calculation is going to be. And with the increased number of convergence parameters, there is no safe set of parameters. What do I mean by that? If I do a high-throughput calculation for solids with DFT and I restrict myself to 100 atoms, I can just say: I know my pseudopotentials, so I'll put an energy cutoff of 600 electron volts, and it will be fine for everything, well-converged results everywhere. If I know it's a semiconductor, I put so many k-points per reciprocal atom, and that will be good enough; it will be a bit over-converged, but who cares?
For all these parameters you can just build a list, do a few tests, and know it's right. If I try to do the same thing for GW, either I put in parameters that give me converged results but no calculation will ever finish, because it's too heavy, or I put in computational parameters for which I know my calculations will finish, but I will have no converged results for any system. So that's another very big challenge for GW. Now, the convergence problem. This is the gap at gamma of boron nitride as I vary two parameters: the number of empty states I take into the construction of the self-energy, and the energy cutoff I use in the expansion of the screening, i.e. of the response function, and of the self-energy. Now you see the danger: were I to decide to do my bands convergence at the low cutoff, I would find this curve, and it looks perfect; 100 bands is good, maybe 50 is already good. Then I do the same trick on the other side: I take a low number of bands, put it at 50, do the cutoff convergence and get this line, and it looks perfect too, so 50 bands is good enough and 100 for my cutoff is good enough. But you see already that these two parameters are not decoupled. So that's something to take care of: I cannot do these individual convergence studies.
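A toy model shows how such decoupled one-dimensional scans can fool you. Here the coupling is modeled by letting the cutoff cap the number of bands that actually contribute; all numbers are invented, and the real coupling in a GW code is of course more subtle:

```python
def gap(nbands, ecut, g0=6.0, a=40.0, b=8.0, cap=5.0):
    """Toy QP gap: bands above ~cap*ecut cannot contribute, which couples
    the band and cutoff convergence (all parameters are made up)."""
    eff_bands = min(nbands, cap * ecut)
    return g0 + a / eff_bands + b / ecut

# 1D band scan at a low cutoff is perfectly flat beyond 50 bands,
# so "50 bands is converged"...
flat = gap(50, 10.0) == gap(200, 10.0)
# ...but the jointly converged value (many bands AND high cutoff) is far away:
residual_error = gap(50, 400.0) - gap(2000, 400.0)   # ~0.8 eV left over
```

The one-dimensional scans each flatten out nicely, yet the "converged" settings they suggest leave close to an electron volt of error, which is exactly the boron nitride trap on this slide.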
I need to do them coupled. Then there are the computational resources. The scaling is not only in the number of flops; it's also about the amount of memory. If I do this convergence study for one of my systems, I see that most of the calculations finish with only about three gigabytes of memory, but I also need 14 gigabytes for a few of them. So if I took a fixed set of computational parameters, I would need to run all of them at 14 gigabytes, and I would be wasting an enormous amount of resources, because most would already finish easily. So I need a system that takes care of this: if my calculation crashes because it ran out of memory, it should be resubmitted with more memory. That's something we implemented in the AbiPy framework around ABINIT, and it actually works. Then there is the pseudopotential problem. This is one of the nastiest examples I could find: gold chloride. A bit of an idiotic material, but let's have a look at it anyway. We have the same convergence problem, and this surface is for a standard 19-electron gold pseudopotential. Now I unfreeze the 4f electrons: not only does the energy shift significantly, the entire shape of this surface changes from something that goes up to something that goes down. So, pseudopotentials. We now have a set that we think we can trust a lot, but this remains a very dangerous thing that you really have to check very carefully, understanding what you're doing. And all of this makes it very complicated if we want to write a very general, nice framework that does all this babysitting of the jobs. For DFT calculations... well, probably most of you are too young: do you know what this is? Yes, some people know, right?
If you still remember, these Tamagotchi-like creatures were quite cute, right? I don't remember anymore whether you had to feed them or give them water. But if babysitting these guys is like babysitting DFT calculations, then this is babysitting GW calculations. And that's for a single calculation, which is what we're doing this week. Now imagine what happens if we do high-throughput GW: then it's about taking care of an enormous bunch of these creatures, some bigger, some smaller, some more dangerous, some with eight legs, whatever. So that's what you're facing when you want to do high-throughput GW. For the pseudopotentials we worked quite hard. We now have sets of PBE, LDA and PBEsol pseudopotentials where we also explicitly checked that the empty states are fine. If you make a pseudopotential, at some point there will be what is called a ghost state, a spurious state in the empty part of the spectrum that should not be there, and if you hit one, your calculation is in trouble. I've done a lot of calculations on solids with these, and we are currently in the process of doing GW100 with them as well, but these are most of the time fixed now. So if you want pseudopotentials for your code, this is a good place to go: we have three different formats, which means that ABINIT, Siesta, Quantum ESPRESSO, CP2K and quite a few other codes can use these pseudopotentials, except for VASP. So, just a few words on what we actually did to make this automation for GW. Around ABINIT we have built a framework which we call AbiPy, and basically this turns calculations into Pythonic objects.
They have all kinds of properties and methods: you can read and write input files, read output files, parse what went wrong in your specific calculation, and easily make changes to the input parameters. So basically our entire flow grows from here. We first use this tool to generate a set of linked calculations, all connected to each other: this calculation needs the output from that one, this one can only run when that one is done, and so on. When all of that is set up, we use the same framework to take this object, which is the full specification of a convergence study, and push it to the cluster. There we basically have a framework that can deal with different schedulers, write the job script file (so that we can also make changes to it, if we need more memory for instance), run the job, communicate with it, and push the results back when the calculation is done. So when this process is done, we push it to a database where we store the results, and we also push the netCDF data files that we get from ABINIT into the database. That is another very important part, and Yambo is doing this now as well. Most of the old ab initio codes wrote beautiful text files, endless lines of text, which were sometimes slightly different from one case to another. If you read them as a human being, you can always understand what's there, but try reading these files with a computer, writing the scripts that parse what happened to your calculation and what the results are.
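The contrast being drawn here, free-text logs versus machine-readable output, can be made concrete with a toy example. The file layouts below are invented (ABINIT actually writes netCDF; JSON is used here only because it shows the same idea with nothing but the standard library):

```python
import json
import re

# a human-friendly log line: a script has to guess at its format with a regex,
# and the regex breaks the day someone reformats the line
log = "All good!   HOMO = -7.60 eV   (converged after 12 iterations)\n"
match = re.search(r"HOMO\s*=\s*(-?\d+\.\d+)\s*eV", log)
homo_from_log = float(match.group(1))

# the same result as a structured, machine-readable record: no guessing at all
record = json.loads('{"homo_ev": -7.60, "iterations": 12, "converged": true}')
homo_from_record = record["homo_ev"]
```

Both paths recover −7.60 eV here, but only the structured record survives cosmetic changes to the output, which is why machine-readable formats matter once thousands of calculations must be parsed unattended.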
...and this is very nasty. So this is a very important development: ab initio codes need to write machine-readable formats now if we want to go high-throughput. And once we have all of this, you can use a lot of the tools in AbiPy to visualize and do interactive analysis and presentation of results in Jupyter notebooks.

So this is basically what the flow looks like. We start off on a low-density k-point grid — we tested this very well, and it actually seems to work. On this grid we do a convergence study for the ground-state parameters, basically the cutoff for the wave functions. Then we set up a grid of GW calculations like the one I showed you before, and we converge both the number of bands and the energy cutoff for the response function and for the Green's function, to try to find values of these two that give us converged results. If we don't find them, we extend the grid and loop again. If we have them, we go to the high-density k-point grid. There we test the derivatives: basically we test whether, at the point where we think we have convergence on the low k-point grid, we still have the same or a lower slope on the high-density k-point grid. For that we just calculate four more points on the high k-point grid.

And then we do a lot of post-processing. We don't take just one point here: we extrapolate all of these to infinite energy cutoff, and then the results at infinite energy cutoff are extrapolated to an infinite number of bands. That's what we call the converged result, and then we find on this surface the point, in number of bands and cutoff, that gives us a distance close enough to this converged result. That is, in my opinion, the only way to do this really systematically and properly. And when we're talking about high-throughput calculations, it needs to work systematically for as many systems as possible, right? It's not just one calculation.
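The two-step extrapolation just described can be sketched in a few lines. The 1/x convergence model used below (gap linear in 1/ecut at fixed nband, then linear in 1/nband) is an assumed illustrative form, and the numbers are synthetic; the actual asymptotic forms and data come from the real calculations.

```python
# Sketch of the two-step extrapolation: for each number of bands, fit the gap
# as linear in 1/ecut and take the intercept (ecut -> infinity); then fit
# those intercepts as linear in 1/nband and take the intercept again.
# The 1/x model and the fake data below are illustrative assumptions.

def lin_intercept(xs, ys):
    """Least-squares fit y = a + b*x; return the intercept a (the x -> 0 limit)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return my - b * mx

# Synthetic gap(nband, ecut) data with a known limit of 1.20 eV:
# gap = 1.20 + 8/ecut + 30/nband
ecuts = [10.0, 20.0, 40.0]
nbands = [100, 200, 400]
gap = {(nb, ec): 1.20 + 8.0 / ec + 30.0 / nb for nb in nbands for ec in ecuts}

# Step 1: extrapolate each nband series to infinite cutoff.
at_inf_ecut = [
    lin_intercept([1.0 / ec for ec in ecuts], [gap[nb, ec] for ec in ecuts])
    for nb in nbands
]
# Step 2: extrapolate those intercepts to an infinite number of bands.
converged = lin_intercept([1.0 / nb for nb in nbands], at_inf_ecut)
print(round(converged, 3))  # → 1.2, the known limit of the synthetic data
```

On real data one would then scan the (nband, ecut) surface for the cheapest point within a chosen tolerance of this extrapolated value, which is the "close enough" criterion mentioned above.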
So it pays to spend a bit more, maybe do two more calculations, if that means that 10 percent more of your calculations finish the flow. You will save much more time than if you try to be very strict and as cheap as possible: more calculations will fail and you will have to redo them anyway.

This is just showing that doing it on a low-density k-point grid and on a high-density k-point grid actually works. This is, for gold, the gap at Γ — a very large gap. We go from a ridiculously small k-point grid for a metal to something which starts to get close to something reasonable. You basically see that the shape of this surface, and even the width in energy that it spans, stays pretty much the same; there is only a shift, which is basically coming from the exchange part of the self-energy. All of this builds on low-level tools that are already available in the pymatgen package, and on top of AbiPy.

Now, what can we conclude after we did all of this? We ran this machinery on 80-odd solids. First, we do a very simple calculation, the Godby–Needs plasmon-pole model, G0W0 on top of PBE — I didn't dare to do anything more complicated, right? The first thing we observe is that the Kohn–Sham and G0W0@PBE gaps actually have a very good correlation if you compare them to each other; there is a bit of an offset and a change of slope. These are things that are known already, but never shown for this many systems.

Now let's try to compare to experiment, and we already mentioned this a couple of times: GW gaps should not reproduce experiments. The GW gap is an electronic-only gap, so even my exact electronic theory should not match experimental gaps; it should actually overestimate them. That's basically coming from zero-point renormalization, spin-orbit effects, and finite-temperature corrections.
Those are the three basic ones. This actually means that perfect agreement with experiment would lie somewhere between these two blue lines, and not on the gray line of one-to-one correspondence. So we see that if we go beyond the set of usual suspects, GW is not as super cool anymore as it was supposed to be back when there was a hand-picked set of systems where GW was working well.

Finally, we also tried to do a proper analysis of where the error is actually coming from. In circles you see compounds that do not contain any transition metals, and the square symbols are compounds that do contain transition metals. You see that the correlation of the error with the experimental gap is different for those materials that contain only main-group elements; when there are transition-metal elements, you have a different relation. There is also the situation, the blue symbols, where the lightest element is very light — for instance lithium or hydrogen in my system. You see that GW then tends to underestimate; this is mainly coming from zero-point renormalization. For heavy elements it's not so clear. If my symbols are very large, I have a very heavy element in my system, so I would have to include spin-orbit corrections to get good agreement.

Finally, let's go to the overall big comparison. We have Kohn–Sham PBE, and we have Godby–Needs G0W0 on top of PBE, but we also have a linear model: basically, I remove the linear error from the Kohn–Sham results. That is, I look at the correlation between Kohn–Sham and experiment and linearly correct for it. Fortunately GW is still a bit better than just linearly correcting DFT — at least this level of GW. For solids, well, not in this study — we haven't gone beyond simple one-shot GW.

So, concluding: can we automate GW sufficiently to go high-throughput?
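The linear-correction baseline mentioned above is just an ordinary least-squares fit of experimental gaps against Kohn–Sham gaps, with the fitted line used to "correct" the DFT values. Here is a sketch of that baseline; the gap values are invented for illustration and are not the actual data set of the study.

```python
# Sketch of the linear-correction baseline: fit experimental gaps as a linear
# function of the Kohn-Sham (PBE) gaps, then use the line to correct DFT.
# The gap values below are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return my - b * mx, b

ks_gaps = [0.6, 1.1, 2.0, 3.4]    # fake Kohn-Sham gaps (eV)
exp_gaps = [1.2, 2.0, 3.3, 5.4]   # fake experimental gaps (eV)

a, b = fit_line(ks_gaps, exp_gaps)
corrected = [a + b * g for g in ks_gaps]   # linearly corrected DFT gaps
residuals = [c - e for c, e in zip(corrected, exp_gaps)]
print([round(r, 2) for r in residuals])    # small residuals around zero
```

Any method claiming to improve on DFT should at least beat this nearly free baseline, which is the point of including it in the comparison.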
I think we're getting there. It's much harder than doing it for DFT; we have to do a lot of things. This part is pretty much similar to DFT, and this part is also there, but it becomes more important: we have to have all these mechanisms in place to repair jobs that are broken, we have to detect the errors correctly, and we have to put in place all the algorithms for what to do when this happens.

With that, I would like to leave you with a little bit of a commercial for a conference I'm organizing, if you are very interested in spectroscopy. There's also the European Theoretical Spectroscopy Facility, and the 2021 meeting is going to take place in our labs in Leuven; we'll probably have a tour of the new attosecond lab. And then one last slide: if you're looking for a new position, you may want to keep an eye on this site — in a couple of days there will be something interesting there. Okay, with that I would like to leave you with this slightly philosophical slide, and thank you very much for your kind attention.