But why should we do it? GW is so great. OK, thank you very much, it's working. Thank you, Lucia. So thank you to the organizers for inviting me; it's a great pleasure to be here. And actually, I feel a bit of nostalgia, because I was standing here 15 years ago, and Lucia Reining was my chairwoman at that time. The title of my talk was not as pessimistic then, but it was already about GW, so it's like a déjà vu. So let's proceed. Indeed, the title is a bit pessimistic, but you'll see that I have been exploring a lot of possibilities to go beyond, and as we'll see, it's not that easy. GW is a famous method, which you all know, for band gaps in solids. Here I took the old plot from Mark van Schilfgaarde and coworkers, where they showed very nice agreement with experiment for the band gaps of a wide range of solids. This success was due to the performance of the GW diagram: this diagram, which combines the Green's function, in black, and the screened Coulomb interaction W, drawn with wiggly lines in red, was extremely powerful for band gaps. But still, as you can see, there are a few outliers that are not that good. And the question we can ask is: can we do any better today, 14 years later? The natural way of going beyond GW would be to add more diagrams. So first, as an introduction, let me say a bit about the diagrams that we have in GW and the ones we don't have. Here is a series of diagrams; some of them are in GW, some of them are not. In principle, if you follow the diagrammatic expansion and include all the possible diagrams, then you should converge toward the exact answer. So in principle there is a path, a way of selecting the diagrams, that should bring us closer and closer to reality, or to the experimental values. Of course, this is just a subset of diagrams; there are many more, and here I just show a few.
So let me explain how to draw the diagrams, or what they mean. In the representation I use in this talk, time is the y-axis. Here, for instance, this is a Coulomb interaction, drawn with a dashed line; as it is a horizontal line, it means it is instantaneous in time. Then the arrow, as I said, is the Green's function that describes the propagation of an extra electron or an extra hole in the system. In particular, if you find a closed loop like that, it is equivalent to evaluating the Green's function at equal time and equal position, which translates into the electronic density. So each bubble like that is the density. And if you have this type of diagram, with equal times, then it is a representation of the one-body reduced density matrix. With these tools we can build diagrams. For instance, let's take a very simple example: a line that goes from one point to another, attached to a density bubble, translates directly into the Hartree potential, v times rho. If we do the next one, where we have v times the density matrix, that is the Fock exchange. So now that we know the grammar, let's make sentences with it. We have an infinite number of diagrams, and the only freedom we have is how to select the ones we want to keep and the ones we want to discard. When we do GW, we select a very specific series of diagrams and discard all the other ones. Specifically, in GW we keep all the diagrams that contain electron-hole pairs: these rings that you see here are electron-hole pairs, one pair, two pairs, three pairs, four pairs, and so on. With this selection, we can write all of them as a summation, and we are able to perform the infinite summation, so as to renormalize the interaction that connects the Green's functions.
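The diagram grammar just described (a density bubble gives the Hartree potential v·ρ, a density-matrix line gives the Fock exchange v·γ) can be sketched numerically. This toy Python snippet uses made-up two-electron integrals and a made-up two-electron density matrix; nothing here comes from MOLGW, it only illustrates the two contractions:

```python
import numpy as np

# Toy illustration of the two first-order diagrams from the talk:
# the Hartree term contracts the Coulomb integrals with the density
# (the "bubble"), the Fock exchange contracts them with the one-body
# reduced density matrix (the open line). Integrals are random numbers.

n = 4  # toy basis size
rng = np.random.default_rng(0)

# fake two-electron integrals (ij|kl), symmetrized to the usual 8-fold symmetry
eri = rng.random((n, n, n, n))
eri = eri + eri.transpose(1, 0, 2, 3)
eri = eri + eri.transpose(0, 1, 3, 2)
eri = eri + eri.transpose(2, 3, 0, 1)

# a toy 1-RDM for 2 electrons in the lowest of a set of orthonormal orbitals
c = np.linalg.qr(rng.random((n, n)))[0]
gamma = 2.0 * np.outer(c[:, 0], c[:, 0])   # trace = number of electrons = 2

hartree = np.einsum('ijkl,kl->ij', eri, gamma)          # v * rho  (density bubble)
exchange = -0.5 * np.einsum('ikjl,kl->ij', eri, gamma)  # v * gamma (exchange line)

print(np.trace(gamma))  # recovers the electron count
```

Both resulting matrices come out symmetric, as they must for a Hermitian potential.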
And this renormalized interaction is named the screened Coulomb interaction, and it is equivalent to what we call the RPA, the random-phase approximation. With this, we have the GW diagrams. But remember, we discarded a lot of diagrams, and these diagrams may be important; that's what I would like to explore. If we included the diagrams that were excluded, would it bring better results? To find out whether we get better results, we need something to compare with, and the playground I will be using is the so-called GW100 benchmark, which allows us to compare results on molecules carefully, as you will see. There are two strategies if you want to know whether you are doing well or badly. Either you compare to experiment, but then you are bound to having experimental values that are reliable enough to compare with; or you compare to better calculations, calculations at a higher level of theory. The latter helps a lot, because if you compare against other calculations, you are sure to have the same atomic structure. For excitations, for instance, you are sure that you either include or exclude the relaxation of the atomic positions upon excitation consistently, and so on; you are sure about what you calculate. Also, you can use the same technical approximations: we always have basis sets, we sometimes have pseudopotentials, and if you do two calculations they suffer from the same technical drawbacks, so it is a fair way of comparing. And in the end, you include the same Hamiltonian, the same physics: for instance, if you decide to neglect zero-point motion or relativistic effects, then you do that in both calculations you want to compare. That's why in this field we like to compare calculations against better calculations.
And molecular systems fulfill the three requirements I have shown here: we are able to fulfill all of them and provide very accurate reference results to compare with. The GW100 benchmark set was set up a few years back by Michiel van Setten and many other groups around the globe. They selected 100 small to medium-size molecules. In this set they tried to have a lot of diversity, not only the usual suspects of quantum chemistry: inorganic molecules, not only organic ones; small molecules, but also somewhat larger ones, like the DNA bases; atoms as well; and also some small molecules that are more complex as far as the chemistry is concerned, like beryllium oxide, Rb2, Ag2, to give just a short list of examples. So what quantity are we going to compare? We decided to focus on the ionization potential, the energy you need to extract an electron from the system. You can have experimental values for the IP, but you can also do ΔSCF calculations: total energy differences between the molecule and the same molecule ionized, with one electron less. And this you can compare to the quasiparticle energy obtained from GW. With this, are we ready to compare? No, not quite, because we still need a reference. As I told you, in quantum chemistry there are very accurate methods, namely coupled cluster with singles, doubles, and perturbative triples, CCSD(T), which is considered the gold standard in quantum chemistry and is very good at calculating total energies. So we could calculate the differences. But we still have to be careful, because sometimes there are several stationary points in the Hartree-Fock solution.
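Schematically, the two routes to the ionization potential being compared are (notation is mine, following the description above):

```latex
% A total-energy difference at the coupled-cluster level,
\mathrm{IP}^{\Delta\mathrm{SCF}} \;=\; E_{\mathrm{CCSD(T)}}(N-1) \;-\; E_{\mathrm{CCSD(T)}}(N) ,
% compared against the negative of the highest occupied GW quasiparticle energy:
\mathrm{IP}^{GW} \;=\; -\,\varepsilon^{GW}_{\mathrm{HOMO}} .
```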
And here is an example. This matters especially when you have open shells, and we do have open shells, because we have to calculate the ionized molecule, the cation. This is a striking example, SO2+, which has two different stationary solutions in Hartree-Fock, and coupled cluster is built on this Hartree-Fock Slater determinant. If you look at the energies, this one is the lowest in Hartree-Fock, whereas when you do coupled cluster, CCSD(T), this other one is the lowest. So there are two solutions, and you have to select one. In the existing reference values, people first minimized the energy with respect to the Hartree-Fock Slater determinant, and then calculated CCSD(T) on top of that. I forgot to mention that the energies here are in hartree, and 0.04 hartree is about 1 eV. So here you have a one-eV difference between the two solutions, which is huge, much larger than the effects we would like to resolve. So even though coupled cluster is considered the gold standard, we have to be careful that we are really calculating the stationary point with the symmetry we want. In this paper, we offered revised values for CCSD(T). Once that is settled, we are ready to do GW calculations. The GW calculations are performed with the code we developed, MOLGW, which is based on Gaussian orbitals: it uses wavefunctions that are linear combinations of atomic orbitals. For the technicalities, we use existing libraries as much as possible, like libcint and libxc; we do not reinvent the wheel. Then we apply the celebrated G0W0 procedure, a one-shot GW calculation: first we do a self-consistent DFT or generalized Kohn-Sham calculation, with that we calculate the frequency-dependent W, and then we form the product G times W.
And within this framework it is very easy, because everything is analytic. For instance, W is obtained from the solution of a Casida-like equation, or BSE, the Bethe-Salpeter equation, so that we have an analytic expression for W. And if you have analytic expressions for W and G, then the convolution is also analytic, and in the end we have an analytic formula for the self-energy itself. We just press the button, get the numbers, and, solving the Dyson equation, obtain the quasiparticle energies. So let's start. I will present the results using box-and-whisker plots, which are typical in statistics and data science, and which show different features of the error distribution. This is the error with respect to CCSD(T). I show here the mean absolute error, then the median of the errors, then the spread of the errors with the first and third quartiles; the whiskers give a kind of estimate of the extension of the data, and the points outside the whiskers are considered outliers, so they are dangerous. This was for Hartree-Fock. Of course, Hartree-Fock is not that good for getting the IP. Then let's go to GW. You see that the spread of the data is much, much narrower, the mean absolute error is rather small, and, very interestingly, there is no outlier, no red point. So even though we are not perfect, we are never bad, at least across the whole set of 100 molecules we calculated. Then we would like to go beyond. A quantum chemist would go beyond with an order-by-order expansion, using perturbation theory to second order. Second order means that we have two Coulomb interactions here. This gives rise to these two diagrams, which have names because there are only two, and let's see how they perform.
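The box-and-whisker statistics just described can be sketched as follows. The 1.5×IQR whisker convention and the error values are my assumptions for illustration, not data from the talk:

```python
import numpy as np

# Sketch of the statistics behind a box-and-whisker plot of errors vs CCSD(T).
# Whiskers here follow the common 1.5*IQR convention (an assumption; the talk
# does not state the exact rule), and the errors below are made up.
errors = np.array([-0.31, -0.12, -0.05, 0.02, 0.04, 0.08, 0.11, 0.19, 0.95])  # eV

mae = np.mean(np.abs(errors))            # mean absolute error shown on the plots
median = np.median(errors)               # median of the signed errors
q1, q3 = np.percentile(errors, [25, 75]) # first and third quartiles (box edges)
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # whisker limits
outliers = errors[(errors < lo) | (errors > hi)]  # the "dangerous" red points

print(mae, median, outliers)
```

With these made-up numbers, the two extreme values fall outside the whiskers and would be flagged as outliers.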
So, PT2: now you see that you have outliers, and the errors are always larger. The solution would be to go to the next order, so let's go to third order, PT3. Things become more and more involved: there are many more diagrams, and I did not plot all of them. If you want to implement that, the formulas exist, and I just flash the equations that we have implemented in MOLGW. They have been known for quite some time; they are long, but not that difficult to implement. If you look at the results, PT3 improves over PT2, but there are still outliers, and it is still not competing with GW. GW, which is only one single diagram, is still better. So what else could we do? Let's try to combine other diagrams. I start again from GW and adopt a new strategy: I take GW and add some diagrams that were not present in GW. For instance, the second-order exchange diagram was not there, so let's check the combination of the two. You see the spread is huge, and that's not a method I would advise. Going beyond that: OK, second order was too strong, so I should screen the interaction. This is a proposal by Xinguo Ren, to use second-order screened exchange, SOSEX. The SOSEX diagram is, more or less, not that good, and GW is still better. We could also include electron-hole interactions inside the bubbles, doing TD-Hartree-Fock screening there. It's not too bad, but there are too many outliers. In the end, the only proposal we have that goes in the right direction is adding these diagrams that I call the GW density matrix, and I will spend the rest of the talk on that. So what is inside? To obtain this density matrix, I start from the diagrammatic expression of the Dyson equation, which connects the non-interacting Green's function to the fully interacting Green's function through the GW self-energy. And then I make approximations.
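For reference, the second-order self-energy has a standard textbook form, whose two contributions correspond to the two PT2 diagrams. This is the generic spin-orbital formula with antisymmetrized integrals, not necessarily the exact working expression implemented in MOLGW:

```latex
% i,j label occupied states, a,b virtual states, p the state being corrected:
\Sigma^{(2)}_{p}(\omega) \;=\;
  \frac{1}{2}\sum_{iab} \frac{|\langle pi\|ab\rangle|^{2}}
       {\omega + \varepsilon_i - \varepsilon_a - \varepsilon_b}
\;+\;
  \frac{1}{2}\sum_{ija} \frac{|\langle pa\|ij\rangle|^{2}}
       {\omega + \varepsilon_a - \varepsilon_i - \varepsilon_j}
% The antisymmetrized integrals <pq||rs> contain both the direct term and the
% second-order exchange term, i.e. the two diagrams shown on the slide.
```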
Let's first do a non-self-consistent calculation: here the arrow is black instead of blue for the fully self-consistent GW. Then I also do a linearization, where this is transformed into a non-interacting G0. So here the expression for G is quite simple: it contains only non-interacting quantities that come from the mean-field approximation. With that, we can calculate a density matrix by contracting the two times. If I do that, we obtain an analytic expression for the density matrix, which I am just flashing; the expression is not too difficult. The good thing is that if we calculate the trace of the density matrix, we should recover the number of electrons, and that is indeed what Mauricio Rodríguez-Mayorga managed to prove from this equation: the number of electrons is conserved. In practice, here is a stretched H2 molecule. If you calculate the trace of the density matrix from this gamma-GW calculation based on Hartree-Fock or PBE, it is always two, as expected. But if we were solving the full Dyson equation instead, without the last linearization step, then we would have a deviation in the number of electrons, and based on PBE it is terrible. So the approximation we make, linearizing the Dyson equation, enforces the conservation of the number of electrons. It's a funny situation where the approximation is, for some properties, better than the original full equation; only fully self-consistent GW would recover the correct number of electrons. Then this expression for the density matrix can be inserted into the Hartree diagram, giving rise to the usual Hartree term plus this correction, and that's the one I included in the previous slide. You can also do the same for the exchange.
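The chain of approximations just described can be summarized schematically (the notation is mine, not the slides'):

```latex
% Full Dyson equation with the GW self-energy:
G \;=\; G_0 + G_0\,\Sigma^{GW}\,G ,
% non-self-consistent, linearized version:
G \;\approx\; G_0 + G_0\,\Sigma^{GW}\,G_0 ,
% density matrix from the equal-time contraction of G:
\gamma^{GW}(\mathbf{r},\mathbf{r}') \;=\;
   -\frac{i}{2\pi}\int d\omega\, e^{i\omega 0^{+}}\, G(\mathbf{r},\mathbf{r}';\omega) ,
% and the property proven by Rodr\'iguez-Mayorga for the linearized form:
\operatorname{Tr}\gamma^{GW} \;=\; N .
```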
And since you know the exchange, the Hartree potential, and so on, it is very tempting to calculate the Hartree-Fock energy based on this new density matrix. This gave us the will to go toward the calculation of total energies. In principle, if you want to calculate total energies in GW (so far we were just speaking about quasiparticle energies), you need to do a kind of self-consistency, so that you get a total energy expression that is completely reliable and uniquely defined. But this self-consistency is very difficult to achieve, and that's why people have proposed to use non-self-consistent results to calculate a one-shot approximation to the total energy. For instance, the celebrated RPA functional can be understood as a one-shot calculation based on Kohn-Sham input for the Hartree-Fock energy, with RPA correlation based on the same input. What we propose here is a new expression, where we use this gamma-GW and plug it into the Hartree-Fock part, and keep the Galitskii-Migdal correlation energy based on the original Green's function. Let's see how it performs. This is the total energy obtained from self-consistent GW: thanks to a collaboration with Marc Dvorak and Patrick Rinke, we had access to self-consistent GW results. As you can see, the one-shot results differ because they are not self-consistent, but the gamma-GW-based total energy looks better. If you change the starting point, not starting from PBE but from PBEh with 25% of exact exchange, that is PBE0, then you go closer to the self-consistent result. And if you include 100% of exact exchange in this functional, then the RPA total energy or the gamma-GW one is right on top of the self-consistent GW. This tells us three messages. First, the gamma-GW expression seems to be less sensitive to the starting point, which is a good thing.
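Written schematically (my notation), the comparison between the two one-shot total-energy expressions is:

```latex
% The usual RPA functional, evaluated on a Kohn-Sham input:
E^{\mathrm{RPA}} \;=\; E_{\mathrm{HF}}\!\left[\gamma^{\mathrm{KS}}\right]
                 \;+\; E_c^{\mathrm{RPA}}\!\left[G_0\right] ,
% versus the proposal here: the GW density matrix in the Hartree-Fock part,
% with the Galitskii-Migdal correlation energy kept on the original G_0:
E^{\gamma GW} \;=\; E_{\mathrm{HF}}\!\left[\gamma^{GW}\right]
              \;+\; E_c^{\mathrm{GM}}\!\left[G_0\right] .
```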
We don't want this sensitivity. Second, it looks like 100% of exact exchange is one of the best starting points. And third, unfortunately, RPA based on PBE is the worst compared to self-consistent GW, and RPA based on PBE is the approximation that everybody uses: usually, when people do not specify, an RPA calculation means RPA based on PBE. OK, as time is running out, I think I'm going to skip this part. I would just like to mention that we have calculated a lot of molecules, and now we are turning to solids. Thanks to the work of Mauricio Rodríguez-Mayorga, Marc Torrent, and Hassan Denawi, we now have a code that works for solids. This is the dependence of the total energy as a function of the amount of exact exchange, and the same scenario appears: you have a strong dependence when using RPA and a weaker dependence when using gamma-GW. Here, unfortunately, we don't have any self-consistent GW result to compare with, so this is more exploratory, and many other results will be coming soon. So, to wrap up: GW is really a miracle, because if you are interested in ionization potentials, this single diagram does better than everything else you can do. Adding more diagrams usually deteriorates the nice agreement. It's like an optimum, and if you really want to improve over GW, you will need to include many, many more diagrams, not just the simple ones you first think of. I have also been speaking about total energy calculations with this type of diagram. I would like to leave you with the names of the collaborators who participated in this work: Michiel van Setten and Nike Dattani for the GW100 part; Marc Dvorak and Patrick Rinke for the total energies within self-consistent GW and beyond; and Mauricio Rodríguez-Mayorga, Hassan Denawi, and Marc Torrent for the total energies with gamma-GW in solids. And I leave you with a commercial for the code.
This is publicly available, anyone can use it, and it's easy to use, so please do. With this, thank you for your attention.

Thank you very much for the talk. I always say that deriving the approximation from a functional, let's say from a functional perspective, is a good idea, because you get conserving approximations. But we don't do fully self-consistent calculations: for example, in GW0 the number of particles is not conserved. So I'm wondering, is it still good to derive these approximations from a functional perspective, or is it better to do what you did, which is to have the particle number conserved built into a non-self-consistent approach? Thank you.

So I completely agree with your question: there are two possibilities, and if you ask my opinion, I don't know. At least with the linearized Dyson equation, we are sure to conserve the number of electrons. I don't know about the other conservation laws, like angular momentum, but at least the number of electrons is conserved, and that reassures us very much. At first, we just realized that it was conserved: the first purpose was to have a closed expression that we could plug into the computer without any self-consistency, and once that was done, Mauricio realized that the exact conservation could be derived from it. So it came as a bonus. It's an open question; I don't want to close any door.

Thank you for an excellent talk, I enjoyed it.
The question I have is about the conclusion you arrived at from the ionization potential: have you looked at the electron affinity? Aside from your calculations, you could also do these systematic comparisons, and it would be interesting, because in many applications you want to know both the IP and the electron affinity.

Of course, it's just the same: we need to add an electron instead of removing one, and we can do it; I did it. But the problem is the reference set. Removing an electron is always easy, because the resulting system is always bound. For affinities, you have to be selective: there are only a few systems that really accept electrons, and in the GW100 set there are maybe five molecules with a positive electron affinity; all the other anions are unbound. So it's not as systematic as calculating the IP. There is a benchmark by Noa Marom and coworkers about affinities, and I have no answer yet, actually: I did not do the calculation because it was not that systematic, but in the future I should. Thank you for the question.

I hope you can see me. Yes, I do see you. I just wanted to ask: is there any physical understanding of why GW is the optimum, while adding second and third order just goes back and forth? Is there a kind of physically intuitive understanding?

To my knowledge, no. Here we are doing perturbation theory with a perturbing term, the Coulomb interaction, that is not small. So we don't control very well the way we do perturbation theory; we just do it order by order, or by summing classes of diagrams. It is a kitchen recipe, I would say, that we have seen was working. GW was introduced 50 years ago by Hedin, and maybe Hubbard before; they wanted to solve the divergence in the homogeneous electron gas. They didn't want to have a divergence at zero q.
And they found that this term was solving the problem. For total energies in the homogeneous electron gas, it was good. Then the same approximation was used to calculate excitations, and it was first used for solids thanks to Steven Louie and coworkers in the 80s. In the 2000s, we started to calculate molecules with the same approximation, which had been devised in the first place for the homogeneous electron gas. And this approximation has been remarkably stable and has worked in a range of materials and systems that we could not dream of in the first place. I had no a priori knowledge of how good it was, and that's why I wanted to go beyond and naively add some diagrams. And all my attempts failed.

Thank you for the very informative talk. It's intriguing. I was wondering: should we maybe change the reference from which we do perturbation theory, looking at some of the molecules you have shown?

You mean starting from a different starting point than DFT?

Yes, instead of the Hartree-Fock state; maybe in some cases CASSCF.

I'm happy that you asked that question; I have a slide on that. Here I did the exact same GW calculation based on many different starting points: PBE hybrids with from zero to 100% exact exchange. If you look at the PBE starting-point results themselves, they span a really large range of energies; look at the y-axis, it is very large. Then this is the GW calculation on top of the previous result, and as you can see, the range is much smaller. The PBE starting point is very, very bad, but as soon as you go to hybrid functionals with 50, 75, or 100% of exchange, the results are extremely, extremely good.
And actually I can also flash a recent study that we did with Jeff Neaton and coworkers, where we started from an optimally tuned range-separated hybrid. With this, we obtained the smallest error we ever had for GW. So the good thing with GW is that if you have a good starting point, you get a good GW result, which is not the case for all the other approximations.

Thanks for your talk. Just about your reference calculations: you mentioned the problem with open shells in coupled cluster. Have you thought about equation-of-motion coupled cluster?

Yes. There is a paper by Berkelbach, I think, about equation-of-motion CCSD, and the results are very similar to GW. At that time, they did not have the new reference CCSD(T) total energies, so they had some outliers, and they were saying SO2 is very weird because they got a different result from the total-energy coupled cluster. With our new reference values, this discrepancy would shrink. So I think they are very close to GW, and I should do the proper comparison.

OK, we really have to move on. Thank you so much. Thank you, Lucia.

Next we will talk about the spectral functional formulation; Andrea will tell you about it himself. I'll give a sign in 20 minutes.

Do you hear me? Yes. So thanks a lot, Lucia, for the introduction, and thanks a lot to the organizers for inviting me; it's really a great pleasure to be here. Today I'm going to present work I've been doing together with Tommaso Chiarotti and Nicola Marzari at EPFL. It is a possible definition of a functional theory, not of the density, but of the occupied spectral density. So let me start by setting the stage with some definitions.
So the occupied spectral density, which we discuss here in a discrete form, can be defined from the spectral function, that is, the imaginary part of the Green's function. Being occupied, we take only this part; being discrete, it can be written as a sum of orbital densities that refer to the so-called Dyson orbitals. These are the Dyson orbital densities, and the Dyson orbitals are the amplitudes that connect the ground state at N particles with the excited states at N minus one. This said, what we want to do here is to build a functional theory of the occupied spectral density, which would basically allow us to access both total energies and spectral properties at the same time. In order to do so, we introduce a local dynamical potential as the external potential, pretty much as in DFT you have a local potential, in TDDFT you have a local time-dependent potential, and so on. Then we first identify a map between densities and potentials and prove the invertibility of the map. Eventually, we also introduce a spectral Kohn-Sham system, a non-interacting dynamical system that helps us solve the problem. And I'll conclude the talk by presenting an approach that allows us to deal with the dynamical potential, which is not a trivial task: what we call the algorithmic-inversion method. This work is somehow inspired by previous work of ours, the development of Koopmans spectral functionals, developed back in 2009-2010 in the group of Nicola Marzari. These functionals are aimed at restoring piecewise linearity, and they have the form of orbital-density-dependent functionals: they do not depend on the global density, but depend separately on each orbital density. It turned out that they provide very accurate spectral properties; here is the evaluation of ionization potentials for the GW100 set.
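The definitions at the start of this section can be written compactly (schematic notation of mine, following the description above):

```latex
% Dyson orbitals connect the N-electron ground state to (N-1)-electron states:
\varphi_i(\mathbf{r}) \;=\; \langle N\!-\!1, i \,|\, \hat{\psi}(\mathbf{r}) \,|\, N \rangle ,
\qquad \varepsilon_i \;=\; E_0(N) - E_i(N\!-\!1) ,
% and the occupied spectral density is the sum of their orbital densities:
\rho(\mathbf{r},\omega) \;=\; \sum_i |\varphi_i(\mathbf{r})|^2 \,\delta(\omega - \varepsilon_i) ,
% whose frequency integral recovers the ordinary electron density:
\int d\omega\, \rho(\mathbf{r},\omega) \;=\; n(\mathbf{r}) .
```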
So you see that the Koopmans functionals are here among the best methods, even when compared to very high-level theories like GW, self-consistent GW, or vertex-corrected GW. This was rationalized early on just by looking at the form of the eigenvalue-like equation that you obtain because of the orbital-density dependence. Here the potential is not global, the same for every orbital; every orbital has its own potential. So we have orbital-dependent potentials, though local ones. This is somehow reminiscent of Green's function theory, where the self-energy in a quasiparticle approximation also becomes orbital dependent, though non-local and non-Hermitian. But as pointed out in the work by Gatti, Reining, and coworkers in 2007, if we are interested in describing the spectral function, we do not need to deal with the whole complexity of the non-local, non-Hermitian self-energy: it is enough to deal with a local dynamical potential. So these local orbital-dependent potentials we obtain here can be interpreted as quasiparticle approximations to that spectral potential; somehow they have the flexibility to provide us with the right information. In doing so, we also analyzed the properties of the spectral potential, and a very nice one is that there is no explicit need for a derivative discontinuity. Here is a dimer dissociation in 1D, and we know that in exact Kohn-Sham we have the build-up of steps in the potential, the signature of the derivative discontinuity, a feature that has to be there in Kohn-Sham. Here we do not have any, so this is a good feature of this potential. Pushed by that, we decided to take this formulation further and properly define a functional theory of the spectral density.
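Schematically (my notation, not the published expressions), the contrast is between one local potential per orbital and one non-local, frequency-dependent self-energy:

```latex
% Orbital-density-dependent functionals: each orbital sees its own local potential,
\left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf{r}) + v_{\mathrm{H}}(\mathbf{r})
       + v_i(\mathbf{r}) \right] \varphi_i(\mathbf{r})
  \;=\; \varepsilon_i\, \varphi_i(\mathbf{r}) ,
% while the quasiparticle equation carries a single non-local, non-Hermitian self-energy:
\left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf{r}) + v_{\mathrm{H}}(\mathbf{r}) \right]
  \psi_i(\mathbf{r})
  + \int d\mathbf{r}'\, \Sigma(\mathbf{r},\mathbf{r}';\varepsilon_i)\, \psi_i(\mathbf{r}')
  \;=\; \varepsilon_i\, \psi_i(\mathbf{r}) .
```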
There are a number of existing formulations. There is the spectral density functional theory of Savrasov and Kotliar, where the basic variable is the local Green's function; this underpins DMFT and has also been used in the context of DFT+DMFT or GW+DMFT, typically aiming at strongly correlated systems. Then there is the spectral-potential formulation in the works by Matteo Gatti and Lucia Reining that I've already mentioned, recently connected to the idea of connectors. And there is another series of works by Stefan Kurth, Gianluca Stefanucci, and David Jacob, where they introduced i-DFT, a functional extension of DFT in which they are able to perform sort of virtual STS experiments that allow them to extract spectral-function information. So this is the scenario. Here is a sketch of the very broad landscape of electronic-structure methods, and we are interested in this hierarchy of spectral methods, ranging from DFT up to functional methods of the one-particle Green's function, the two-particle Green's function, and so on. Here is where DMFT sits, and a first step up could be seen as the Koopmans functionals, which, being orbital dependent, sit here; what we want to look at today is here instead. This is a functional theory of the occupied spectral density, pretty much in the spirit of a functional theory of the Dyson orbital densities, if you want, and we want it to be variational and, as I said, to access energies and spectra simultaneously. In order to do that, we start by looking at the external potential: what is the natural class of external potentials addressed by this formulation? We want the external potential to couple directly to the spectral density, so the external potential is generalized to the class of local but dynamical potentials.
This can be understood from the fact that the spectral densities, the Dyson-orbital densities, are actually a partition of the total density. So if you want to couple separately to each spectral density, you need a potential that is more general than a standard local potential. Here is the mapping between densities and potentials, also in discrete form; this is useful for a cleaner formulation of the potential-density mapping, but it is also required, as I will show in a moment, to properly define the so-called spectral Kohn-Sham system, which will turn out to be dynamical. OK, here is the overall picture, starting from DFT. In TDDFT the density is time-dependent, so we have the two field operators evaluated at the same time, which gives access to neutral excitations. In the case of the spectral density, instead — and this is a main point — the two field operators are evaluated at different times, meaning that after I destroy an electron there is a finite amount of time during which the system has a dynamics with N−1 electrons; that is why this quantity embodies the information about charged excitations. We can push the interpretation a bit further: if an electron has to go outside of the system, it basically goes into a bath and spends some time there before going back to the system of interest. This picture is described by an embedding potential that, in the time domain, is time-dependent, and which becomes frequency-dependent when going to the frequency domain. This was just descriptive so far, but it can be made formal: here I am showing that it is possible, constructively, to define a class of systems subject to local dynamical external potentials, and we do so using local embedding. So here I am constructing, ad hoc, this class of systems that are going to be
the subject of the theory. These black dots are the points in real space of the system of interest, and this wiggly line is the interaction among electrons, which is only present in the physical system of interest. Then we have a set of hidden degrees of freedom, these open squares, which sit in the bath and are coupled locally to each point in space, pretty much as multiple layers of extra degrees of freedom. It is possible to write the Hamiltonian formally; without going through the details, what is important is that if I compute the embedding potential of the system of interest, it turns out to have this form: local, by construction, of course; the residues come from the hopping matrix elements between the bath degrees of freedom and the physical degrees of freedom, and the poles are just the energies of the bath degrees of freedom. And of course this dynamical potential, which we can see as an external potential, becomes, if we are computing the Green's function of the system of interest, the embedding potential in the Dyson equation, which needs to be complemented by the electron-electron self-energy because of the interaction in the physical system. Importantly, this construction is useful for physical insight, to explain what a local dynamical potential is; and a physical system provides us with wave functions and all the machinery related to wave functions, which is very important — it is also what we are going to use to prove the invertibility of the one-to-one mapping. So: physical insight, but also technically useful for the invertibility. Out of this we can evaluate total energies for the complete system and partition the energy into system of interest plus bath, and we see appearing a term related to the external potential — basically the coupling between the Dyson-orbital densities and the dynamical potential evaluated at the Dyson-orbital energies — and this is complemented by a
piece of energy in the bath, related to the frequency derivative of the external potential, which is somehow reminiscent of the renormalization factor in many-body theory. This can be obtained from the Galitskii-Migdal expression, though it is more interesting to include variationality by using instead the Klein functional. This can be done, though at the price of a different partition of the energy between the system of interest and the bath, and this is the expression we are going to use later on. It is pretty similar to the expression used by Savrasov and Kotliar in their 2000 paper, except that here the potential is strictly local. And, importantly, if we take the variation of this functional with respect to G, it becomes stationary at the solution of the embedding Dyson equation I have just shown, as for the regular Klein functional. OK, so this completes the description of the class of systems and potentials we want to discuss. An important remark is that this class clearly contains the class of local static potentials, which is the one we are mostly interested in — those of the physical systems. Let us now go to the second step, which is the definition of the potential-density mapping and the invertibility of the map. Here is our dynamical potential, and the density: the density, once discretized, can be described by a set of Dyson-orbital densities and Dyson-orbital energies, and the potential can also be uniquely described by a set of sampling points in energy of the value of the potential and the average value of its derivative.
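As an aside, the local-embedding construction just described can be checked in a minimal one-particle model. The sketch below (Python; the level energies and hoppings are invented numbers, purely for illustration) couples a single "system" level to two bath levels and verifies that the Dyson equation with the local, pole-form embedding potential reproduces the exact system block of the full resolvent:

```python
import numpy as np

# One "system" level eps0 coupled to two bath levels (toy numbers).
eps0 = -0.5
eps_bath = np.array([1.0, 2.5])
t = np.array([0.3, 0.2])          # local system-bath hoppings

# Full (system + bath) one-particle Hamiltonian, 3x3.
H = np.diag(np.concatenate(([eps0], eps_bath)))
H[0, 1:] = t
H[1:, 0] = t

def g_full(w):
    """System block of the exact resolvent (w - H)^(-1)."""
    return np.linalg.inv(w * np.eye(3) - H)[0, 0]

def v_emb(w):
    """Embedding potential: poles at bath energies, residues |t_b|^2."""
    return np.sum(t**2 / (w - eps_bath))

def g_dyson(w):
    """Dyson equation with the (local, dynamical) embedding potential."""
    return 1.0 / (w - eps0 - v_emb(w))

# The two agree at any complex frequency away from the poles.
for w in [0.1 + 0.05j, -1.0 + 0.2j, 3.0 + 0.1j]:
    assert np.isclose(g_full(w), g_dyson(w))
print("embedding Dyson equation reproduces the exact Green's function")
```

The poles of the embedding potential sit at the bath energies and the residues are the squared hoppings, exactly as stated above for the formal construction.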
In order to do this uniquely, there is a condition: the number of poles in the potential should be lower than the number of sampling points, that is, the number of occupied points in the spectral density. Ideally we want to work with as few poles as possible. This may be an issue, and needs to be controlled, when dealing with non-interacting systems, since the number of Dyson energies is somehow limited; it is not expected to be an issue — it is a really mild condition — when dealing with an interacting system, because there the number of Dyson orbitals is really huge, so we are not really limited in the number of poles we can put there. Here, just for the sake of illustration, is the case of a non-interacting system, a toy model with just one occupied orbital, where we break the condition: we have multiple potentials actually describing the same occupied spectral density (not the empty one). But the point is that in the non-interacting case this forces the potentials to all have the same value and the same average derivative at this point; somehow these two values are uniquely determined, which further supports the idea of building the one-to-one correspondence in this form.
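To make the pole counting concrete, here is a minimal sketch (Python, with invented numbers) of a single level subject to a one-pole local dynamical potential: one pole in the potential generates two Dyson energies, consistent with the condition that the number of poles stays below the number of spectral points. The nonlinear condition ω = ε + v(ω) is solved by diagonalizing an enlarged linear matrix, in the spirit of the sum-over-poles machinery discussed later in the talk:

```python
import numpy as np

# One level eps with a one-pole local dynamical potential
# v(w) = g^2 / (w - Om)   (toy numbers, purely illustrative).
eps, g, Om = 0.0, 0.4, 1.5

# The condition  w = eps + v(w)  is a nonlinear eigenvalue problem;
# with a pole representation it linearizes: diagonalize the enlarged
# matrix [[eps, g], [g, Om]].
M = np.array([[eps, g], [g, Om]])
poles, vecs = np.linalg.eigh(M)
weights = vecs[0, :] ** 2          # residues = squared top components

# Each pole indeed satisfies w = eps + v(w) ...
for w in poles:
    assert np.isclose(w, eps + g**2 / (w - Om))
# ... and the spectral weights obey the sum rule.
assert np.isclose(weights.sum(), 1.0)
print("poles:", poles, "weights:", weights)
```

One pole in the potential, two poles (Dyson energies) in the resulting Green's function, each carrying part of the unit spectral weight.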
OK, formally, in order to prove the invertibility of the map, one typically needs to show that the determinant of the Jacobian of the map is non-zero. Using linear response, this means that if we put the variations of the potentials here, and here the variations of the densities, we just need to prove that this linear-response matrix is invertible, that is, has no zero eigenvalues. And here is the formal statement: the map is actually invertible. I am just sketching the proof, making reference to the embedding construction I have just introduced. Here is the Dyson-orbital density expressed in terms of the many-body wave functions. Instead of using these quantities directly, I introduce auxiliary quantities — here N is an arbitrary integer — which just amounts to taking combinations of the spectral densities and orbital energies, summing over all Dyson orbitals; it is more convenient to work with these objects. And since these depend only on the spectral densities, when the variation of the spectral density goes to zero, the variation of these objects must also go to zero. So we evaluate the variation of these objects, which involves the variation of the ground state and so on; we then set the variations of the spectral density to zero, and by exploiting linear-response theory one arrives at this condition: the matrix element of the variation of the potential, in the embedded formulation, between the ground state and any excited state must be zero. This in turn implies the thesis, namely that the variation of the external potential is actually zero, apart from trivial variations like a rigid shift or similar. So this concludes the proof; somehow we have proven the invertibility of the map. Now, in order to proceed to a Kohn-Sham spectral construction, we introduce an auxiliary non-interacting, though locally dynamical, system that we use to represent the spectral density, and here we
follow the idea of Gatti and Reining again. So we have a non-interacting system subject to this local dynamical potential, and out of this we can take advantage of the Klein expression for the energy of the system of interest, which we can now express in terms of the non-interacting spectral-potential representation of the spectral density. Here is the expression: it is a regular Klein functional, except that instead of the Φ term we have this Hartree-exchange-correlation energy term that depends on our representation of the spectral density. Importantly, this is an explicit expression, and it contains some corrections to the band energy, as well as something related to the kinetic energy — I will comment on this in a moment. And, importantly, it is stationary with respect to the spectral Green's function: making the functional stationary, we obtain Kohn-Sham-like equations, though with a dynamical potential, where the orbitals are just the Dyson orbitals used to represent the spectral density, and the potential is just the derivative of this exchange-correlation energy term. Importantly, we expect the kinetic-energy corrections here to be small, because, as in reduced density-matrix functional theory, we have a quantity that is more general than the charge density, so more quantities can be evaluated exactly: for instance, the kinetic energy is exact in reduced density-matrix functional theory, and here, under mild conditions — such as the Dyson orbitals not being degenerate — it is also exact, exploiting the von Weizsäcker functional to express the kinetic energy. OK. Now, our perspective is that we would like this functional to be implemented in our favorite DFT code with all the technicalities we are used to, and here comes the issue: we have to deal with a dynamical potential, which is not trivial at all in principle, and we have issues in representing the dynamical operator and also in solving
the Dyson equations in the presence of this dynamical potential. Our approach makes use of sum-over-poles (SOP) representations for all dynamical operators and propagators. This class of approximants allows analytical expressions to be evaluated easily by residues, but, importantly, it also allows for a formally exact solution of the dynamical Dyson equation. In fact, these equations can be cast in the form of nonlinear eigenvalue problems, and one can show that, using a sum-over-poles representation for the self-energy, this nonlinear eigenvalue problem can be solved analytically by diagonalizing a linear, though larger, problem. So the problem can be linearized at the price of working in a larger space, and this is the core result of the algorithmic inversion method: given a self-energy in SOP form, it provides us with a Green's function in the same sum-over-poles form. Here is how it works: out of the SOP ingredients for the self-energy we build a larger matrix, and after diagonalization the eigenvalues give the poles and the eigenvectors give the residues. This is described here; a similar form was also used in the context of dynamical mean-field theory, and more recently in the context of GW and Bethe-Salpeter. We applied this, for demonstration purposes, to the homogeneous electron gas treated at the G0W0 level, allowing us to obtain at the same time both the spectral density and the total energy. With this, let me conclude. In this work we have formulated a functional theory of the occupied spectral density; we had to introduce local dynamical potentials by means of a local-embedding formulation; and the theory is paired with the construction of a spectral Kohn-Sham system, where dynamical Kohn-Sham equations have to be solved. We do this using the algorithmic inversion method, which is key to the solution of these problems. This functional in principle
comes with the promise of a number of positive features: no derivative discontinuity, and a dependence on a quantity more general than the charge density, so more quantities are known explicitly. The development of functionals is in progress, but here I want to stress that the Koopmans-compliant functionals can already be seen as an early attempt at these functionals. With this, let me thank my collaborators, and you for your attention.

Thank you very much. I'm sure we will have a lot of questions. You were running very fast on the last slide with the results for the electron gas; eventually it would be interesting to see — is this an attempt at an application? Can you comment on these results?

Here we have been applying these approaches to the self-energy, so we work in the framework of G0W0, in the framework of Green's functions; we are not using the spectral DFT here. This is just a demonstration of the algorithmic inversion method applied with sum-over-poles, and one can basically obtain really accurate spectral densities and total energies at the same time. That was the demonstration point, one of the key messages here.

We have further questions. So I assume that one aim of this exercise is to obtain this potential for the spectral density in some applicable form, something like the LDA and GGA potentials we have. What is the idea to reach this aim — do you have any scheme in mind to produce such a functional?

I think there are mainly two ideas, and of course many more to come. The first one is that in our earlier work, in 2014 — I have just shown a couple of figures from it — we were actually showing that the potential obtained from the Koopmans construction really resembles, or shares some features with, the exact spectral potential computed for the same system. That is very positive to us, meaning that the Koopmans
construction we have can already provide some approximations for the spectral potential. First of all, this says it is not impossible — there are no super-pathological features — to obtain approximations; of course, this has to be further developed. The second idea is that it is very tempting to build on the early work by Sham and Kohn, in their 1966 paper, where they were attempting to build a local self-energy out of the homogeneous electron gas. That work was further exploited by others later on — probably not too much — but it is a second source of information, or strategy if you want, that one could follow. And then there are all the connector-type formulations.

In the curves you have just shown in the application, is the exchange part the same for the Perdew-Zunger and your calculations? Because there is a shift between the two curves — where does it come from?

Let me see — you mean here. This is the Perdew-Zunger fit, which is somehow the exact energy, while here we are computing the energy at the G0W0 level, so it is not meant to be identical to the exact quantum Monte Carlo energy. Rex Godby's results show that if you were to go self-consistent in GW, the comparison would be much closer, but this is just G0W0, and we compare with the existing data in the literature. So again, this is an exercise where we show the accuracy of the procedure; what is not trivial here is to obtain, at the same time, accurate spectral densities and accurate total energies.

OK, if there is no further urgent question, I suggest we thank all the speakers of this wonderful morning and for the good discussion; we will resume at quarter past eleven, after the coffee break. Ah, sorry — OK, I think we can continue. So we will continue hearing about excitations, but this time it is ensemble density functional theory, and the talk is given by Stefano Pittalis.
OK, so — I hope it works — OK, then let us specify the excitations even further. This is a ground state, closed shell; two electrons get excited: one jumps up, or a bit more, and the spin might flip or not; and here there is a double excitation, where two electrons get excited together. I am not going to speak much about collective excitations today. So these are single-particle levels that we fill up — let's say a single Slater determinant — then we open the shell, and so on and so forth. Now, the problem is that there are some challenges: adiabatic TDDFT, the one we typically use, performs quite well for single-particle excitations but does not capture double excitations — for those you need memory, to the best of my understanding. So we would like to find a way, using density functional theory or an extension of DFT, to address this type of excited-state problem, under some conditions: we should do better than adiabatic TDDFT; if possible, treat excited states on an equal footing with ground states, in the spirit of computing total energies and densities, and from densities maybe something else; preserve the success of DFT on the ground state while upgrading it to excited states; and reduce the numerical effort — the extra effort we have to put in should be minimal, otherwise I might as well use some other method — so as to stay within a DFT approach. I am tackling this challenge by joining forces with the collaborators here: Tim Gould from Griffith University in Australia, here happily pointing at the list of papers I will try to speak about today; Paola Gori-Giorgi; Derk Kooi, a former PhD student and now a researcher; Leeor Kronik; and Gianluca Stefanucci. So, just a quick showcase to keep your interest focused during this talk: we can come up with an ensemble Hartree-exchange that captures, without any problem, charge transfer and spin splittings — OK, of course at the level of Hartree-exchange;
then you have to add correlation at some point if you want quantitatively correct results. We can compute double excitations at the level of PBE, PBE0, and ensemblized GGAs — no memory involved. We can dissociate H2: you see, we stretch H2 and get good total energies for the ground state (S0, a singlet) and for the low-lying excited singlet, and therefore the excitation energy also comes out pretty well, by ensemblizing the so-called interaction-strength interpolation, something that Seidl, Perdew, and co-workers did a long time ago; we ensemblize that and we can do this for the excitations. Now, how did we do this? Via ensemble DFT. Ensemble DFT was formulated a long time ago, but there were problems to be addressed in order to learn how to use it properly, and there are other authors who have done very important work in this field, like Filatov, Pernal, and Gidopoulos; here I am going to focus mostly on what my collaborators and I have done. The point is that instead of addressing specific states, we shift to an ensemble of states: we group the states we are interested in into a mixed state and assign weights to these states — of course, whenever there is a degeneracy, it is better to assign the same weight to the degenerate states, for symmetry reasons; we come back to this a bit later — and we do this state-averaged calculation. This is nice because, you see, what is required is that you should not put holes here, at least within each symmetry class. This looks a little bit like a finite-temperature distribution, but it has nothing to do with finite temperature: you can regard the weights as auxiliary quantities introduced for the purpose of calculating this state-averaged energy, the energy of the ensemble. From this you can then extract the energies of each single state: by doing more than one calculation you can take differences, or you can compute derivatives with respect to the weights, and you extract the energies you want. So
why do we do so? Because there is a variational principle for this type of ensemble, so we can regard this mixed state as a kind of ground state. Here is the variational principle: you assign weights to the states you want to address. Everything in this talk depends on the weights — one last thing on that later on — so sometimes I emphasize the dependence on the weights and sometimes I drop it, but the calligraphic E is the energy of the mixed state, of the ensemble. Into this ensemble you can put your trial orthonormal wave functions, and you will see that the ensemble Kohn-Sham construction helps very much in satisfying this requirement trivially. Of course we never spell this out when we write down this stuff, but we always have to pay attention to boundary conditions and symmetry conditions if we want to make sure we know which state we are approximating. This variational principle was formulated in 1988 by Gross, Oliveira, and Kohn. Then, you see, we can do the same trick we do for the actual pure ground state: via constrained search I can build up my functional of the ensemble density, and therefore I can make this assumption of non-interacting v-representability for ensembles — actually it is even safer, because we know that non-interacting v-representability is kind of problematic if you restrict to pure states; you do need ensembles to make it stronger. And you see, your Kohn-Sham system for the ensemble will provide you with a common set of orbitals from which you can build up the Kohn-Sham states — in this double-S notation, capital S is for state and the small s is for the Kohn-Sham single-particle system. These are many-body states built from the single-particle orbitals, and you have to keep them simple — these states should be like Kohn-Sham states. Then, thanks to this, you compute the ensemble density — not the density of each state, but the density of the ensemble — and then you
can plug the ensemble density into the functional and get the information you want about the interacting system. Now, again, this is very nice because it looks like what you would write down for a ground state, with calligraphic letters instead of standard capital letters: this is the kinetic energy of the Kohn-Sham system for the ensemble, this is the energy contribution due to the external potential — with the ensemble density — and here is the Hartree-exchange-correlation functional for these ensembles. You see, I write them all together, I do not split them up; we come to this in a minute. This is the effective Kohn-Sham potential, everything as we do for the ground state, and when I compute the ensemble density I use the mixed Kohn-Sham state. Now the question: how do these functionals look? Let me write down what it is natural to write down first. Say you take the Hartree and the exchange that we know; then it is very spontaneous to plug the ensemble density into the Hartree we know, and here the ensemble one-body reduced density matrix into the exchange. Now, this is unfortunately very problematic, because the Hartree then contains not only the self-interaction of each electron but also the ghost interaction: a spurious interaction between an electron occupying a state in one member of the ensemble and an electron in a different member of the ensemble — so it is a mess. And the exchange functional does not cancel this ghost interaction, so you would need to do the right thing at the level of correlation, which makes it even more difficult to derive an approximation for correlation. Then, OK, let us make another guess: since I have to compute statistical averages, better to compute the average of the Hartree energy of each state in the ensemble, right?
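The ghost-interaction problem just described can be made concrete numerically. The sketch below (Python; the densities, weight, and softened kernel are all invented for illustration) compares the two guesses: the Hartree of the ensemble density versus the state-averaged Hartree. Their difference is exactly the spurious cross-state ("ghost") contribution:

```python
import numpy as np

# Toy 1D densities for two states of an ensemble (shapes invented).
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
rho0 = np.exp(-x**2)
rho0 /= rho0.sum() * dx                   # "ground-state" density, N = 1
rho1 = x**2 * np.exp(-x**2)
rho1 /= rho1.sum() * dx                   # "excited-state" density, N = 1

# Softened Coulomb kernel (avoids the 1D singularity).
K = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)

def hartree(rho):
    """Hartree energy 0.5 * sum over x, x' of rho K rho dx^2."""
    return 0.5 * rho @ K @ rho * dx * dx

w = 0.3                                   # weight of the excited state
rho_w = (1 - w) * rho0 + w * rho1         # ensemble density

naive = hartree(rho_w)                                   # guess 1
averaged = (1 - w) * hartree(rho0) + w * hartree(rho1)   # guess 2

# The naive form differs by spurious cross terms between different
# members of the ensemble, which exchange does not cancel.
ghost = naive - averaged
print("ghost-interaction error:", ghost)
assert abs(ghost) > 1e-6
```

Expanding the square shows the error is w(1−w) times the cross term minus the mean of the self terms, which vanishes only if the two state densities coincide.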
And similarly for the exchange. Problem solved for the ghost interaction — or almost, because whenever I write this I am actually making a slightly strong assumption on the form of the Kohn-Sham states, which I will discuss in a minute. And if you look at this, you can certainly use this expression, but unfortunately it averages out the singlet-triplet splitting, while typically you want to resolve the singlet-triplet splitting; plus the assignment of energies to densities is not unique, so there is a non-uniqueness problem as well with this kind of expression. So how do the exact density functionals for ensembles look, and how can we construct approximations? This is what we needed to work out. To do so, we took a step back and considered an adiabatic connection for these ensembles: we switch off the interaction and go to the non-interacting limit, trying to follow the states properly, as much as possible. You then realize that you actually need to work with symmetry-adapted states: the exact states are certainly symmetry-adapted, and approximate states had then better be symmetry-adapted too, right? So the zero limit, λ = 0 — λ is the strength of the interaction — is the Kohn-Sham non-interacting limit, because this effective potential v(λ) must keep the ensemble density constant, right?
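Before going on, the weight-dependence machinery introduced earlier — the ensemble energy linear in the weights, excitation energies from weight derivatives, and the Gross-Oliveira-Kohn variational principle — can be sketched in a toy two-level model (Python; the Hamiltonian entries are invented):

```python
import numpy as np

# Toy two-level "interacting" Hamiltonian (all numbers invented).
H = np.array([[0.0, 0.3],
              [0.3, 1.2]])
evals, evecs = np.linalg.eigh(H)          # exact states, E0 <= E1

def ensemble_energy(w):
    """GOK ensemble energy Tr[Gamma_w H] with weights (1 - w, w)."""
    Gamma = ((1 - w) * np.outer(evecs[:, 0], evecs[:, 0])
             + w * np.outer(evecs[:, 1], evecs[:, 1]))
    return np.trace(Gamma @ H)

# The exact ensemble energy is linear in the weight, so its weight
# derivative gives the excitation energy directly.
w, dw = 0.25, 1e-6
dE_dw = (ensemble_energy(w + dw) - ensemble_energy(w - dw)) / (2 * dw)
assert np.isclose(dE_dw, evals[1] - evals[0])

# GOK variational principle: any other orthonormal trial pair gives a
# higher ensemble energy (for w <= 1/2).
th = 0.7
u0 = np.array([np.cos(th), np.sin(th)])
u1 = np.array([-np.sin(th), np.cos(th)])
trial = (1 - w) * u0 @ H @ u0 + w * u1 @ H @ u1
assert trial >= ensemble_energy(w) - 1e-12
print("excitation energy from dE/dw:", dE_dw)
```

The two assertions mirror the two statements in the talk: energies are extracted from weight derivatives, and the exact mixed state minimizes the ensemble energy.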
And OK, here is the full interaction, λ = 1. If we then form the Hartree-exchange functional as this difference divided by the strength of the interaction, you see this difference reduces to the electron-electron interaction evaluated at first order, and that is fine as long as the interaction is small. If you take this limit properly, you end up not with the definitions I showed you previously, but, first of all, with Kohn-Sham states that are no longer single Slater determinants: they are actually configuration state functions. It means that if you work with a closed-shell molecule like H2 — OK, the ground state is closed-shell — as soon as you excite electrons you end up in open shells, and you see: this singlet is two Slater determinants; for the triplet you may choose a single-determinant component, but you may not for the singlet — that one is two Slater determinants, OK. And the Hartree and the exchange come together as a conjoint quantity, because this is nothing else but the electron-electron interaction evaluated on these non-interacting configuration state functions. And then, thanks to this, we come back to what I said previously: you fix the spin splittings, you do not need to do constrained calculations here, and you can of course capture charge transfer. Then, if you want to do quantitatively even better, you have to add correlation. Now, at this point of the talk, I hope I have convinced you that Hartree-exchange has an exact form that you can derive, and that Hartree and exchange come together. In the next part of the talk I am going to say that you have to split them apart — you have to extract a Hartree and an exchange. Why?
Because the approximations we have are structured like that: you have a Hartree, and the exchange as something else. And to know this split is very useful — we do not want to trash everything that has been done, because it works pretty well for ground states, so we really want to start from there and upgrade these approximations. So how do we do that? Well, let us go back to the adiabatic connection and use the fluctuation-dissipation theorem. We learned this from ground-state DFT, so let us repeat it for ensemble DFT: we have to use the fluctuation-dissipation theorem for ensembles in this case. And now we see the exchange functional — I write FDT here to make sure it is the one that derives from this procedure. It is beautiful, because it looks like the ground-state one; the only difference is that this is the response function of the ensemble density to variations of the Kohn-Sham potential, and this is the ensemble particle density. If I did not tell you, it would look like the ground-state one, so we are on the right path — maybe the approximations for the ground state might work well here. But then the Hartree looks completely different, and I am going to show you how. Of course, when you reduce the ensemble to a ground state, it is the same as the one we know, but as soon as you add states it is not the same. So there is this P_s — what is P_s, before I speak about correlation? P_s is P for λ = 0: it is the pair-correlation part for the Kohn-Sham system. You see, here you have products of densities — a state average of products of densities. This is nice, because it produces the Hartree of each Kohn-Sham state, and then you take the state average of these contributions — this is what we were guessing at the beginning. But here there is something we did not know about before, and it is specific to ensembles, it is required for ensembles: it involves transition densities, because, you see, this is the density
operator, and you have to take its expectation value between different states K — the pure, λ-dependent states in the ensemble, OK. And if you now look at correlation, finally, you see that the part with the response function is the usual one — the difference of χ at λ = 1 and χ at λ = 0, response functions of the ensemble, of course — but then you get a new type of correlation we did not know about, which is a density-driven correlation. Let me try to explain what this density-driven correlation means. And this is nice: you see, this part pairs naturally with the exchange part, and this part pairs naturally with the Hartree part, the extended Hartree. So let me tell you about the density-driven correlation: it comes from the fact that, although the Kohn-Sham system reproduces exactly the ensemble density, it does not reproduce exactly the density of each pure state in the ensemble. Here is a dramatic example: a triplet and a singlet. You can build both by using the same single-particle orbitals — the same HOMO, the same LUMO — and the density does not depend on the spin, so these two states have the same density; you can build up an ensemble with these states where singlet and triplet have the same density. This is not a feature of the exact solution, OK? And in this plot I show that density-driven correlations are quite substantial, but they do not interfere so much when you have to solve the strongly correlated problem — this is nice. And then here I just want to say very briefly: you have seen response functions there, but when we make approximations we would like to avoid them, because we do not want to deal with unoccupied orbitals. Fortunately, the ensemble exchange I showed you before just depends on the average occupation numbers; and then, also at the level of correlation, you can just modify PBE in this case and take weighted averages, or modified
GGAs. I do not have time to tell you how we modify these GGAs, but let us say we do not use unoccupied orbitals. OK, here was an example of a hybrid with this modified, ensemblized functional: we can do really well on double excitations, even for a molecule like nitroxyl, which is quite strongly correlated — equation-of-motion coupled cluster with singles and doubles does very badly there; you have to add some triples correction to do better — and still our ensemblized GGA works well; we did not fit this case. OK, what about strong correlation? Let us follow what Wigner did and look at jellium: the high-density limit is a weakly correlated state, the low-density limit is a strictly correlated state. Ground-state DFT allows us to upgrade this argument to all inhomogeneous systems, so we repeat for ensemble DFT what we do for ground-state DFT and extend this idea to all inhomogeneous systems via scaling. We scale the wave function — uniform coordinate scaling, with a prefactor to keep the normalization constant — so we scale the density: with a high scaling parameter, high density, you see it gets peaked; with a low scaling parameter, low density, it flattens out. And what happens to this ensemble functional? Well, in the high-density limit, what you would expect happens: the Hartree-exchange-correlation functional, when you scale the density, becomes just the Hartree-exchange. But now — and this is what I really want to point out, because it is really important — in the low-density limit the ensemble functional goes, to leading order, to the ground-state functional for strictly correlated electrons. You see, in this constrained-search minimization the T is not there anymore: you just minimize the electron-electron interaction, so this is the ground-state functional for strictly correlated electrons. And what is good, when you address strongly correlated excitations, is that we are lucky: at least to leading order it is exact to reuse the
ground-state functional, provided you evaluate it on the ensemble density. So, in a way, strong correlations are difficult for DFT, but in a way, in this case, they are even kind of simpler, right? And that is how we got the result I pointed out before: we ensemblize the approximation derived in '99 by Seidl, Perdew and Kurth, and then we see that it works pretty well for the ground state and the low-lying singlet of H2. OK, more work needs to be done, of course, and more needs to be explored, but we believe we are on the right path. So thank you.

Thank you very much, Stefano, and we have time for some discussion.

Just a curiosity about the expression for the correlation functional in terms of the response function: in the case of homogeneous systems, like the homogeneous electron gas, don't you have divergences in that expression? Are they regularized, or something?

Yeah, this expression — just a second — yeah, this one. OK, so actually I am not summing over all the unoccupied states here; maybe the notation is a bit misleading. I am dealing only with the contributions that come from the few states I added to the ensemble, and I am addressing just low-lying excitations — just the HOMO and the LUMO, plus one, plus two. And we have been looking at this already: the homogeneous gas is not really a finite system, right, and the methodology — even the exact formulation of ensemble DFT — is pretty much tailored to finite systems, I would say. So there is work to be done on the gas; actually, we are working on the gas. But as for your divergences, let's say the workaround is that we are very specialized on a few states.

OK, thank you for the nice talk. If I understood correctly, there are some problems in trying to naively apply the Hartree functional when you have these ensembles, and this reminds me of when one uses real-time time-dependent density functional theory for pump-
probe experiments, in which there are populations. Do you think there could be similar problems in evaluating the Hartree energy in that case?

OK, the problem here was with the approximation of the Hartree; the exact Hartree, the one that I wrote down later, does not have a problem — actually, it is also nice from the point of view of preserving symmetries. Now, your question goes much beyond this framework, to real-time TDDFT, and there you have a pure state whose density is evolving in time, so there the Hartree has no problem at all. But you might want to do an ensemble version of TDDFT — people have already formulated that — and I am kind of intrigued to have a look at this at some point.

Do we have further questions? Two — OK.

Thank you. Just a curiosity: operatively, how much more expensive is it to do the calculation with ensemble DFT, compared to TDDFT?

OK, good, thank you for the question. Since we work only with occupied orbitals, as in a ground-state calculation, it is only as expensive as repeating as many ground-state-like calculations as you need to extract the energies by differences of state averages — it is like repeating a ground-state calculation N times. But then there are also tricks to proceed via derivatives with respect to the weights, and — if I pronounce his name correctly — Kieron has also proposed something interesting along this line to make things even a bit faster.

I might have missed the point, or maybe it is a trivial question, but in your ensemble the weights are all ordered somehow, right? So in some cases do you need to remove this condition?

No, you did not miss anything; I was just very fast. Let's say it is a bit tricky, in the sense that — let's say I want to address, think about this atom: the ground state, the excited state, the other excited state, still in the P shell,
right? So I have to pick up all the eigenstates, the ones that transform according to the correct irreps, and then weight them; the degenerate states I weight equally. And I should avoid holes, yes, because I want the variational principle to collapse to the right ensemble state, not to the wrong state. But you can also have holes, as long as they are in different symmetry classes: I can address just singlets if I want, and then only triplets, or I can address singlets and triplets together if I wish. And then I can also exploit — not tricks, I can just exploit the symmetry of the spatial part of the wave function, the point group — so as to address a minimal set of states, the ones I am interested in, and also not spend too much computational time.

OK, I think we could learn much more from Stefano, but we should stop here. I did learn a lot from this conference. Thank you — so let's thank Stefano again.

OK, so I will start. I think, for everybody — all the participants and the speakers — this was an interesting conference. I do not have many final remarks. The only thing is that I think the model of having these self-proposed invited talks was successful, so I think the committee will probably use it for the next mini and maxi workshops, and again I ask everybody to spread the information about this possibility, which will be used in future editions of the Total Energy workshop. The other thing I have to announce is that the next mini workshop will be in Shanghai — obviously with unknowns related to COVID and the world situation, but let's be optimistic — so probably there will soon be a more specific announcement; the dates will be the same week of January, in two years. We will also have the new maxi workshop: people are being contacted for the organizing committee, but we do not have all the commitments yet, so we do not yet have the announcement of the next organizers. OK, that's all. I thank everybody again, and I leave the word to you, in case you
have something to say; and finally Nicola will thank all the people who helped. Actually, one thing: I would like to thank Nicola a lot, because Nicola did a lot of the work — he did most of the work — so thanks to him that this was possible.

OK, you know, I think Francesco has already said everything that needs to be said, but I really would like to thank all the speakers, because this was, as Francesco said, the first kind of experiment where, in addition to the invited speakers, some of the people who submitted contributed abstracts were picked as invited speakers, and that gave an opportunity to a lot of young people, I think. I could see sincerity in the talks they presented; they may not all be at the same level, but this tradition should go on from one generation to another, so it is great that this practice has started, and, as Francesco said, we do hope it will continue in the next meetings. I think formally Nicola will give thanks to all the people involved in this greatly successful event; it just went so smoothly. We were just talking today: we had some last-minute cancellations that made us very, very nervous, but we made it, and I think it has been a very, very successful event. And for the participants: if you have any specific comments or suggestions, don't hesitate to write to us, and we will pass them on to the next organizers.
Thank you, thank you. So the workshop is basically over, but remember that it is not really over, because most of it is actually on YouTube — we have had up to 70 participants following the live streaming, so that is also an important addition. And with this I would like to thank all the people who have made this possible: you, for coming and for giving excellent talks and posters; my co-organizer Francesco Mauri; the scientific advisory board, with its chairman Emilio Artacho. But I have to correct Francesco on one thing: I am not the one who did most of the work, because most of the work was done by the people behind the scenes, in particular by the secretarial office — and I have to thank Victoria Alvova in particular — and by the IT team led by Walter Stock; I am sure some of them can hear me, so thanks to him and his team. And last but certainly not least, I have to thank again the sponsors: besides ICTP, this event was recognized as a CECAM flagship event; we had the support of Psi-k, which was very, very important for the success, and also of the local CECAM SISSA node, and of course of the Quantum ESPRESSO Foundation for the Walter Kohn Prize. So thank you, and have a safe trip home.

OK, I just wanted to propose that we thank the organizers, who cannot thank themselves. It was really great: wonderful discussions, a very good atmosphere — I think that is very important.
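The cost discussion in the Q&A above — excitation energies extracted from ensemble energies by "differences of state averages" over the weights — can be sketched in a minimal two-state toy model. This is only an illustration of the idea, not the speaker's actual implementation: the energies `E0`, `E1` and the weights are made-up numbers, and the exact ensemble energy is taken as a simple weight-average of state energies.

```python
# Toy sketch of the ensemble-energy idea from the Q&A: the exact ensemble
# energy is a weight-average of state energies, so an excitation energy
# follows from how the energy changes with the weight.
# E0 and E1 are hypothetical state energies, not values from the talk.

E0, E1 = -1.0, -0.4  # made-up ground- and excited-state energies (hartree)

def ensemble_energy(w):
    """Two-state ensemble energy E(w) = (1 - w)*E0 + w*E1, with 0 <= w <= 1/2."""
    return (1.0 - w) * E0 + w * E1

# "Differences of state averages": two ensemble calculations at different
# weights give the excitation energy, since E(w) is linear in w.
w1, w2 = 0.1, 0.3
omega = (ensemble_energy(w2) - ensemble_energy(w1)) / (w2 - w1)
print(omega)  # equals E1 - E0 = 0.6 up to floating-point rounding
```

In the same spirit, the "derivative with respect to the weights" trick mentioned in the answer corresponds to reading off dE/dw = E1 - E0 directly, instead of running several calculations at different weights.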