So, good morning everybody. I will be chairing the session today up to the coffee break, and let me go straight to the first speaker, Steven Louie, who doesn't need any introduction in this community, and he will tell us about electron correlation in multi-particle excitations, time-dependent phenomena, and superconductivity in materials. Is this on? Can you hear me? Thank you very much for the introduction, Amelia, and first of all I'd like to thank all the organizers for inviting me to give this morning's talk. So what I would like to do today is to share with you some of our recent work using many-body perturbation theory to study electron correlations in a couple of topics: multi-particle excitations, time-dependent phenomena, and superconductivity in some materials. So this is going to be the outline of the talk. I will begin by looking at interactions and time-dependent phenomena in the photophysics of atomically thin 2D crystals. That will include correlated multi-particle excitations such as trions and biexcitons, and exciton-enhanced nonlinear optical responses such as shift currents, and then we will look at how correlations influence results from experiments such as pump-probe transient optics and time- and angle-resolved photoemission spectroscopy. And finally, as the last topic, I will look at the role of self-energy effects in the electron-phonon coupling and superconductivity. So let me begin with this topic of trying to calculate, from first principles using many-body perturbation theory, non-equilibrium phenomena and excitations beyond one- or two-particle excitations. If you want to look at this kind of phenomena, one has to go beyond the standard GW and GW plus Bethe-Salpeter equation (GW-BSE) approaches.
For example, if you're interested in correlated higher-number-of-particle excitations such as trions and biexcitons, then one has to actually investigate the interacting three- and four-particle Green's functions. And if you're interested in time-dependent and high-field phenomena, such as those measured in pump-probe experiments, nonlinear optics, or even field-driven transitions, then one has to look at the explicit time propagation of the Green's function: for example, starting from the GW approach and then doing the time propagation using the non-equilibrium Green's function formalism. So let me begin by looking at correlated multi-particle excitations in 2D systems. As you all know, interactions are enhanced in one and two dimensions, so it is not surprising that, experimentally, one finds that multi-particle excitations are quite prominent in measurements on those systems. The example I give you is the biexciton in monolayer tungsten diselenide. Plotted here is the photoluminescence spectrum, and what you see is this peak labeled XX, which actually grows with the intensity of the excitation light. This peak has been identified as a biexciton, a bound combination of two excitons, and it has been well studied experimentally. This kind of phenomenon is, in fact, of fundamental and practical interest in a number of other areas, such as exciton condensates and the fission of a single exciton into two excitons in organic crystals for photovoltaic applications. In terms of theoretical approaches: if you are interested in the quasiparticle excitations, you have to look at the single-particle Green's function, say within the GW approach, where you solve this Feynman diagram in which the dashed line is the screened Coulomb interaction. For optical properties including electron-hole interactions, one has to solve Feynman diagrams like these two using the Bethe-Salpeter equation approach.
For biexcitons, which are four-particle excitations, one in fact has to solve diagrams that look like this, and there are 36 such diagrams involving four particle lines and their interactions. This looks formidable, but in fact, recently we were able to formulate the problem and develop the computer code to do such calculations. So let me give you some results. This is for trions and biexcitons in monolayer tungsten diselenide, which is a semiconductor whose low-energy excitations are at the K and K-prime valleys, just like graphene. Here are the calculated results compared to experiments for the binding energies of the negative trion and the biexciton. There are no adjustable parameters in the calculation, and you see that there is actually rather good agreement between theory and experiment. In fact, these are the first fully ab initio many-body calculations of this kind of excitation. Our results show that there is a very rich energy and spin-valley level structure for these excitations. They are rather strongly bound, with binding energies on the order of a few tens of milli-electron volts. The results explain recent experimental observations and, in fact, make some new predictions. For example, the biexciton in this system has traditionally been thought of as a combination of an intra-valley bright exciton and an intra-valley dark exciton. However, from our calculation we find that that is really not the case: the biexciton that was measured here is, in fact, one combining an intra-valley bright exciton and an inter-valley dark exciton. This kind of configuration was ignored previously because of the neglect of the important electron-hole exchange, an effect that comes in through all those 36 diagrams. Now, let me turn to time-dependent and nonlinear phenomena.
Here, we have to look at the non-equilibrium Green's function and solve it in the time domain on the Keldysh contour. For the experts in the audience, the method we use is one we developed: a time-dependent GW within the adiabatic approximation for the self-energy, propagated in real time with a density-matrix approach. Now, the advantage of this approach is that it makes the calculation much more doable. It captures the self-energy and excitonic effects, and we expect it to be relatively accurate in the weak- and moderate-field regime. Within linear response it is equivalent to the Bethe-Salpeter equation approach, and we tested that this is, in fact, the case. However, if you do time propagation, then you get nonlinear responses and time-resolved phenomena including excitonic effects, and I will show some examples of this. Because of this approximation, we can now perform the time propagation with a single time variable, as opposed to the cubic-in-time cost of the full Kadanoff-Baym equation approach. So, let me use shift currents as an example. Shift currents are photo-induced DC currents in a non-centrosymmetric crystal, without any p-n junction. It is a second-order optical response: with an incoming AC field, you can generate a DC current, and it is related to band topology, that is, to the shift of the intracell coordinate of the electrons upon optical excitation. Now, previously one could do an independent inter-band-transition type of theory for this kind of effect, but for 2D materials excitons are so important that we decided to use this time-dependent GW approach to look at the problem. So, here are some results we obtained for shift currents in monolayer germanium sulfide. Plotted here is one of the coefficients that relates the optical field to the DC current, sigma-yxx.
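To make the real-time idea concrete, here is a minimal sketch of a density-matrix time propagation for a driven two-level model, with the rectified (DC) part of the induced polarization extracted as a second-order response. This is an illustrative toy under invented parameters and arbitrary units, not the TD-aGW implementation described in the talk; all function and variable names are my own.

```python
import numpy as np

# Minimal sketch of real-time density-matrix propagation for a driven
# two-level model, in the spirit of (but much simpler than) the TD-aGW
# approach described in the talk. All parameters are invented.

def propagate(E0, omega=1.0, gap=2.0, d01=0.3, d_diff=0.5,
              t_max=200.0, dt=0.01):
    """Solve i d(rho)/dt = [H(t), rho] with H(t) = H0 - E(t) * D."""
    H0 = np.diag([0.0, gap])              # valence and conduction level
    D = np.array([[0.0, d01],             # dipole operator; the diagonal
                  [d01, d_diff]])         # difference breaks inversion symmetry
    rho = np.diag([1.0, 0.0]).astype(complex)  # filled valence state
    ts = np.arange(0.0, t_max, dt)
    pol = np.empty(ts.size)
    for i, t in enumerate(ts):
        H = H0 - E0 * np.cos(omega * t) * D
        w, V = np.linalg.eigh(H)          # exact unitary step for small dt
        U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
        rho = U @ rho @ U.conj().T
        pol[i] = np.real(np.trace(rho @ D))
    return ts, pol, rho

omega, dt = 1.0, 0.01
ts, pol, rho = propagate(E0=0.05, omega=omega, dt=dt)
# Rectified (DC) part of the induced polarization: average over an
# integer number of drive periods at the end of the run.
pts_per_period = int(round(2.0 * np.pi / (omega * dt)))
dc = pol[-10 * pts_per_period:].mean()
```

In the actual shift-current calculation one would propagate the full k-resolved density matrix with the GW self-energy and read off the DC current rather than a two-level polarization; the toy only illustrates the time stepping and the extraction of a rectified response.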
So, the blue curve is when we turn off the electron-hole interaction in the time-dependent GW approach, and the red curve is when we include it. You see that there are dramatic changes in the spectrum. Excitonic effects in this material enhance the shift current at certain frequencies by orders of magnitude and completely change the shape of the spectrum. This is quite interesting: because excitonic effects are so important in 2D materials, one would expect this kind of effect to be very large in any kind of 2D semiconductor, and in fact we also find that for nonlinear responses such as sum-frequency or second-harmonic generation, you can also get orders-of-magnitude enhancement. Another very interesting result is that you get DC charge transport with illumination at frequencies inside the gap. This phenomenon could be extraordinarily interesting and important for possible photovoltaic applications. Here is an example of second-harmonic generation for the same system. What is plotted is chi-2, and again the blue curve is without the electron-hole interaction and the red curve includes excitonic effects. So again, you see that there can be orders-of-magnitude enhancement in the nonlinear optical response. Now, let me turn to field-driven phenomena within time-resolved ARPES. Normally, you think of probing excitons by optical means, but recently we and others have realized that you can also probe them using time-resolved ARPES. The idea here is that you excite the system with a pump, an optical photon, putting it in a non-equilibrium state, and then later you probe it with ARPES. So here is an example, again for monolayer germanium sulfide. What we do here is simulate this pump-probe experiment by first shining light with a frequency that corresponds to the prominent first exciton excitation energy of the system, and then we probe the spectral function after a certain delay time.
So on the left here is the ARPES spectrum when we neglect the electron-hole interaction in the calculation. What you see, as expected, are the occupied bands, and you also see some of the unoccupied conduction bands because of the pump photons. When we turn on the electron-hole interaction, you get this spectrum, which is quite different. You see some novel features in the unoccupied-state energy regime, in fact several series of new features, and those are excitonic features. By reading out the energy position, you get the exciton excitation energy, and the spread of the intensity here in fact gives you information on the wave function of the exciton in k-space. That is, you can think of an exciton as a linear combination of free electron-hole pairs, and this intensity here is, in fact, related to the coefficients of that combination. This kind of experiment has become doable in the last few years, and people have seen these features experimentally. So by doing this kind of calculation, you can directly get out the exciton energy and its wave function. Another interesting feature is that you can see renormalization of the bands due to the photoexcitation, and if you increase the intensity of the pump light, you can see more dramatic renormalization. This is shown by this example here, looking at the same time-resolved ARPES for the case of monolayer molybdenum disulfide. The A exciton, the prominent exciton in this system, is at 1.8 electron volts. Here, we pump the system with a photon frequency of 1.7 electron volts, which is slightly off resonance, and you do see this excitonic feature, which gives you the exciton excitation energy, and not much happens at the top of the valence band. But if we now do the same simulation with the frequency exactly at resonance, then you again see this excitonic feature, but the dispersion at the top of the valence band now changes.
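The relation just described, between the measured intensity and the exciton envelope coefficients, can be written schematically as follows. This is the standard textbook form and my own reconstruction in my own notation, not necessarily the exact expression on the slides:

```latex
|S\rangle \;=\; \sum_{vc\mathbf{k}} A^{S}_{vc\mathbf{k}}\,
\hat{c}^{\dagger}_{c\mathbf{k}}\,\hat{c}_{v\mathbf{k}}\,|\mathrm{GS}\rangle ,
\qquad
I(\mathbf{k},E) \;\propto\; \sum_{v}\big|A^{S}_{vc\mathbf{k}}\big|^{2}\,
\delta\!\big(E-\varepsilon_{v}(\mathbf{k})-\Omega_{S}\big),
```

where the pumped exciton S with energy \(\Omega_S\) gives a photoemission sideband roughly \(\Omega_S\) above the valence band \(\varepsilon_v(\mathbf{k})\), with a momentum distribution set by the exciton envelope \(A^{S}_{vc\mathbf{k}}\).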
In fact, it changes dramatically, and this kind of camelback feature that you see both at the top of the valence band and in the excitonic features is, in fact, an exciton-polarization-driven Floquet effect in this system. Now, let me talk about another very interesting optical phenomenon in 2D materials. Now we look at a bilayer system, here molybdenum diselenide on tungsten diselenide. What one finds experimentally is that if you photo-excite layer A, you can actually influence the optical response of layer B on a very, very fast time scale, on the order of hundreds of femtoseconds. Traditionally, in the literature, this has been interpreted as a single-particle transfer, a charge-transfer mechanism. That is, when you do this excitation, you create holes here, and these two systems form a type-II band-alignment system; the hole gets transferred here, and then when you probe layer B, it is influenced. But it turns out that if you do a DFT calculation, the coupling between the tops of the valence bands of layers A and B is actually very, very weak. It cannot give you this fast time scale, and you have to invoke phonons, but even with phonons, it is still quite slow. This got us thinking that maybe excitonic effects are important, because excitons are so dominant in these systems. So what we did is apply the TD-aGW method to this system. The reason is that if you create an excitation in layer A, there is actually a continuum, a bunch of interlayer excitons, in resonance with the excitation in one of the layers. So you can quickly transfer an intralayer exciton to an interlayer exciton, and that can lead to a change in the optical response of the unpumped layer.
And I won't have time to go into the details, but basically, by calculating the time propagation of the Green's function, you can get the optical response. When we do that, what we find is that the diagonal elements of the density matrix, which correspond to the occupations of the single-particle states, evolve on a very slow time scale after the pump. But the electron-hole coherences, the off-diagonal elements of the density matrix, which describe the transfer of an intralayer exciton to an interlayer exciton, actually evolve on a very fast time scale. And if you look directly at the calculated change in the absorption of the system, that is, you pump layer A and then measure layer B, you find that there is a very fast change in the absorption when you include the full electron-hole interaction effects. If you do the same calculation within time-dependent Hartree, you don't find any change at all on this time scale. And the rise time that we calculate from our approach is on the order of 300 femtoseconds, which agrees quite well with the experimental value of about 200 femtoseconds. So this is an alternative explanation, in addition to the phonon-induced charge-transfer mechanism. Now let me spend the next three minutes or so talking about electron-phonon coupling. As we all know, most electron-phonon calculations nowadays are done within DFT, which basically describes the coupling of the Kohn-Sham electrons to the phonon field. This has worked very well for many materials, but it fails for some. What we have done recently is develop a method to include self-energy effects at the GW level in the electron-phonon coupling, using a linear-response theory similar to density-functional perturbation theory.
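The replacement just described can be written schematically as follows. This is my reconstruction of the GWPT idea in standard notation, not necessarily the convention used on the slides:

```latex
g^{\mathrm{GWPT}}_{mn\nu}(\mathbf{k},\mathbf{q})
 \;=\; g^{\mathrm{DFPT}}_{mn\nu}(\mathbf{k},\mathbf{q})
 \;+\; \big\langle m\,\mathbf{k}+\mathbf{q}\big|\,
   \partial_{\mathbf{q}\nu}\!\left(\Sigma - V_{xc}\right)
   \big|\, n\,\mathbf{k}\big\rangle ,
```

where \(\partial_{\mathbf{q}\nu}\) denotes the linear-response variation with respect to the phonon displacement of branch \(\nu\) at wavevector \(\mathbf{q}\), \(\Sigma\) is the GW self-energy, and \(V_{xc}\) is the DFT exchange-correlation potential whose contribution it replaces.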
So basically, for the electron-phonon matrix element, you take away the term that involves Vxc and replace it with a term involving the self-energy, within linear response. And what you find is that correlation effects, self-energy effects, can change the coupling g by as much as a factor of two in some materials. So here are some results we have obtained for a series of metals. For simple normal metals, DFPT and GWPT give you basically similar results. However, for some metal oxides like this one, you can have a difference of a factor of two in the coupling constant lambda. And this factor-of-two enhancement in lambda, in fact, explains the very large Tc of this barium potassium bismuth oxide material, whereas DFT cannot explain the Tc at all. And similarly, the self-energy enhancement of the electron-phonon coupling can also explain the kink that one sees in photoemission experiments on the cuprates; it enhances the kink by a factor of three. A very recent example is the case of superconductivity in the nickelates. This is a system people are very interested in because it is an analog of the cuprates, and if one can make it a superconductor, then one might find another system with unconventional superconductivity. This has been done very recently, in the last three or four years, and what one finds is that the Tc in this material is quite high, 23 kelvin. But if you do DFT and assume BCS-type phonon-induced superconductivity, the predicted lambda is 0.2 and the Tc is much less than 1 kelvin. I see that the chairman is getting nervous, so I will just give you a very brief version of what happens here, and it is very interesting. It turns out the nickelates are very different from the cuprates. It is a two-band system: it has the nickel d band near the Fermi energy, but there is also a rare-earth band near the Fermi energy.
So the band structure within DFT is shown here. If you do a GW calculation, those two bands swap in position, which changes the Fermi surface dramatically, and also the electron-phonon coupling constant. And here is the Eliashberg equation result calculated with DFPT and also with GWPT, and what you see is a factor of 5.5 enhancement in the coupling constant. This comes from the fact that the rare-earth band is introduced at the Fermi surface, and that rare-earth band gives you a huge lambda, so you have a bimodal distribution of lambda. We then solve the k-dependent Eliashberg equation and predict very interesting superconductivity in this material. And we are actually sticking our necks out now: we are making the prediction that this material is a two-gap s-wave superconductor. Here is the distribution of the gap at low temperature: for the rare-earth band the gap is very high, but for the nickel band it is very low. And, in fact, the gap distribution as a function of temperature is shown here, and we get a Tc of about 22 kelvin, which is very close to experiment. So it is very, very different from the cuprates. Our results can also explain the tunneling experiment data. The tunneling data for clean samples look like this, clearly showing two gaps. Then, for other, dirty samples, one sees a V shape, and people have interpreted this V shape as being d-wave. But it turns out that if you take this two-gap model and broaden it with impurity-scattering effects, you get the V shape. So, since I am over time, let me leave you with this message: many-body perturbation theory, with the screened Coulomb interaction and the n-particle Green's functions, can be a powerful, versatile approach for understanding and predicting a variety of spectroscopic properties and field-driven, time-dependent phenomena in real materials, but you have to treat the interactions appropriately for the phenomena you are interested in.
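The broadening argument for the tunneling spectra can be sketched numerically: take a two-gap BCS density of states, add a Dynes-type impurity broadening, and the two sets of coherence peaks fill in to a V-shaped spectrum. The gap sizes, band weights, and broadening values below are invented, illustrative numbers, not the calculated values from the talk.

```python
import numpy as np

# Sketch of the two-gap-plus-broadening argument for the tunneling data.
# Gaps, weights, and the Dynes parameter gamma are made-up numbers.

def dynes_dos(E, delta, gamma):
    """Dynes-broadened BCS density of states (equal to 1 in the normal state)."""
    z = E - 1j * gamma
    return np.abs(np.real(z / np.sqrt(z * z - delta ** 2)))

def two_gap_dos(E, gaps=(1.0, 4.0), weights=(0.7, 0.3), gamma=0.05):
    """Weighted sum over two s-wave gaps: a crude two-band model."""
    return sum(w * dynes_dos(E, d, gamma) for d, w in zip(gaps, weights))

E = np.linspace(-8.0, 8.0, 1601)
clean = two_gap_dos(E, gamma=0.02)  # sharp: two sets of coherence peaks
dirty = two_gap_dos(E, gamma=1.0)   # broadened: V-shaped, d-wave-looking
```

With small gamma the spectrum is fully gapped at zero bias with two pairs of coherence peaks; with strong impurity broadening the gap fills in, mimicking the V shape that had been read as d-wave.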
So as the very last slide, I want to acknowledge my excellent postdocs and students who did all the work, and I'll leave you to look at this slide. Thank you very much for your attention. Thank you very much for this fascinating talk. So we have time for a few questions. Yes. Yeah, you have this two-gap structure, as you said, but the symmetry of both gaps, according to you, is s-wave, right? That's correct. But if I understand correctly, one of the bands is actually coming from nickel x-squared minus y-squared, so the expectation was that it would be d-wave. Do you have any comments on that? Of course, you say that the V shape comes from the disorder and so on. Right. What happens is that, according to our calculation, and there is no phase-sensitive experiment yet, no really detailed phase-sensitive tunneling measurement, superconductivity is driven by the rare-earth band, and that band has a very strong electron-phonon coupling, due to the fact that the self-energy effects bring it to the Fermi level, to the Fermi surface, and also enhance the electron-phonon coupling. So the driving force is the rare-earth band, and that is why you get a two-gap s-wave state and a large Tc. So all these predictions remain to be verified by experiment. Thank you. Thank you. And thank you very much also from my side; it's a very beautiful talk, and very, very clear. Very nice. Let me continue on the previous question, because I was also curious about the doped neodymium nickelate. If I remember correctly, experimentally the Hall coefficient changes sign as a function of temperature, right? Suggesting that the carriers actually change from electron-like to hole-like. Do you have a view on this in this context? We have not looked into that very carefully, but there are many pockets in the system, there are electron and hole pockets and so on.
So it might be reasonable, using the kind of band structure we have within the GW approximation, to analyze that, but we haven't done it. So, a question here. Thank you for your very nice talk. I have a question about the correlation effects on the shift current in germanium sulfide. I saw that there is an enhancement, okay, but there are also some sign changes when you include the excitonic effects. Do you know why? It seems very strange. In general, there will be sign changes even in the independent-particle picture of shift currents. Depending on the frequency of the light, even with the same polarization, you can change the direction of the current, and that is related to the shift vector in the band structure, which is related to the Berry connections and so on. So that is not surprising, because you see it even in the independent-particle picture. Now, for the two dominant peaks below the gap in the plot where you saw the sign change, that is related to the nature of the two excitons: one is the A exciton, the other is the B exciton. And again, if you change the polarization of the light, you can actually change the sign of the coefficient. And that is very exciting, because one of the things we are doing now with Mike Crommie is to try to measure the current on a 2D material, and by changing the frequency of the light we may be able to see this. Well, I think we have to move on in the interest of time. Thank you very much again. Our next speaker is Johannes Lischner from Imperial College, and he will tell us about large metallic nanoparticles. Thank you very much. And I also would like to start by thanking the organizing committee for this opportunity to present the research of my group on modeling hot-carrier generation in large metallic nanoparticles.
So this is a talk about so-called nanoplasmonic hot carriers, where one exploits the capability of metallic nanoparticles, such as a gold nanoparticle, to convert the energy of light into energetic, or hot, electrons and holes. And what we mean by hot is typically a large energy compared to the thermal energy. Such hot carriers can be very useful for energy-conversion applications such as photocatalysis and photovoltaics, also in sensing, and there are many other applications. So here you see just one example, where one basically uses a gold nanoparticle to convert light into holes and electrons. The holes can drive an oxygen evolution reaction; the electrons can be transferred to a titania nanoparticle and then drive a hydrogen evolution reaction, so what is basically happening is that water is split into oxygen and hydrogen. So this is one of the ideas where these nanoplasmonic hot carriers can be very useful. So where does this ability of metallic nanoparticles come from? It has a lot to do with the concept of a localized surface plasmon. A localized surface plasmon is a collective oscillation of all the conduction electrons inside the nanoparticle, and it is responsible for the very large absorption cross sections that we find in metallic nanoparticles: when the frequency of the light is resonant with this natural oscillation frequency of the electrons, you get a big peak in the absorption spectrum and very large light absorption. But, much like a pendulum that slowly loses energy through various damping mechanisms, this collective oscillation of all the electrons is a damped oscillation: the energy is converted from the collective motion of the electrons into single electron-hole pairs by what is known as the Landau damping mechanism, and we can visualize it in the following way.
So here we see that the electrons are pushed down by the applied electric field of the light, and that creates an internal field, and this internal field in turn gives rise to excitations of electron-hole pairs, which then constitute the damping channel. As a concrete experimental example, here is work from the Halas group at Rice University, who demonstrated that they can use these hot carriers to dissociate hydrogen on a gold nanoparticle. So here is the gold nanoparticle, and what is happening, they claim, is that the hydrogen molecule adsorbs on the surface and then hot electrons tunnel or hop onto the hydrogen, and that then gives rise to the dissociation; this is a chemical reaction that is very difficult to induce by other means. Okay, so this is the experimental motivation. Now, of course, what I would like to do is develop a way to model these kinds of hot-carrier generation processes. So how do we go about doing that? The first thing we do is map this very complicated problem of interacting electrons onto a problem of non-interacting electrons moving in an effective potential, which of course is very familiar to all of you, but this effective potential is slightly different from what you are used to. So, of course, there will be a potential arising from the illumination.
This is just the electric field from the light, which polarizes the electrons in the nanoparticle, and this polarization gives rise to a restoring force arising from electron-electron interactions, and so to an induced electric field inside the material. We can calculate the total potential arising from these two contributions using electrodynamics. Here, for example, for the special case of a spherical nanoparticle, one can solve the Maxwell equations within the so-called quasi-static approximation analytically and get a solution for the potential, which is shown here and involves the dielectric function of the bulk material, which for simplicity we just take from experiment. Okay, so once we have the effective potential in which the electrons move, we can then go ahead and calculate the rate at which hot carriers are generated, simply by evaluating Fermi's golden rule, which is shown here. We calculate the rate at which electrons with energy E are generated by photons of frequency omega, and this has all the usual ingredients: there is a matrix element squared of the effective potential, an energy-conserving delta function, some occupancy factors, and a delta function which basically tests whether the electron that is generated has the energy that we are interested in. So, fairly standard textbook stuff.
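As an illustration of the quasi-static solution just mentioned, here is a minimal sketch with a Drude dielectric function standing in for the experimental data. The plasma frequency, damping, and units are made-up numbers, and the function names are my own: the dipole polarizability of the sphere peaks at the localized-surface-plasmon condition eps(omega) = -2 * eps_m.

```python
import numpy as np

# Quasi-static (dipole) response of a metallic sphere, with a Drude
# dielectric function as a stand-in for tabulated experimental data.
# omega_p, gamma, and the radius are illustrative, arbitrary-unit values.

def eps_drude(omega, omega_p=9.0, gamma=0.1):
    """Drude dielectric function of the bulk metal."""
    return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

def polarizability(omega, radius=1.0, eps_m=1.0):
    """Quasi-static dipole polarizability of a sphere in a medium eps_m."""
    eps = eps_drude(omega)
    return radius**3 * (eps - eps_m) / (eps + 2.0 * eps_m)

omega = np.linspace(1.0, 12.0, 2000)
absorption = omega * np.imag(polarizability(omega))  # ∝ absorption cross-section
omega_res = omega[np.argmax(absorption)]  # should sit near omega_p / sqrt(3)
```

For the Drude model in vacuum the resonance condition Re eps = -2 gives omega_res close to omega_p / sqrt(3), which is the big absorption peak discussed in the talk; for nonspherical shapes one has to solve the electrodynamics numerically instead.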
Now, to evaluate this expression, all we need are the wave functions and energies of the electronic states in a nanoparticle, and to get those we need to do some quantum mechanics and solve the Schrodinger equation, which is shown up here. That means we have to diagonalize a Hamiltonian, at a cost that typically scales like the third power of the linear dimension of the Hamiltonian, and the problem we run into very quickly is that the size of the Hamiltonian increases very rapidly with the size of the nanoparticle. If you use standard atomistic approaches, like ab initio density functional theory, you can get to nanoparticles with maybe a few thousand atoms, maybe ten thousand atoms, but experimentally relevant nanoparticles in this field of nanoplasmonics typically contain millions of atoms. So there is a big gap between what we can simulate and what is relevant to experimentalists, and this is why many researchers in this field actually use non-atomistic approaches, jellium or spherical-well models, which you can solve for much larger nanoparticles. But these come with a lot of problems: you don't really have material-specific band structures, no d bands, and you can't really describe the different facets that the nanoparticle can have. So there are a lot of limitations, and the question is how we can bridge this gap and model the electronic structure of large metallic nanoparticles. The way we tackle this problem is with a set of methods known as spectral methods, and I'll try to explain briefly what those are. So we go back to Fermi's golden rule, which I have just copied here, exactly what I showed you a couple of slides ago, and we rewrite it a little bit. We insert two delta functions and we also add two integrals, so we haven't done anything, nothing has changed, but now we can define this function phi here,
which is a sum over all the transitions of matrix elements and delta functions, and this function phi contains all the information about the electronic structure of the nanoparticle; the rest is just delta functions and occupancy factors. Now, phi we can expand in terms of a series of Chebyshev polynomials, so it will look like this: there are two energy arguments, so we get products of two Chebyshev polynomials, and there will be coefficients mu-mn, which are frequency dependent. It turns out that we can write these coefficients in the following way, as a trace of four operators: two of these operators are Chebyshev polynomials of the Hamiltonian, and the other two are the effective potentials. So this is what we need to evaluate, and to do that we use a fairly well-known trick, where we approximate the trace by the expectation value in a random vector, which becomes more and more accurate as the nanoparticle gets bigger and bigger. All right, so we need to evaluate this expectation value of these four operators in a random vector. How can this be done? To do this, we basically exploit the recursion relation of the Chebyshev polynomials, and we define this vector v-n, which is the random vector acted upon by the potential and the Chebyshev polynomial of the Hamiltonian. If we have v-n, then the coefficient that we want to calculate is just the overlap of v-n and v-m, and here is basically how we calculate these vectors, using the recursion relation of the Chebyshev polynomials. So what we need to do to generate these coefficients is basically to act repeatedly on the random vector with the Hamiltonian, and this is very efficient if the Hamiltonian is sparse. What we do, then, is use a tight-binding Hamiltonian with hopping integrals fitted to ab initio band structures, and we use a model with nine orbitals per atom. So if we want to
model a silver nanoparticle, it will be the five 4d orbitals, the one 5s orbital, and the three 5p orbitals, and that gives us a relatively sparse representation. This then allows us to model hot-carrier generation rates in nanoparticles containing millions of atoms. So let me show you some results that we can obtain with this method. The simplest kind of system we can look at is spherical nanoparticles, so here we are looking at spherical gold nanoparticles and at the size dependence of the hot-carrier generation rates. On the x-axis you have the electron energy, with zero at the Fermi level; the blue curves are the generation rates of the holes and the red ones are the generation rates of the electrons, and the different panels correspond to the different nanoparticle diameters of two, four, and eight. We are looking at light at the plasmon frequency of a gold nanoparticle in vacuum; this is the absorption spectrum here, and we are looking at the peak. What we can see immediately is that for the smaller nanoparticles, which contain a few thousand atoms, there are a lot of peaks, which arise from quantum size effects. So what this tells us is that the nanoparticles we can typically model with ab initio methods suffer quite severely from these finite-size effects, and only if we go to bigger systems, containing about 10,000 atoms or more, do these peaky distributions merge into something relatively smooth. How do we understand these smoother distributions for the bigger nanoparticles? For that we quickly need to remind ourselves of the band structure of gold, which has a set of bands with sp character, which are dispersive and cross the Fermi level, and another set of less dispersive d bands, which start about 2 eV below the Fermi level. So with this in mind, we can now try to understand these hot-carrier generation curves. There is one set of peaks, indicated by these green arrows here, which arise from
interband transitions, that is, transitions from a d band to an sp band. You can see that these interband transitions generate very energetic holes, but the electrons stay very close to the Fermi level, which means they are not very energetic. There is another set of peaks, indicated by the orange arrows, which arise from what is known as intraband transitions: transitions starting in the sp band and ending in the sp band. These transitions are forbidden in the bulk, but in a nanoparticle they are allowed because the surface breaks translational invariance, and they give rise to energetic electrons but relatively unenergetic holes. We can also look at the frequency dependence. Here we are flexing our muscles a little and looking at a nanoparticle that now contains more than a million atoms, and we can see that as more energy becomes available with higher-frequency photons, we can excite a broader range of hot electrons and hot holes. We can also look at other materials. Here we are looking at a silver nanoparticle, and because the d bands in silver are further from the Fermi level than in gold, the peak from the interband transitions is further from the Fermi level, but otherwise it is quite similar. So far I have told you about spherical nanoparticles, but one of the cool things about the nanoscale is that you can make nanoparticles in many different shapes. Here I am showing some experimental images from my collaborator Emiliano Cortes in Munich, who has prepared nanocubes, rhombic dodecahedra and octahedra of gold. What is interesting is that these different shapes also expose different facets: the nanocubes expose {100} facets, the rhombic dodecahedra expose {110} facets and the octahedra expose {111} facets. So there is a close connection between nanoparticle shape and the exposed facet.
Then what they did was to use these nanoparticles to reduce carbon dioxide electrocatalytically. Here is the experimental setup; it is quite complicated, but we don't need to understand all the ins and outs. They feed CO2 into the chamber; this is the working electrode, where the gold nanoparticles are located; the CO2 is converted with some electrons into CO, and the CO is analyzed in a gas chromatograph. Everything is connected to a battery, and they can do the experiment with a lamp on or in the dark. First let's look at what happens in the dark. On the x-axis is the applied potential and on the y-axis is essentially the efficiency of CO2 conversion. What they see is that the best conversion rate is for the rhombic dodecahedra, while it is somewhat lower for the cubes and the octahedra. This is consistent with what is known from both experiments and DFT calculations: the smallest activation energy for CO2 reduction is on the {110} surface. But that is in the dark, so now let's see what happens when they turn on the light. What they see are these red curves: there is always an enhancement, but the enhancement is much, much stronger for the cubes and the octahedra, and relatively small for the rhombic dodecahedra. This is something we wanted to understand, so we carried out calculations in which we created models of cubes, octahedra and rhombic dodecahedra, each containing about 200,000 atoms. Of course we can no longer solve the Maxwell equations analytically, as we could for the spherical nanoparticles, so we use the finite-element method as implemented in the COMSOL package, and then we calculate the hot-carrier generation rates for the different shapes. This is what we get: just by looking at these hot-carrier generation rates, it is very obvious that there are large generation rates for the cubes and the
octahedra, and significantly smaller generation rates for the rhombic dodecahedra, which is consistent with the experimental findings. When we integrate these curves and look at the total number of hot electrons generated in the different systems, we again see similarly large excitation rates for the octahedra and the cubes, and much smaller generation rates in the rhombic dodecahedra, again consistent with experiment. If you want to understand where this difference comes from: we are still working on figuring that out in detail, but a big part of the story is the electric-field enhancement you get for the different shapes. Here are our plots of the electric-field enhancement in the different systems; each graph is a cut through a different slice of the particles, but the bottom line is that you see a lot of red, corresponding to high electric fields, for the nanocubes and also the octahedra, and much less red for the rhombic dodecahedra. The field enhancement is much larger for the top and the bottom rows, which then translates into higher hot-electron generation rates. The last part of the story I want to tell is about more complex architectures at the nanoscale. Beyond just looking at different shapes, you can play even more complex games and create nano-architectures by combining two different materials. In particular, one idea is to combine a plasmonic material like gold with a good catalyst like palladium, and to look at these systems for the conversion of formic acid into hydrogen. The group in Munich looked at four different architectures. Two of them are core-shell nanoparticles, where the gold is in the middle and is surrounded by a thin shell of the catalyst, either the pure catalyst or an alloy of gold and palladium. The second class of systems are the so-called satellite systems,
where they have a big gold nanoparticle decorated by little nanoparticles of the catalyst, or in fact by little nanoparticles which are themselves core-shell nanoparticles. Then they play the same type of game and look at the enhancement of the chemical activity under illumination, and they see the biggest enhancement for the satellite systems and a much smaller enhancement for the core-shell nanoparticles. Again, this is something we wanted to study, so we looked at these four systems: a plain palladium nanosphere, a gold-palladium core-shell nanoparticle, a satellite system, and a satellite system where the small nanoparticle is itself a core-shell system. We modeled these in the same way: we calculated the quasi-static potential using the finite-element method for the full system, consisting of both the palladium and the gold parts, but for the hot-carrier part we only evaluated the hot carriers generated in the palladium. This is our result: it is very obvious that the largest generation rates are obtained for the two satellite systems. The biggest generation rate is where the satellite is itself a core-shell nanoparticle, the second is the simple satellite system, and the pure palladium nanoparticle and the core-shell particle are down there, much, much smaller. Again, at least this trend is consistent with the experimental enhancements when the light is turned on. And again we can ask where these enhancements come from, and a large part of the story is electric-field enhancement, because you get very large electric fields in small gaps. In one satellite system you have a small gap between the palladium nanoparticle and the gold nanoparticle, and in the other system you could say the palladium sits in a gap between two gold antennas, which leads to an even larger field enhancement, which again translates
into high hot-carrier generation rates. All right, that is all I wanted to say. I have shown you a new atomistic approach that allows us to look at hot-carrier generation in large metallic nanoparticles, and I applied this method to three different systems: first, spherical nanoparticles of gold and silver, where we looked at the competition between interband and intraband transitions as a function of nanoparticle size; then the effect of the shape of gold nanoparticles on carbon dioxide reduction; and finally hot-carrier generation in gold-palladium nano-architectures for hydrogen production. Finally, I want to acknowledge the team that did the work, most of it by Hanwen Jin, who is an excellent PhD student at Imperial, and I thank you for your attention.

Thank you very much, Johannes, for this very interesting talk. I'm sure there are questions. Hi Johannes, thanks for the nice talk. I have two questions. The first is that with your method you don't really see the plasmon, right? In the sense that you see single-particle excitations, not the many-body excitation associated with plasmonic effects. And the second question: when the satellite is a core-shell structure by itself, is it stable? Is the palladium covering the whole gold structure through these shells, or do you get a separation, something like Janus particles? How is the coverage in this case? Okay, let me answer your second question first. These systems are all motivated by the experiments, so we were assuming they are stable because the experimentalists tell us. Just as a word of caution, what is actually happening in the real world is that these nanoparticles are covered by ligands, and the ligands are very important in stabilizing the nanoparticles, but
thermodynamic stability is a problem. For example, the experimentalists actually set out to make alloy satellites, like they did the core-shell satellites, but what ended up happening is that the satellite nanoparticles came out core-shell type. So it is a complicated question that we are not trying to answer. To go to your first question: you are right that we evaluate Fermi's golden rule, which does not capture electron-electron interactions. But the perturbation we use comes from a solution of Maxwell's equations, and Maxwell's equations do capture the plasmon peak. So in that sense we do have the electric field of the plasmon as input to our Fermi's golden rule, and in that sense we do capture the plasmon, if you know what I mean. Thank you. Any other question?

Hi. The chemical reaction takes place on the surface, whereas you are computing the carrier generation in your whole object. What about diffusion? Is this realistic? Should you consider that only part of the electrons generated participate in the chemistry at the surface? That's a really good question. At this point our method gives us the full hot-carrier generation rate and does not give spatial resolution. The thing I do know is that intraband transitions are enabled by the surface, so they only happen at the surface, while interband transitions happen everywhere; so we already know that electrons arising from intraband transitions must be at the surface. But of course you raise a very good point, and that is the point of relaxation and diffusion, all of these important effects which are not included; this is just hot-carrier excitation. There is a lot of discussion about how quickly relaxation happens, and whether the carriers relax before they can participate in chemical reactions. All of that isn't included yet, but definitely for future work.
Anything else? My question is about the shape dependence: does it mean that the field enhancement basically depends on how sharp the corners are, so the more roundish the shape, the less enhancement you have? Exactly, in a nutshell that's it. A lot of people are interested in nanostars, very spiky things, and then you get a lot of enhancement. Okay, I think we have to move on. Again, thank you very much for this talk.

So, from a previous session on time-dependent phenomena we move to another one, on Hubbard-U treatments in materials simulations, and this session will bridge over the coffee break. The first talk is by Young-Woo Son from the Korea Institute for Advanced Study, and he is going to tell us about efficient ab initio methods for extended Hubbard interactions. Okay. First of all, I thank the organizers for giving me the chance to present my group's recent efforts to implement extended Hubbard interactions in Quantum ESPRESSO. These are my postdocs who contributed to this project, Dr. [name unclear], who has since moved to Samsung, Dr. [name unclear], and the other postdocs here. I don't need to give a long introduction for this community, because everybody knows that LDA or PBE is not designed to give a quasiparticle energy gap, even though in practice many people compute gaps with these semilocal approximations to the exchange-correlation functional. For prototypical covalent materials, zinc oxide, two-dimensional molybdenum disulfide, or even graphene, the quasiparticle gap or the group velocity does not agree well with experiment. Moreover, the result is sometimes qualitatively wrong: indium
arsenide, for example, comes out as a topological insulator if you compute it with GGA, whereas it is just a typical normal insulator if you include the correlation effects properly. For cobalt oxide you need a U to compute the energy gap. Moreover, for complicated systems such as molecular junctions, if you use the Landauer formula or the non-equilibrium Green's function technique without a self-interaction correction, the LDA current is qualitatively wrong: there should be almost no current, but you get a lot of current. There are many approaches to overcome this at the Hamiltonian level. For example, Steve already introduced the diagrammatic GW approximation, and there are local self-interaction error corrections through a Hubbard U, hybrid functionals, dynamical mean-field theory, and so on; there has been a lot of effort. The one thing I want to address in this talk is that all of these are very good, but if you want to do many calculations very fast, for example for high-throughput work or database-driven research, these accurate computational methods are quite expensive and hard to use because of the resources they need. Moreover, the hybrid functionals or the mBJ-LDA functional have intrinsic errors for specific systems the functional was not designed for. And for the U value, people sometimes use an empirical correction, but if you do a lot of database or large-system work you need a way to calculate the U value. So this is the topic: how to implement local and nonlocal self-interaction error corrections, without any empiricism, with very fast
calculations, while simultaneously achieving accuracy comparable to GW. So this is the Hubbard Hamiltonian, and this is the Dudarev form of the DFT+U formulation, with occupation matrices projected onto some localized orbitals; here I is the atom index and m, m' are orbital indices. To compute the U values there are empirical choices; there is the linear-response theory developed by Cococcioni and de Gironcoli a long time ago; in momentum space, Timrov and co-workers recently developed a density-functional perturbation theory formalism to compute U; and there is another way of evaluating the U value from an unrestricted Hartree-Fock formalism. We focus on the last one, developed recently by Agapito and co-workers, because, as I mentioned, the original purpose of this work is to compute many systems very fast: we want to compute the U value within the self-consistent loop, without an additional computational routine, so we think this is good. Let me introduce it a little. I think the original idea was developed by Mosey and Carter in a 2007 PRB paper. Looking at molecular systems, they showed that the U and J of the Hubbard Hamiltonian can be thought of as a kind of projected pairwise Hartree-Fock interaction. The ACBN0 functional is nothing but an extension of this molecular-system idea to Bloch wave functions. So this is the Hartree-Fock representation: this is the bare electron-repulsion integral, the bare interaction for a given atom I, with m labeling the different orbitals, an inter-orbital Coulomb repulsion, and this P is the key idea.
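For orientation, the simplified rotationally invariant (Dudarev-type) DFT+U energy correction that these occupation matrices enter is, schematically (written here as a sketch from the standard literature, not necessarily the exact variant used in the talk):

```latex
E_{\mathrm{DFT}+U} = E_{\mathrm{DFT}}
  + \sum_{I,\sigma} \frac{U^{I}}{2}\,
    \mathrm{Tr}\!\left[\mathbf{n}^{I\sigma}\left(\mathbf{1}-\mathbf{n}^{I\sigma}\right)\right],
\qquad
n^{I\sigma}_{mm'} = \sum_{k v} f^{\sigma}_{k v}
  \langle \psi^{\sigma}_{k v} | \phi^{I}_{m'} \rangle
  \langle \phi^{I}_{m} | \psi^{\sigma}_{k v} \rangle .
```

The correction vanishes when the occupation eigenvalues are integer (0 or 1) and penalizes fractional occupations, which is how it counteracts the self-interaction error of the semilocal functional.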
When you project the wave functions, as a density matrix, onto the submanifold of the m, m' orbitals, you scale the bare integrals with this value N. What is N? N is the weight of the total Bloch wave function projected onto the correlated orbitals of interest. When it goes to zero, the expression reduces to LDA or PBE; when it goes to one, it is just Hartree-Fock. In an actual calculation it takes some value in between, depending on your system, and it is also position dependent through the index I. So these renormalized occupancies control the self-energy correction, the U value, of the system. If you equate this expression to the Hubbard Hamiltonian, you end up with a kind of hybrid-type expression: a bare Coulomb interaction with weights for U and J. This is nice, and it can be computed within the self-consistent loop with an unrestricted Hartree-Fock-like formalism. But it is still a DFT+U formalism, so it still has all the problems DFT+U has. For example, semiconductors are not described very well; in many metal-oxide compounds there is over-localization, with electrons too strongly localized on the orbitals; strong p-d hybridization is very hard to treat; and nonlocal correlation cannot be dealt with. So let us decompose the full Coulomb interaction. This is the general expression; to focus on the single-orbital problem we drop all the orbital indices and keep just the site indices i, j, k, l. When all the sites are equal, it reduces to the on-site Hubbard interaction. Then, in the hierarchy of terms, if i = k and j = l but the two sites are not equal to each other, this is the inter-site interaction.
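Stepping back to the renormalized-occupancy idea above: in a toy form, the pseudo-hybrid U is a weighted average of bare on-site Coulomb integrals, scaled by the renormalized occupancy N so that N = 0 recovers plain semilocal DFT (no correction) and N = 1 the bare Hartree-Fock average. The function name, array shapes and numbers below are purely illustrative, not the production ACBN0 formula:

```python
import numpy as np

# Toy, schematic sketch of an ACBN0-style renormalized-occupancy U.
# All names and shapes here are illustrative assumptions.

def effective_U(eri_diag, occ, N):
    """eri_diag[m, mp] = bare integral (m mp | m mp) on one site,
    occ[m]            = projected orbital occupancies,
    N                 = renormalized weight of the correlated subspace
                        (N = 0 -> plain semilocal DFT, N = 1 -> bare HF average)."""
    w = np.outer(occ, occ)                    # pair weights n_m * n_m'
    U_bare = (w * eri_diag).sum() / w.sum()   # weighted average of bare integrals
    return N * U_bare                         # interpolate between DFT and HF
```

With eri_diag filled with a constant 10 eV, effective_U returns 0 for N = 0 and 10 for N = 1, interpolating linearly in between; in the real scheme both N and the occupancies are updated inside the self-consistent loop.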
The next term in the hierarchy is correlated hopping through the bond charge, and the fourth one is the exchange term responsible for ferromagnetism. That is the decomposition of the Coulomb interaction. Almost ten years ago, Campo and Cococcioni developed a way of including the inter-site interaction within a DFT formalism, so everything is ready. Start from the DFT+U formalism and include the V term, and you end up with this; in the fully-localized-limit consideration, this term is just the double counting for the DFT part, so we delete it, and this is the final expression of the DFT+U+V formalism in a density representation. You can immediately notice something interesting: as is well known, the U functional forces integer filling of the specific localized orbital on atom I, whereas this inter-site term pushes electrons away from the localized site onto the bond. So V controls the bonding interaction, and the two terms compete, as we would expect. From this formalism we now combine the two ideas together. How? The inter-site interaction, as I showed before, is already written down in the Campo-Cococcioni paper; the indices I, J mean that we consider pairs of atoms located at different positions within a given cutoff. That cutoff is the only parameter in our formalism: you set how many pairs you need. The occupations are nothing but density matrices; we use Löwdin-orthonormalized projectors, but you can use anything you want. Then we write down the ACBN0-type expression for the inter-site interaction; one part of the double counting is here, and here is the generalized projector for this V interaction, and this is just the Coulomb interaction. To accelerate the computation we fit each projector with three Gaussians.
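Schematically, the DFT+U+V correction just described (in the form introduced by Campo and Cococcioni, reproduced here from the literature as a sketch) reads:

```latex
E_{U+V} = \sum_{I,\sigma} \frac{U^{I}}{2}\,
    \mathrm{Tr}\!\left[\mathbf{n}^{II\sigma}\left(\mathbf{1}-\mathbf{n}^{II\sigma}\right)\right]
  - \sum_{\substack{I \neq J \\ \sigma}} \frac{V^{IJ}}{2}\,
    \mathrm{Tr}\!\left[\mathbf{n}^{IJ\sigma}\,\mathbf{n}^{JI\sigma}\right],
\qquad
n^{IJ\sigma}_{mm'} = \sum_{k v} f^{\sigma}_{k v}
  \langle \psi^{\sigma}_{k v} | \phi^{J}_{m'} \rangle
  \langle \phi^{I}_{m} | \psi^{\sigma}_{k v} \rangle ,
```

where the generalized occupation matrices couple orbitals on two different sites. The minus sign in front of the V term is what lets it compete with U: the U term rewards integer on-site occupations, while the V term rewards occupation shared between the neighboring sites I and J, i.e. on the bond.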
With the Gaussians it is almost analytical, so it is very quick once you have the positions and orbitals. And this quantity is the decisive one: the generalized, projected pairwise density matrix, and this value N here. This N is our ansatz. The renormalized occupation controls the inter-site interaction; there are many possible choices and no rigorous justification, it is an ansatz. We consider that the inter-site Coulomb interaction is controlled by the charges located on the two sites as well as the charge located between them, so we simply add up the projected densities for the electrons on sites I and J. If this goes to zero, the calculation reduces to the original LDA or PBE exchange-correlation functional; if it goes to one, it is Hartree-Fock. Equating everything using these expressions, you end up with a kind of pseudo-hybrid functional for the inter-site Coulomb interaction. We implemented this in Quantum ESPRESSO version 7.0 at our home institution, and the implementation is quite simple: if you have the density matrix and the projected pairwise N value, you can compute everything within the self-consistent loop. The computational time, compared with PBE, does not increase much; it is essentially the same, because the density matrices are already solved for in the self-consistent loop, so the matrices and the functional converge simultaneously and the whole calculation is done in a very short time. Now the results. We computed 23 different three-dimensional solids, from simple covalent semiconductors to III-V and II-VI compounds, ionic compounds and some oxides. This is the theory-versus-experiment plot: this is the line of perfect agreement, this is PBE, this is HSE, and this is GW. GW slightly
overshoots, HSE is a little under the line, and PBE is well below the line; and this is ours. Comparing everything together, we are actually very satisfied. For example, for lithium fluoride GW is here, HSE here, PBE here, and ours is here. For magnesium oxide no method agrees with experiment; magnesium oxide is notoriously difficult. The other materials look similar. You need a projector for this, and we use the projectors inside the pseudopotential: we just extract the radial orbitals from the pseudopotential and orthonormalize them. So there is some dependence on the pseudopotential; we tested two libraries, not exhaustively, and there are dependencies. The band gaps come out very well, and the important thing is that it is very fast. Some examples: silicon comes out at about 116 percent, zinc oxide is almost spot on, lithium fluoride is at about 100 percent, and moreover the graphene group velocity is almost the same as GW and experiment. And, importantly, it works for low-dimensional materials. Take black phosphorus: as you can see, phosphorene has a corrugated structure, so there is strong mixing between the s and p orbitals. Here the number of layers is on one axis and the computed energy gap on the other, down to the two-dimensional limit, and GGA is quite far from experiment. This is the quasiparticle gap for the single layer; this is the hybrid functional HSE06, this is DFT+U on the p orbitals, this is ACBN0, and the yellow triangles are mBJ-LDA. As you can see, all of those sit on top of the excitonic gap, this BSE calculation. The G0W0 calculation is this and the GW0 calculation is this, and ours, the dark blue one, sits right on top of the GW0 calculation. I'm sorry, okay. For a low-
dimensional system, the GW calculation needs a very high cutoff. There are nice techniques developed by Steve's group to accelerate the calculation, but in any case our computation time is essentially the same as PBE. Moreover, as was pointed out in Manish and Steve's PRL paper about ten years ago, hybrid functionals are not good for this kind of reconstructed system. Take the Si(111) 2x1 reconstruction: the bulk gap is semiconducting, and there is a one-dimensional chain at the surface which is also semiconducting. These are the very old ARPES and inverse-ARPES data, and this line is our calculation. Our calculation agrees very well with the GW calculation, because in our scheme the inter-site Hubbard interaction and the U interaction change depending on position: they reflect the local screening through the variation of the projected density P. And the magnetic moment also comes out well. Okay. For silicon, ours is about 100 times faster than HSE, and you don't need big resources; you can do it on a laptop. What else can you do? Because this is a single-particle functional, you can also do phonon calculations. As Steve introduced very well, one of his nice achievements is GW-DFPT. Recently people have considered correlation effects on phonons: for example, for nickel oxide the LDA optical phonons, where the dots are experiment and the open circles are LDA, are quite different, and Savrasov and Kotliar showed that if you include DMFT the agreement is very good. Steve also showed that for a system like barium bismuth oxide you need GW. As I mentioned, that is very nice, but it needs a lot of resources. Ours is just the Hubbard interaction, on-site and inter-site,
so using the chain rule you can compute the forces directly. Fortunately, the next speaker and his co-workers have developed the Pulay forces for this density variation, so we just use that, and the rest is straightforward to compute. Here are the results. We first tested silicon, diamond and germanium. This is the lattice-constant deviation with respect to experiment, in percent: you can see that LDA tends to be smaller than experiment, PBE overexpands, PBEsol is okay, and here are HSE and ours. The LDA and PBE errors are about one to two percent, while PBEsol, HSE and ours are within about plus or minus half a percent. For the bulk modulus it is more dramatic: everything is quite bad, including PBEsol, while HSE and ours are at about five percent. And this is the optical phonon: the error with respect to experiment at the high-symmetry points for each exchange-correlation functional is okay, and for silicon ours is within about plus or minus 1 to 1.5 percent. Even better are the Grüneisen parameters: the line is ours, the dots are experiment, and for the mode-dependent Grüneisen parameters our calculation is very nice. Now other materials, nickel oxide. As I mentioned, for oxides, if you do DFT+U with a self-consistently determined U, it typically over-localizes, so the optical phonon frequencies are hardened compared with experiment; you can see they are shifted up. The inter-site interaction counterbalances that, and you can see that our calculation agrees very well with experiment, comparable to or even better than DMFT. And this is the barium bismuth oxide system. Because of time
limitations, this is the summary. This compares with experiment for the doped system: the blue is PBE and the red is our calculation. The key critical phonon modes are at M and R. As a function of doping, the system goes from the charge-density-wave state to the superconducting transition at about 40 percent potassium doping; this axis is the potassium doping, and this is a summary of the frequencies of those two critical phonon modes. A negative frequency means the CDW state, and the modes harden as the system becomes superconducting. If you compute with GGA, all these phonons remain negative, meaning the system stays in the CDW phase. If you include +U on all the necessary orbitals, it predicts a CDW-to-superconductor transition at around 30 percent; and if you include the V, our calculation exactly matches the experimental observation. Okay, let me conclude. We developed and implemented the extended Hubbard functional formalism in Quantum ESPRESSO; it can be done within the self-consistent loop, so the computation time is almost, though not exactly, the same as a semilocal approximation to the exchange correlation, and everything you can do with DFT+U or a semilocal approximation you can do with this formalism. Thank you.

Thank you very much, those were really impressive results. Any questions? Yes, thank you very much. I actually have two questions. The first is about your double-counting correction: as you know, as a practitioner, it is a very tricky thing. I think you use the fully localized limit; does it work in every case? Actually, we have not tested that approximation very carefully; we just use it. In the tests there are some errors, as you can see, and I don't know whether the error comes from that
approximation or from the other ingredients; there are so many other ansätze in there, and we have not tested rigorously which one contributes the error. But you are right, this is not exactly the fully localized limit, so we need to consider it more. Okay, and the second question: do you understand what is so particular about magnesium oxide that nothing works? Is it the nature of the bonding? I don't know. We just surveyed all 23 materials very quickly, so we have not focused on that one.

Thank you very much for the very interesting talk. Since the previous questioner took two questions, I will also take two; the second one will be very short, but the first one is maybe a bit more extensive, and it is also addressed to the GW experts in the audience, because I am confused about one point. I understand that conventional methods to calculate U, and also U and V, have two ingredients. One comes essentially from matrix elements, overlaps of different orbitals and representations, and that is important. But there is a second one, which is essentially assessing the screening. In other methods this is very explicit: in the constrained random-phase approximation you can even choose the channels you would like to screen, and in the methods that Matteo Cococcioni put forward, which you cited, the linear response essentially determines the screening, the response of the system. Now, in your technique I saw the matrix elements and all the overlaps, but I did not understand how you include screening. I figured for myself that your scope is a bit different, that you are essentially aiming to parameterize the Hartree-Fock Hamiltonian, which is a different thing from the U of LDA+U. But then, on the other hand, your results should essentially reproduce Hartree-Fock, which does not seem to be happening, so I am a bit puzzled about this. Yes. And then the second
question is very short; it's just a technical one. When you talk about nickel oxide and you refer to the V, do you mean the V between two nickels, or the V between nickel and oxygen, that is, the ligand? The second: it is strictly nickel and oxygen. Yes, here V is between nickel and oxygen. And for the first question: thank you very much, we have thought about it for a very long time. The matrix-element part is straightforward in this formalism. As for the screening: as I showed before, we constructed a kind of functional form for V. It involves the projected charge of the given atoms participating in the inter-site Coulomb interaction, and we weight them in proportion to how much of the charge within the unit cell participates in the bonding. So there is a kind of weighting prefactor relative to the bare Hartree-Fock result, and that is the reason why we do not go back to Hartree-Fock: we always have this weighting factor with respect to the total charge. In some sense it is sensing the screening, namely how much of the total charge within the unit cell participates in the bonding. So we still do not know the exact answer, but somehow it reproduces the GW results remarkably well. Frankly, we still do not know exactly why; it is just a resummation over all the charges participating in that procedure. That is all we know; we just wrote it down, saw that it may be better, and the results look like that. So, frankly speaking, I don't know. Okay, thank you. Is there any other quick question? Riko? Thanks for the nice talk. You compared your results with band-structure calculations; did you also try the optical response? Yeah, that's a very nice question. But you know, for optics you need the electron-hole interaction via the Bethe-Salpeter
equation, as Steve already demonstrated. So with our contribution you can maybe bypass the GW step, but all the other Coulomb matrix elements you still need in order to compute the electron-hole interaction, so we maybe improve the computational speed only marginally. But we did test it. I don't have the slide, sorry, because I converted the talk to PDF, but we tested it on silicon and black phosphorus, and our calculation agrees very well with GW plus Bethe-Salpeter. Okay, thank you very much. There is a quick question by Francesco, the last one. Indeed, the critical thing is this effective screening. HSE also has a mixing parameter, so a screening is somehow hidden in the calculation. A critical case to prove this is a heterostructure between a material with a high dielectric constant and one with a low dielectric constant, and the band alignment between the two: HSE fails totally there, while GW takes it into account. So probably the best test is to take a heterostructure of a high-dielectric and a low-dielectric semiconductor and see whether you get the band alignment of GW. Okay, thank you, very nice comment; we will try it, that's on the list. Okay, thank you very much. Thanks a lot; we thank the speakers of this morning's session again. Now I remind you, of course, there is coffee, but that needs no reminding; there is also the photograph for the workshop happening now, and otherwise please come back here by twenty past the hour. The photograph will happen here, so please stay for it. Please all come down; the photographer will shoot from the front and from the back, above, so he will see both lines. So please form a further line in front, and those on the tails please come to the center, we are too broad. I promise this is the most difficult part of the workshop;
everything else is easier. Thank you. So, coffee is in the entrance hall, and would the speakers of the next session please come down so we can test the presentations. Okay, maybe we can get started with the second part of the morning. This is Nicola Marzari, from EPFL in Lausanne and the Paul Scherrer Institute, and I'll introduce the first speaker of this second part of the morning session: Yuri Timrov from EPFL, on accurate electronic properties and intercalation voltages of lithium-ion-battery cathode materials from extended Hubbard functionals. Yuri, the floor is yours; I'll give you a sign five minutes before the end. Thank you, Nicola, for the introduction, and good morning to everyone. First of all, I would like to thank the organizers for giving me the opportunity to present my work today; it is really my great pleasure and honor to present it here. Okay, so let's start with the self-interaction error, or localization error, in approximate DFT. Let's consider the simplest case: the dissociation of the H2+ molecule. In this figure I show, at the top, the charge-density distribution of the electron (in green) for the H2+ molecule. The interesting thing happens when we pull the two hydrogen ions apart. In approximate DFT, what you find is that the electronic charge density is split equally between the two ions: we have half an electron on the left and half an electron on the right, and this is the only solution in DFT. But if you solve the Schrödinger equation exactly for this problem, you will realize that there are in fact an infinite number of solutions.
One of the solutions is that you have one electron on the left and zero on the right; another is one electron on the right and zero on the left. These solutions are orthogonal to each other and degenerate: this is the so-called left-right degeneracy. And in fact, any linear combination of these two orthogonal solutions is also a good solution. So why does approximate DFT give only the one solution with half an electron on the left and half on the right? This is exactly because of the so-called self-interaction error. And there is a link between this problem and the so-called spurious quadratic change of the energy when we add or remove a fraction of an electron. In this figure, I show the energy error as a function of the fractional charge for the H atom. If we use Hartree-Fock, which is free from self-interaction error, the error is zero, because of the so-called piecewise linearity of the total energy with respect to the number of electrons. If instead we use approximate DFT with LDA, or even a hybrid functional like B3LYP, we see that the energy changes in a quadratic fashion, so there is a deviation from the exact solution due to the localization or self-interaction error. The electron is spread around, and that is why we have half an electron on the left and half on the right. But if we come back to our problem on the left: in approximate DFT we can impose the piecewise linearity, and by doing so we can restore the left-right degeneracy. What we then obtain is that the energy cost of removing a fraction of an electron from the left ion is equal to the energy gained when putting the same fraction of charge on the right, and this allows us to restore the piecewise linearity.
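Schematically, the fractional-charge behavior just described can be illustrated in a few lines; the energies and the curvature value below are made up for illustration, not taken from the talk:

```python
import numpy as np

# Illustrative total energies (eV) at integer occupations of a single site
E0, E1 = 0.0, -13.6   # made-up values, loosely evoking a hydrogen atom

def E_exact(N):
    """Exact ensemble result: piecewise linear between integer electron numbers."""
    return (1.0 - N) * E0 + N * E1          # valid for 0 <= N <= 1

def E_approx(N, curvature=4.0):
    """Approximate functional: adds a spurious quadratic term (curvature ~ U)."""
    return E_exact(N) + 0.5 * curvature * N * (N - 1.0)

N = np.linspace(0.0, 1.0, 101)
err = E_approx(N) - E_exact(N)
# The deviation vanishes at integer N and is largest (and negative, i.e.
# fractional charges are spuriously over-stabilized) at N = 0.5, which is why
# the electron delocalizes half-and-half over the two protons in H2+.
```

The sign of the deviation is the point: approximate functionals over-stabilize fractional occupations, so the delocalized half-half solution wins.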
And this H2+ self-interaction problem is also found in mixed-valence materials. If we go to the next slide and consider this mixed-valence material, the correct solution is that we have two types of iron ions, one in oxidation state 2+ and another in oxidation state 3+. But if we use approximate DFT with LDA or GGA functionals, what we find instead is an average oxidation state of 2.5+, and this is exactly due to the self-interaction errors of approximate DFT. What happens is that the iron 3d electrons are delocalized and spread around, so you have an equal amount of charge on all the iron ions, and that is why we get the 2.5+ oxidation state. And again we can make the link with the energetics: we have a spurious quadratic change of the energy when we add or remove a fraction of an electron from an iron site. In this figure we see the exact solution, which must be piecewise linear as a function of the number of electrons, while approximate DFT gives a quadratic-like behavior, which is wrong. Okay, now let's talk about DFT+U, because it is one of the methods to cure self-interaction errors. DFT+U was introduced in the early 90s by Anisimov and coworkers, and later Liechtenstein, Dudarev, and others introduced other formulations of DFT+U. The idea was simply to take DFT on the one hand and the Hubbard model on the other. Since the Hubbard model was developed for strongly correlated materials, people initially thought that DFT+U was the method to deal with strong correlations. But later, people realized that what the +U correction actually cures is not strong correlation but self-interaction, as in the H2+ molecule. So this is the energy expression in DFT+U, and I want to stress that it is heuristic: there is no theorem or proof of it.
And this Hubbard functional is not variational with respect to Hubbard U; so far there is no theorem or proof. The previous speaker already explained this functional, so I don't want to go into much detail, but what we have is the DFT total energy augmented with the Hubbard correction. Here the n are the so-called occupation matrices, which are defined by taking the Kohn-Sham wavefunctions psi and projecting them onto the atom-centered localized functions phi; the f are the Fermi-Dirac occupations. This is well known. And in 2010, as was mentioned, Campo and Cococcioni introduced an extension of DFT+U called DFT+U+V, where we have not only the on-site Hubbard correction, which favors integer occupations and pushes the electrons to be more localized, but also a second correction, the inter-site Hubbard correction, which favors hybridization between the transition metal and the ligands. These two terms compete with each other, so it is important to balance the two in order to have an accurate description of the system. This picture shows that +U wants to localize the electron on the transition-metal atom, while +V restores the hybridization with, say, the surrounding oxygen ions. There are two strongly interconnected key aspects of DFT+U and DFT+U+V. On the one hand, we have the Hubbard parameters U and V, which are unknown; on the other hand, we have these localized atom-centered functions phi. One might think they are independent objects, but it is completely the opposite: they depend on each other. Depending on which type of Hubbard projector functions you choose, you will need different values of U and V to get the correct energies. So let's talk more about the Hubbard parameters.
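In formulas, the functional just described is commonly written in the simplified rotationally invariant form (after Campo and Cococcioni, 2010; index conventions may differ slightly from the speaker's slide):

```latex
E_{\mathrm{DFT}+U+V} = E_{\mathrm{DFT}}
  + \sum_{I} \frac{U^{I}}{2}\,
    \mathrm{Tr}\!\left[\,\mathbf{n}^{II}\bigl(\mathbf{1}-\mathbf{n}^{II}\bigr)\right]
  - \sum_{I\neq J}^{*} \frac{V^{IJ}}{2}\,
    \mathrm{Tr}\!\left[\,\mathbf{n}^{IJ}\mathbf{n}^{JI}\right],
\qquad
n^{IJ}_{m m'} = \sum_{v,\mathbf{k}} f_{v\mathbf{k}}\,
  \langle \psi_{v\mathbf{k}} \mid \phi^{J}_{m'} \rangle
  \langle \phi^{I}_{m} \mid \psi_{v\mathbf{k}} \rangle ,
```

where the starred sum runs over neighboring atom pairs up to a cutoff. The +U term vanishes at integer occupations and penalizes fractional ones, while the -V term rewards inter-site hybridization, which is exactly the competition described above.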
A lot was already said in the talk before the coffee break, but I would like to summarize. We don't know the Hubbard parameters, so we need to determine them somehow, and there are two strategies. One is to use semi-empirical values, tuning Hubbard U to reproduce some property of interest, say band gaps, magnetic moments, you name it; but for this we really need experimental data as a reference. On the other hand, we have first-principles methods like constrained DFT, constrained RPA, Hartree-Fock-based approaches like ACBN0, and linear response. In the previous talk the ACBN0 functional was nicely explained; the topic of this talk is linear-response theory. Hubbard parameters are not universal. What does this mean? When we compute Hubbard parameters using ab initio methods, say linear response, we find that they are not unique: they depend on many things. One is which Hubbard projectors we use; they can be atomic or orthogonalized atomic orbitals, or maximally localized Wannier functions. These are the localized functions highlighted in yellow. Then the value of Hubbard U depends on the oxidation state of the studied atom, in other words on the pseudopotential, simply speaking. The Hubbard U also depends on the exchange-correlation functional, be it LDA or GGA, and on the way you compute it, the self-consistency: there is a one-shot way of computing Hubbard parameters and a self-consistent one. I will say more about this later in my talk. So, how do we compute Hubbard parameters within linear-response theory? Let's start with the definition of the Hubbard energy; this is the Hubbard correction. For simplicity, I consider only one transition-metal site, so I don't have the summation over atoms. Here N is computed by taking the trace of the occupation matrix n, so it is just the number of electrons in the Hubbard manifold.
Then we remind ourselves of the problem: the exact total energy must be piecewise linear as a function of the number of electrons in the Hubbard manifold, while approximate DFT gives a quadratic behavior, which is what we don't like. Since we don't like this curvature, we can simply impose the condition that the second derivative of the total energy with respect to the number of electrons in the Hubbard manifold is zero. This lets us get rid of the curvature and restore the piecewise linearity. So we take the definition of the energy and compute the second derivative, a very simple task, and we readily obtain that U is the second derivative of the DFT total energy, as simple as that. There is a more rigorous mathematical derivation in the paper by Matteo Cococcioni and Stefano de Gironcoli, but I would like to stress that there is no theorem for piecewise linearity of the total energy with respect to the number of electrons in the Hubbard manifold, only with respect to the total number of electrons. So it is heuristic. How do we use linear-response theory in practice? We take our Kohn-Sham equations and modify them a little bit by adding a small perturbation, where lambda is the strength of the perturbation and the perturbing potential V is simply the projector onto the localized manifold. Then we need to construct supercells and perturb the atoms one at a time. Since we use periodic boundary conditions, the perturbed atom interacts with its periodic replicas, so the supercells must be large enough to get rid of this unphysical interaction between the perturbed atom and its replicas. We solve this modified equation for several values of lambda; this way we can measure the response of the system to the perturbation, which is the change of the occupations, delta N, divided by delta lambda, the change in the strength. This gives us the object called the response matrix, chi.
We also compute chi_0, the object analogous to chi, where chi is the self-consistent (screened) response while chi_0 is the non-interacting, or bare, response. Then we take the inverses of both matrices and form their difference. On the diagonal of the resulting matrix we obtain the on-site Hubbard U parameters, and on the off-diagonal elements the inter-site Hubbard V parameters. This method works very well and has been applied to many problems. However, there are a number of drawbacks: the computational cost, of course, then possible convergence issues, and it is not always easy to use in practice, so quite low user-friendliness. So we asked ourselves: can we have a better, more efficient reformulation of the same theory using more effective methods? And we did this several years ago with Matteo Cococcioni and Nicola Marzari. I will show that we can go from supercells to primitive unit cells. What do we do? We start with our modified Kohn-Sham equations. Since the perturbation is small, we can expand all quantities in Taylor series and truncate them at first order, keeping only the first-order quantities. By doing so, we can show that this modified equation becomes a linearized Kohn-Sham equation, also known as the Sternheimer equation. We can solve this equation, and moreover we use the fact that the perturbing potential in a supercell can be decomposed as a sum over the q points of the primitive cell. By doing so, instead of using finite differences in a supercell, we have an analytical formula in the primitive cell, where we sum over q points, which are the different monochromatic modulations of the perturbation; we have a phase factor, and the delta^q n are the responses of the occupations to the different monochromatic q-modulated perturbations.
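To make the post-processing step concrete, here is a small sketch with made-up response matrices for two Hubbard sites; in a real calculation chi and chi_0 come from the finite-difference or DFPT runs described above:

```python
import numpy as np

# Hypothetical response matrices (units 1/eV) for two Hubbard sites:
# chi is the self-consistent (screened) response dN/dlambda,
# chi0 the bare (non-self-consistent) response.
chi = np.array([[-0.50, 0.10],
                [0.10, -0.50]])
chi0 = np.array([[-0.80, 0.05],
                 [0.05, -0.80]])

# Post-processing: U matrix = chi0^{-1} - chi^{-1}.
# Diagonal entries give the on-site U, off-diagonal entries the inter-site V.
U_matrix = np.linalg.inv(chi0) - np.linalg.inv(chi)
U_onsite = U_matrix[0, 0]      # ~0.83 eV for these toy inputs
V_intersite = U_matrix[0, 1]
```

The responses are negative (adding a positive potential shift depletes the manifold), and the difference of the inverses comes out positive, as a Hubbard U should.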
And then we do the same post-processing step as before to obtain the Hubbard U and V parameters. This is nothing but density-functional perturbation theory, which has been known for three decades; it was used for phonons by Stefano Baroni, Stefano de Gironcoli, Andrea Dal Corso, and Paolo Giannozzi. It is computationally much less expensive than using supercells (as in frozen phonons), easier to converge, user-friendly, and automated. We implemented DFPT for computing Hubbard parameters in Quantum ESPRESSO; the corresponding code, HP, is public, open source, and distributed with Quantum ESPRESSO. On the left I show the scaling: in black the old supercell approach with finite differences, in red DFPT with primitive cells. You see an enormous speed-up. This is thanks in particular to symmetry, because by using symmetry we can reduce the number of q points, while in supercells we cannot use the same trick: we cannot reduce the number of atoms. So we gain a lot from this. On the right, in the table, I show the numerical comparison of the old linear response with supercells against DFPT, and we see really remarkable agreement; the residual errors are just numerical noise, and we could reduce them further. I also mentioned Hubbard projectors: there are really many choices, and different electronic-structure codes use different types of Hubbard projectors. In Quantum ESPRESSO we have in particular the nonorthogonalized atomic orbitals phi that are provided with the pseudopotentials. However, the drawback of these projectors is that the Hubbard correction is applied twice in the overlap regions, which is not wanted. We can instead use orthogonalized atomic orbitals: we compute the overlap matrix and construct the orthogonalized orbitals using the Löwdin orthogonalization method. This way we get rid of the double counting, the double application of the Hubbard correction, in the overlap regions.
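The Löwdin construction just mentioned can be sketched in a few lines; the overlap matrix below is hypothetical:

```python
import numpy as np

# Hypothetical overlap matrix S between two nonorthogonal atomic orbitals
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])

# Löwdin orthogonalization: build S^{-1/2} from the eigendecomposition of S
w, V = np.linalg.eigh(S)
S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# The transformed projectors phi_ortho = S^{-1/2} phi have identity overlap,
# so the Hubbard correction is no longer applied twice in overlap regions.
overlap_after = S_inv_sqrt @ S @ S_inv_sqrt   # equals the identity matrix
```

Löwdin's S^{-1/2} is the orthogonalization that stays closest (in least-squares sense) to the original atomic orbitals, which is why it preserves their atomic character.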
Another very good option is to use maximally localized Wannier functions; we are actually working on this, but for us the standard and most accurate choice is the orthogonalized atomic orbitals. And finally, we use the so-called self-consistent procedure when computing Hubbard parameters within DFT+U+V. How does it work? We start with an initial structure, say the experimental geometry, or one taken from some database. Then we perform a structural optimization using DFT+U+V with some initial guess for U and V; this could be zero if you have no clue, or some good initial guess. Then we do the ground-state SCF calculation, and on top of it we compute the Hubbard parameters using DFPT. Then we check whether the Hubbard parameters and the geometry are converged. If not, we go back and repeat this cycle until we reach full self-consistency, that is to say, the Hubbard parameters don't change, the geometry doesn't change, the electronic structure doesn't change: everything is consistent. What we find in practice is shown here: the Hubbard U as a function of the number of iterations, or cycles. If we do just one iteration, the one-shot calculation, we get something slightly higher than 5 eV. But if we run the self-consistent cycle, we find that the Hubbard U changes, in this case to 4.3 eV. So there is a non-negligible change, which really highlights that it is important to use this self-consistency. But I would like to mention that here we need to do structural optimizations, which require Hubbard forces and stresses. So let me mention this quickly; it was already touched upon in the previous talk, but what we do is use the orthogonalized atomic orbitals, and our goal is to compute the Hubbard forces and stresses.
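The cycle just described can be sketched as a fixed-point iteration; the function names and the toy numbers below are placeholders, since the real workflow calls Quantum ESPRESSO and the HP code:

```python
def self_consistent_hubbard(relax, scf, hp, structure, U=0.0, V=0.0,
                            tol=0.01, max_iter=50):
    """Sketch of the cycle above: relax -> SCF -> DFPT (HP) -> convergence check.

    relax, scf, hp are stand-ins for the actual DFT+U+V and DFPT steps.
    """
    for _ in range(max_iter):
        structure = relax(structure, U, V)     # DFT+U+V structural optimization
        ground_state = scf(structure, U, V)    # ground-state SCF
        U_new, V_new = hp(ground_state)        # Hubbard parameters from DFPT
        if abs(U_new - U) < tol and abs(V_new - V) < tol:
            return U_new, V_new                # self-consistency reached
        U, V = U_new, V_new
    raise RuntimeError("Hubbard parameters did not converge")

# Toy stand-ins with purely illustrative numbers: the iteration contracts
# toward a self-consistent U of 4.3 eV starting from a guess of 5.2 eV.
relax_ = lambda s, U, V: s
scf_ = lambda s, U, V: U
hp_ = lambda U_prev: (4.3 + 0.2 * (U_prev - 4.3), 0.7)

U_sc, V_sc = self_consistent_hubbard(relax_, scf_, hp_, structure=None, U=5.2)
```

Because each cycle's output U feeds the next cycle's DFT+U+V ground state, the iteration is a fixed-point problem, and in practice it converges in a handful of cycles.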
I don't have time to go into detail, but in a nutshell, we simply take the derivative of the Hubbard corrective energy with respect to the atomic displacements, and this all boils down to the derivative of the occupations with respect to the atomic displacements, which in turn depends on the derivative of the square root of the overlap matrix. That is not a trivial object to compute: it is not a function but a matrix raised to the power minus one half. It is known that one has to solve the Sylvester equation (also called the Lyapunov equation), which is a non-trivial task, but we managed to find an exact solution that can be written in the compact form shown here. We implemented it in Quantum ESPRESSO and benchmarked it against finite differences, and you see in the last column that the agreement between our analytical formula and finite differences is really good, which confirms the accuracy and correctness of the derivation and implementation. So now let's switch to applications. As you saw in the title, my talk is all about lithium-ion batteries, so let's try to apply DFT+U to lithium-ion batteries. We know that our energy demands are growing with the growing population, and we can no longer keep using conventional sources because of pollution and global warming; we urgently need to do something about this. One solution is to use green, renewable sources like hydro, solar, and wind, but for this we need large-scale energy-storage systems such as batteries. The current batteries are great, but there are safety issues, as you can see in the examples here, so we need to search for newer, more efficient, safer lithium-ion batteries. So let's try to use DFT+U+V. On the left I show a lithium-ion battery: we have the negative electrode, called the anode, which can be, say, graphite, and on the right the positive electrode, called the cathode, for which there are many options.
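As an aside on the Sylvester equation mentioned a moment ago: differentiating S = S^(1/2) S^(1/2) gives S^(1/2) X + X S^(1/2) = dS with X = dS^(1/2), which can be solved in closed form in the eigenbasis of S. This is a plausible reconstruction under those assumptions, not the authors' actual published formula:

```python
import numpy as np

# Hypothetical overlap matrix and its derivative w.r.t. an atomic displacement
S = np.array([[1.0, 0.3], [0.3, 1.0]])
dS = np.array([[0.0, 0.01], [0.01, 0.0]])

# Eigenbasis solution of the Sylvester equation S^{1/2} X + X S^{1/2} = dS:
# in the eigenbasis of S, X_ij = dS_ij / (sqrt(w_i) + sqrt(w_j)).
w, V = np.linalg.eigh(S)
s = np.sqrt(w)
dS_eig = V.T @ dS @ V
dS_half = V @ (dS_eig / (s[:, None] + s[None, :])) @ V.T   # derivative of S^{1/2}

# The object needed for the forces, d(S^{-1/2}), then follows from
# d(S^{-1/2}) = -S^{-1/2} dS^{1/2} S^{-1/2}.
S_inv_half = V @ np.diag(1.0 / s) @ V.T
dS_inv_half = -S_inv_half @ dS_half @ S_inv_half

# Sanity check: the Sylvester equation is satisfied
S_half = V @ np.diag(s) @ V.T
residual = S_half @ dS_half + dS_half @ S_half - dS
```

The eigenbasis division by (s_i + s_j) is exactly why the equation has a unique solution here: all eigenvalues of a positive-definite overlap matrix are positive.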
And one can do high-throughput screening to look for good cathodes. In between the anode and the cathode you have the liquid electrolyte, and during the discharge process Li+ ions go from left to right, while the electrons go in the opposite direction. So what happens at the cathode? In the fully charged battery we have this compound (it's just an example), where all the iron ions are in the same oxidation state, 3+. When we fully discharge the system, we have this compound, with iron in the 2+ oxidation state. And the interesting things happen in between, where we have a mixed-valence compound with iron in both the 2+ and 3+ oxidation states. I would like to stress that it is really important to have a very accurate method that gives reliable energetics and electronic structure for these complex materials, and Hubbard functionals are one class of methods that can be used for these problems. We start with the self-consistent calculation of the Hubbard parameters using linear-response theory. This is our simulation cell for this cathode material; we have four transition-metal ions in the system, which can be all manganese, all iron, or a mixture of both. So we compute the Hubbard parameters. Here I report the values for the four manganese ions at concentration x = 0, that is, no lithium: the values of Hubbard U are all the same, and V lies within some range of values. Then, if we increase the concentration of lithium in the system, we notice that the first three manganese ions don't change much, but the Hubbard U value of the last manganese changes abruptly. We also computed the Hubbard parameters for the other concentrations, and we can check how the Hubbard parameters change for all the transition-metal atoms.
We did the same type of calculations for the iron case, but the bottom line is that the Hubbard parameters are site-dependent and they change upon lithiation and delithiation. What people commonly do is use an empirical U, one global value across all lithium concentrations for all transition-metal ions; but what we actually see is that Hubbard U is not global: it depends on the specific transition-metal ion, and it does change upon lithiation and delithiation. We computed many properties; let me show you the most interesting ones. Consider the atomic Löwdin populations. Here I show three panels: the top panel corresponds to DFT with PBEsol, the middle to the hybrid functional HSE06, and the bottom to DFT+U+V. On the y axis I show the 3d-shell occupations of the transition-metal ions, manganese in this case, and on the x axis the lithium concentration; for each concentration there are four bars, one per manganese ion. At concentration zero all manganese ions are in the same oxidation state, 3+, and at concentration one they are all in oxidation state 2+. As we add lithium atoms one by one, we see that only one transition-metal ion changes its occupation at a time: a very clean and clear digital change in the occupations. If you use the hybrid functional, you also observe the same trend. But if you use plain DFT, where we know there are self-interaction errors, the occupations of all the atoms change in the same manner, because the electronic charge is delocalized and spread equally over all the atoms; that is why you don't recover these digital changes in the occupations. We did the same analysis for iron. Again, DFT+U+V works very well; the hybrid functional tries to do a good job, but it is actually not as accurate as DFT+U+V, because you don't really see the very clean and clear digital changes in the occupations.
And again plain DFT fails, as in the manganese olivine case, because of self-interaction. We also studied another mixed-valence material, containing both manganese and iron. We see the same trend: DFT+U+V works well, the hybrid is also good but not as accurate as DFT+U+V, while plain DFT fails. I would also like to mention that for concentrations from zero to one half it is the manganese that is redox-active, and from one half to one it is the iron. So the bottom line is that DFT+U+V is very accurate in reproducing the digital changes in occupations, which are really critical for recovering the experimentally observed redox plateaus in the voltages. We also computed the lithium intercalation voltages; all we need is this thermodynamic equation: the total energy at lithium concentration x2, minus the total energy at concentration x1, minus the energy of bulk lithium. It is all total energies, so all we need are total energies. And this is the result. The last bar is the experimental voltage: plain DFT largely underestimates the voltage; hybrid functionals and DFT+U add corrections but overshoot and over-correct; only DFT+U+V is in very good agreement with the experimental voltage. We did the same analysis for other cathode materials, and we always find that DFT+U+V is the most accurate for computing voltages. We also checked the spin-polarized projected density of states at concentration x = 0: DFT+U+V and HSE06 are very similar, while plain DFT fails. When we start increasing the lithium concentration, we see again that DFT+U+V follows trends very similar to the hybrid functional while DFT fails, and for the other concentrations, over and over, we see the same: DFT+U+V mimics the hybrid functional very nicely. We also studied other families of cathode materials; these are the so-called spinel cathode materials.
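To make the voltage equation mentioned above concrete: for (de)intercalation between concentrations x1 and x2 it is commonly written V = -[E(x2) - E(x1) - (x2 - x1) E_Li] / [(x2 - x1) e]. A minimal sketch with made-up energies (not the actual numbers from the talk; real inputs would be DFT+U+V total energies per formula unit):

```python
# Average intercalation voltage between lithium concentrations x1 and x2.
# With all energies per formula unit in eV and the charge in units of e,
# the result comes out directly in volts.
def voltage(E_x2, E_x1, x2, x1, E_Li):
    return -(E_x2 - E_x1 - (x2 - x1) * E_Li) / (x2 - x1)

# Hypothetical numbers: lithiated (x2 = 1) and delithiated (x1 = 0) cathode,
# and the total energy per atom of bulk lithium metal.
V_avg = voltage(E_x2=-180.0, E_x1=-173.5, x2=1.0, x1=0.0, E_Li=-3.0)  # 3.5 V
```

Since only total-energy differences enter, any systematic error a functional makes in the relative stability of the 2+ and 3+ compounds shows up directly as a voltage error, which is why the digital oxidation-state changes matter so much.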
On the left we have an antiferromagnetic insulator; on the right, a ferrimagnetic insulator. Both have 56 atoms in the supercell to accommodate these complicated magnetic spin configurations. And the bottom line is that hybrid functionals are computationally expensive for these kinds of systems and less accurate, while DFT+U+V has a moderate cost. We computed different properties; I would like to show you just the voltages. The trend is the same as for the phospho-olivine cathodes: DFT+U+V is in very good agreement with the experimental voltages. I'm coming to the conclusions, but before I finish my talk I would like to briefly mention the computational cost. Once we know U and V, DFT+U+V is only marginally more expensive than plain DFT. Computing these parameters using DFPT is about 20 times more expensive than a DFT calculation, so it is not negligible. But U and V from DFPT allow us to generate databases, and we can then use machine learning to predict Hubbard parameters inexpensively. In collaboration with Martin Uhrin and other coworkers, we developed a machine-learning model: on the y axis are the predicted values, on the horizontal axis the ones computed with DFPT, and we see really good agreement. And finally, if you compare the cost of computing Hubbard parameters with DFPT against hybrid functionals such as HSE06, you will see in the last column that computing U with DFPT is still cheaper than using hybrid functionals. One more very important slide: there are some limitations and open questions in all this. First, linear-response theory for closed-shell systems gives very large values of the Hubbard U parameters; we still need to find a solution to this problem. Second, U and V are position-dependent, so in principle we need the derivatives of the Hubbard parameters.
I forgot to mention it, but in our case we neglect these derivatives; in principle one has to include them, as was mentioned in the previous talk. Third, Hubbard functionals are not variational with respect to the Hubbard parameters: there is no proof or theorem. Fourth, DFT+U+V is a mean-field, single-determinant approach, so there are no genuine correlations; for those one has to use multi-determinant methods instead. And finally, the piecewise linearity in the Hubbard manifold is heuristic; there is no proof or theorem. With this, I would like to conclude my talk. I have shown you that DFT+U+V is really robust and accurate and can be broadly applied to mixed-valence functional materials; we compute U and V using linear-response theory; and on the example of lithium-ion batteries I have shown you that the computed electronic and electrochemical properties are in very good agreement with experiments. With this, I would like to thank my collaborators, Nicola Marzari, Matteo Cococcioni, who is also in the audience, Michele Kotiuga, Francesca Coulante, and our sponsors, MARVEL and the SNSF. Thank you for your attention. And finally, I would like to briefly mention the advanced Quantum ESPRESSO school that will take place at the end of August this year in Pavia. If you're interested, please stay tuned on the Psi-k mailing list for more information. Thank you. Great, thank you, Yuri, and we are open for questions. If I understand correctly, as was replied in the earlier talk, the V is actually between the transition metal and the oxygen? Correct. So suppose I'm doing a calculation at the model level; for example, if you take a cuprate, just a one-band Hamiltonian. Then, of course, the oxygen degrees of freedom are integrated out, and the V term should be included in the effective U you are thinking about. So did you make any attempt at calculating that? And will that, I think that was the question asked by Silke.
So will there be some effective V, which in terms of a Hubbard model is the V between the transition metals? Okay, V between transition metals: from my experience we have those Vs also, but they're actually vanishing, very small. So we do compute them, but they're vanishing. No, no, but the point is, if you really down-fold or integrate out the oxygen degrees of freedom, they will show up in the one-band limit. We haven't done this, but it could be interesting to check. Let me take the microphone on the way. So we actually computed those from cRPA, and we also confirmed that U_pd, the interaction between the ligand p and the d states, is actually large. And then the question you're asking is whether we can actually down-fold or not, right? Other questions? When you use machine learning methods, can you choose the local environment of your atoms? For example, in the lithium-ion battery you were showing, you see that the parameters depend on how much lithium is around. Can you do this also at the machine learning level? Do you have enough data to do this, or do you need to adapt your algorithms in that case? Yes, the answer is yes. In our model we use occupation matrices as descriptors, that is, the projections of the Kohn-Sham wavefunctions onto localized orbitals; these are the main objects. And so we do take into account these changes in the local chemistry. What we need to provide to our model are these occupation matrices. You will see that when you change the concentration of lithium, the occupations on the transition metals change; we give this to the model, the model recognizes it, and it outputs the respective changes in the Hubbard parameters. So yes, our model is sensitive to this local chemistry. And as for how we build the database: we use DFPT, we apply it to a number of systems, we compute the Hubbard parameters, and we also have the respective occupations.
So we provide all this to the machine learning model. The model sees: okay, for these occupations I should have these values of the Hubbard parameters. So you train it, and then let's say you take a new system about which we don't know anything. What you need is just to compute the occupations and put them into the model, saying: okay, now I have these occupations, what would be the Hubbard U? And the model gives you a certain value of the Hubbard U. Okay, thank you very much for the nice lecture. I see its impact on lithium intercalation, voltages, and other electronic-structure properties. But what do you think about other electrochemical properties like energy barriers or ionic diffusion? Because we want to study charge transport in these cathode materials. Yeah, it's a very good question. I think we just need to try. We haven't tried the examples that you mentioned, but I think this method is generic, it's general; we can apply it to any problem. And it's open source, so if you're interested you can just give it a try and see whether it works or not. Yeah. Okay, thanks, Iurii, for the talk. Thank you. And so the next speaker will be Nongnuch Artrith from Utrecht University, and it will be about accelerating the construction of machine learning interatomic potentials using surrogate models. And we tried already, but... Okay, help is at hand. Okay, first of all, thanks a lot, Nicola, for the kind introduction, and also many thanks to the organizers for inviting me here. Actually, my talk is slightly different from the rest of the talks at this conference, but I'm really happy to be here. So in my work we are mainly developing machine learning models to try to understand complex energy materials, and today I will focus on how we accelerate the construction of machine learning potentials using surrogate models.
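The occupations-in, Hubbard-U-out workflow described in this exchange can be sketched with a toy model. Everything below is synthetic and illustrative: the "occupations" are random numbers, the "DFPT" U values come from an assumed linear relation plus noise, and a plain ridge regression stands in for the actual machine learning model. Only the workflow itself (train on descriptor/U pairs, then predict U for new structures without a new DFPT run) mirrors what is described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical descriptors: a few numbers summarizing the on-site occupation
# matrix of each transition-metal site (synthetic d-like occupations here).
n_samples, n_feat = 200, 4
X = rng.uniform(0.0, 1.0, size=(n_samples, n_feat))

# Synthetic "DFPT" Hubbard U values (eV): an assumed linear relation plus
# noise, a toy stand-in for the real occupation/U dependence.
U_dfpt = 4.0 + 2.0 * X.sum(axis=1) / n_feat + 0.05 * rng.standard_normal(n_samples)

# Ridge regression with a bias column: w = (X^T X + lam I)^-1 X^T y.
Xb = np.hstack([X, np.ones((n_samples, 1))])
lam = 1e-3
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(n_feat + 1), Xb.T @ U_dfpt)

# Predict U for structures from their occupations alone (no new DFPT run).
U_pred = Xb @ w
rmse = float(np.sqrt(np.mean((U_pred - U_dfpt) ** 2)))
print(f"training RMSE: {rmse:.3f} eV")
```

The key design point is that at prediction time only the cheap occupation matrices are needed, which is what makes the approach inexpensive compared with running DFPT on every new structure.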
We develop our own machine learning tools and databases, and we share them with the scientific community. The software package that we develop is the Atomic Energy Network (ænet); it is an open and free software package. It is really nice that the previous talk already gave an introduction to batteries, but I just want to repeat that atomistic simulation of batteries is really complex and complicated, and understanding many properties of the materials at the atomic scale is very important. For example, the properties of interfaces, and the nanoscale transport across those interfaces, are very important. On the left-hand side you can see the interfaces in a solid-state battery: the ordered lithium cobalt oxide, like you can see here, the interface with the electrolyte, the disordered lithium cobalt oxide, the electrolyte itself, and also the anode. And on the other side, this is the anode, lithium-silicon; this is a kind of amorphous lithium-silicon structure. What we are trying to understand is this complexity, because it is so important at the atomic scale: the important domains at the nanoscale, like you can see here, and also the interfaces and amorphous phases. So what we would like to do is make use of accurate and reliable first-principles methods, DFT, to generate atomic structures at realistic, larger scales like this. Together with the Columbia Center for Computational Electrochemistry, we also started to combine atomistic simulation and machine learning to develop databases, and to use machine learning to try to understand such complex interfaces, for example in this cartoon that I drew. So we try to understand and characterize the interface, its resistance and also its degradation, for example here.
And also, as already mentioned, many other parts, like the cathode material, are very complicated. What we do in our work is develop machine learning models that learn the atomic interactions from accurate and reliable electronic-structure theories, for example DFT. So we build a very reliable and accurate quantum-mechanical, i.e. DFT, database, and then we interpolate: we use machine learning models to learn from the DFT database and construct efficient and accurate potentials. That way we get efficient and accurate energies that we can use in applications, for example Monte Carlo simulations, and accurate forces that we can use at realistic, larger scales in molecular dynamics simulations. The model that I'm going to discuss today uses artificial neural networks for the interpolation. It is really easy to use, it is based only on atomic positions, and once trained and fitted it can reproduce results as accurate as the reference, for example DFT. This is the whole workflow of our work.
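The select-train-validate-refine cycle that this workflow is built around can be illustrated with a one-dimensional toy problem. Everything here is a stand-in: a cheap analytic function plays the role of DFT, a polynomial fit plays the role of the neural-network potential, and random sampling plays the role of an MD trajectory. Only the active-learning logic itself (find poorly described structures, add them, refit) is meant to be faithful to the workflow described.

```python
import numpy as np

rng = np.random.default_rng(0)

def dft_energy(x):
    """Stand-in for an expensive DFT single-point calculation."""
    return np.sin(1.5 * x) + 0.5 * x**2

def fit_surrogate(xs, ys):
    """Toy surrogate: a polynomial fit standing in for a neural-network potential."""
    coeffs = np.polyfit(xs, ys, deg=min(5, len(xs) - 1))
    return lambda x: np.polyval(coeffs, x)

# 1) Initial training set from a coarse scan of configuration space.
train_x = list(np.linspace(-2.0, 2.0, 6))
train_y = [dft_energy(x) for x in train_x]
threshold = 0.02  # error criterion; in practice chosen per application

for _ in range(15):
    model = fit_surrogate(np.array(train_x), np.array(train_y))
    # 2) Configurations visited when the surrogate drives a simulation
    #    (random samples standing in for an MD trajectory).
    traj = rng.uniform(-2.0, 2.0, size=100)
    # 3) Recompute frames with the reference method and find the worst one.
    errors = np.abs(model(traj) - dft_energy(traj))
    worst = int(np.argmax(errors))
    if errors[worst] < threshold:
        break  # surrogate is reliable everywhere it is being used
    # 4) Iterative refinement: add the missing structure and refit.
    train_x.append(float(traj[worst]))
    train_y.append(dft_energy(traj[worst]))

model = fit_surrogate(np.array(train_x), np.array(train_y))
final_err = float(np.max(np.abs(model(traj) - dft_energy(traj))))
print(f"training-set size: {len(train_x)}, max error on trajectory: {final_err:.4f}")
```

The stopping criterion (`threshold`) mirrors the remark in the talk that the acceptable error depends on the application and can be set as you wish.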
In practice, as you can see here, we first need a really reliable and accurate database, and we select data that we hope can capture the application we are interested in, so that we cover the full potential energy surface. Once we have the data, we do model selection, then model training, and then model validation, as you can see here. Once we have validated whether the model is working or not, we start to test it on the complex system we are interested in; in this case I show an interface between a solid and a liquid. If our machine learning potential is accurate enough, it should work directly in our intended application. But if it doesn't, we can use a further workflow of active learning, which helps to find and capture the structures that were not included, or not seen, in the training set, and then we do this
iterative refinement. Maybe this is quite abstract, so let's look at one concrete example, using a water model, of how active learning constructs such a training set, which is very important. First we have an initial data set, then we construct and fit the neural network potential, and then we use it in the application, for example an MD simulation. On the x axis you see structures along the MD trajectory; the DFT reference energies are shown as red crosses, together with the neural network potential energies along the trajectory. We then look at the errors between the reference data and the neural network, and you can see there are still quite high errors. The criterion depends on your application; you can set it as you wish. If some structures are not close to the DFT reference data, we include them in the training set, and we repeat this refinement. We implemented our models in the Atomic Energy Network; if you are interested, this is the website. We also provide plugins and a bundled library that can be used directly in many software packages, for example ASE, LAMMPS, Tinker, and DL_POLY; these are the references if you are interested. So let's take a look at how we apply this and construct a machine learning model, for example to understand lithium transport in amorphous silicon for high-energy-density batteries. As I already showed, batteries are really complex: the positive electrode, the electrolyte, and the negative electrode are not static, they are dynamic, and the number of atoms is changing all the time while the battery is charged and discharged, like you can see here. So we are trying to understand and optimize the composition, to find which composition gives the best-performing material. In the first example we try to draw the full phase diagram of, for example, lithium-silicon. This is the whole workflow; sorry, it is kind of complicated, but overall: if we don't have data, we cannot have machine learning models, so the most important step is generating the DFT database. That is our starting point: we take structures from crystal-structure databases, run some short AIMD simulations, and also distort the structures by scaling them, and then we construct a first version of the neural network potential. At the same time one can use a genetic algorithm to help find new phases, making use of the accurate neural network potential that we have, and we run AIMD simulations at elevated temperatures, between 700 and 1500 Kelvin depending on the material. Then we can estimate the conductivities using Arrhenius plots, draw the phase diagram, and do structural analysis. For example, let's take a look at the first example: this is
a lithium-silicon system for lithium-ion batteries. We constructed a large DFT database for lithium-silicon, which you can see here on the right. On the left-hand side is the computed phase diagram; this is the lithium content in the silicon, as you can see. Our reference data and the neural network predictions are the same, so in this plot I only show the DFT-computed phase diagram. Our neural network potential and DFT both predict the crystal structures of lithium-silicon as well as the higher-energy amorphous structures that complete the phase diagram; the formation energies are shown here. At the same time we also compare the voltage profile, the blue curve here, at room temperature, and our predicted voltage profile is also in agreement with the experimental measurement; we show it for a few compositions based on the convex hull that you can see here. To understand the lithium transport, we also have to make sure that our neural network potential is really accurate for diffusion, so we check that for different compositions the neural network can predict the diffusion paths accurately. I show only two examples here, Li3.75Si and Li2.5Si, and two paths; the x axis is the NEB images along the path that moves a lithium atom from site A to site B. You can see that for these two structures, which were deliberately not included in the training set, the neural network potential reproduces the DFT results accurately. We then use the neural network potential to run long MD simulations at different compositions, as you can see, and from the beginning up to long simulation times, like two nanoseconds, the neural network potential reproduces the DFT reference data accurately. So this is really nice: we have efficient and accurate potentials, and we can follow the compositions while running the MD simulations at different compositions, and
to make it more realistic we also look at nanoparticles, which you can see at the top of the figures, and we estimate the conductivities, or the activation energies, using Arrhenius plots at different compositions. You can see that we can identify the atomic structures at different compositions and how the atoms rearrange, and the diffusivities that we predict from the neural network potentials come together with the corresponding atomic structures. Overall, our neural network potential predicts diffusivities close to the experimental measurements, as you can see from the reference on the left-hand side here, and we can identify that the highest diffusivities lie between roughly Li1.0Si and Li2.25Si, for example. Once we learn this, we can optimize, or try to find, which composition is best for our material. And not only that: we are also looking at more complex interfaces. For example, our student and postdoc, Chen Wang and coworkers, are looking at a fast ion conductor, lithium phosphorus sulfide, and also the anode, and another project looks at the interface between a lithium anode and the electrolyte ethylene carbonate. Let's take a look at the first phase diagram. We also draw the phase diagram of lithium phosphorus sulfide, or LPS. We use the same technique, and with our machine learning models we can map the compositions between the end members, P2S5 and Li2S, and draw the diagram; the blue points are the crystal structures. But in reality, when the battery is charging and discharging, the composition changes slightly, and it is not very easy to understand the properties, so we try to find the relationship between structure, stability, and conductivity. These are the structures we can extract; in this case I show the crystal structures in the graph that you can see here, but we also want to understand the amorphous structures, or
glassy ceramics, as you can see here. Once we know these types of structures, we can estimate the activation energies; you can see that the compositions we found are slightly closer to the parent crystal structures, on the left the LiPS3 and on the right the Li4P2S7. And because there are so many structures in our database, we also make use of another machine learning model, clustering, to learn which lithium environments are similar and close to the parent crystal structures, and from that we find the relationship between the structures of the amorphous phases here. I won't talk about the local environments here, but we also take the structures to calculate spectroscopies, for example, from which we can understand the oxidation states of sulfur and phosphorus; today I will only show the conductivities and compare with experimental measurements. To assess the new structures that we get for the glassy ceramics, we compare, as a reference, with previous work done on the crystal structures, as you can see, against experimental measurement; with our computed phase diagram we could identify the amorphous structures, and we found that they have a higher conductivity than the perfect crystal structures. So, if I have enough time: so far I have only discussed databases containing energies only, and we generate really large databases, but for a beginner it is sometimes hard to start developing your own neural network potentials, and it would be easier to have some efficient tools. So in 2020, together with April Cooper, we proposed a way to construct neural network potentials that includes atomic forces via Taylor expansion. Just briefly, because I already discussed this in this workshop in 2021: as you know, the more data points you include in the training set, the more accurate your target
potential energy surface will be. You can see this for the simple Lennard-Jones energies, in the gray region here, of the hydrogen dimer; the orange symbols are the DFT data, and the more data points you have, the closer you get to the target Lennard-Jones energy. But we don't have a large data set right away when we start to construct a neural network potential, so what can we do? This is another visualization, for water, showing that if you have more data points you get really accurate energies and forces, as you can see in this demonstration. But most of the time, including forces in the training set is quite expensive and takes a lot of CPU time. That's why we introduced another method: including approximate force information, which is a kind of linear approximation, via a Taylor expansion. You don't need a new DFT data set; you introduce small displacements, estimate the energies at the displaced positions, and include those in the training set. As you can see here, with for example the blue curve, which is seven data points, if you train your neural network potential it sometimes misses the atomic energies and predicts slightly inaccurately. But when you include more data points, the crosses at minus x or plus x, in this case for the hydrogen dimer, you get more data points, you construct the potential using the additional points from the Taylor expansion, and then you get the right energies and the right structures. Again, if you can use the full force information, the direct forces generated from your first-principles calculation, that is the best way to do it; it is just more expensive. We also demonstrated this for water, six molecules: you can see that if you use all the information, training directly on the forces, you get really very accurate forces, the absolute errors of the forces are really nice and small, but it
takes about eight times longer than including the force information via the Taylor expansion, as you can see; using the Taylor expansion is very efficient but still improves the force accuracy too. That was for a molecular system. I also did a benchmark for a solid, a cathode, a combination of lithium and some transition metal oxides; you can see that including the force information via the Taylor expansion again improves the forces. And we also tested this by running MD simulations using the two types of training: the first one with energies only, where you can see that the MD simulation sometimes explodes, while including the force information it is very stable. But realistically we would like to look at interfaces, so you have a more complicated system, with a solid here, the lithium anode, and also the molecules of the electrolyte. When we use the Taylor expansion as the approximation it becomes difficult: the displacements are sometimes too large, and it is not easy to make good predictions for such a complicated system. That's why we thought that the same idea, but with a non-linear model, Gaussian process regression, could help. So we generate additional data points by introducing non-linear Gaussian process regression, which can produce more data points by learning from the derivative values, as you can see here. We then tested this model again on the hydrogen dimer; on the left-hand side is the linear Taylor expansion, where we introduce displacements in many different regions, sometimes small and sometimes large, and sometimes it fails to predict the Lennard-Jones potential, but the GPR model, as you can see here, fits perfectly. This is just to illustrate what I explained on the previous slide: when we have very small displacements, like 0.03
angstrom, both methods, the additional Taylor data and the additional data from the non-linear GPR, work very well. But when we include larger displacements, the Taylor expansion starts to fail, while the GPR still works really nicely. We then extended this model to our interface between the ethylene carbonate molecules and the lithium anode. Looking at the error distributions, you can see that the force errors with the GPR model are much lower, and using the Taylor expansion also improves things a little, while using only the plain DFT data points gives higher errors. Both the energies and the forces show that including our additional data points in the training set improves the quality, the accuracy, of the forces, which can be a really good way to get accurate forces for our machine learning models in the future. This is my last slide, which I just explained: we are still benchmarking how to run MD simulations using the additional data points from the non-linear GPR models. In summary, we want to construct machine learning potentials that make use of the force information: we can reduce the reference data, using fewer actual DFT calculations, and get more data points from the Taylor expansion and the non-linear GPR. Using this larger data set also allows us to run stable, long MD simulations, which is good for materials prediction and for understanding material properties. So I hope I could demonstrate that training machine learning models on simulation data is useful for accelerating computational materials discovery; that validating your machine learning model is very important, especially against more accurate data or experimental measurements; and that sharing your models and databases is beneficial for the scientific community. Thank you very much. Thank you, Nong, nice talk and perfect timing, so we have time for questions; let me run up there.
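The two augmentation schemes just summarized, a linear Taylor expansion around existing reference points and a non-linear GPR fitted on top of them, can be sketched together on a one-dimensional toy potential. The potential, the displacement size, and the RBF kernel settings below are all invented for illustration; only the idea of generating extra training energies from existing energies and forces, without new reference calculations, follows the talk.

```python
import numpy as np

def potential(x):
    """Toy 1-D potential standing in for a DFT energy surface."""
    return (x - 1.0) ** 2 * (1.0 + 0.3 * x)

def force(x):
    """Analytic force F = -dV/dx (what a DFT code would also return)."""
    return -(2.0 * (x - 1.0) * (1.0 + 0.3 * x) + 0.3 * (x - 1.0) ** 2)

# Sparse "DFT" reference points (the expensive calculations).
x_ref = np.array([0.6, 1.0, 1.4])
E_ref = potential(x_ref)
F_ref = force(x_ref)

# 1) Taylor augmentation: small displacements around each reference point,
#    with energies estimated from the forces at no extra reference cost.
#    E(x - d) ~ E + F*d and E(x + d) ~ E - F*d, since F = -dE/dx.
d = 0.1
x_taylor = np.concatenate([x_ref - d, x_ref + d])
E_taylor = np.concatenate([E_ref + F_ref * d, E_ref - F_ref * d])

# 2) GPR augmentation: fit an RBF regressor to all points, then sample
#    further synthetic points from it (non-linear, unlike the Taylor step).
def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x_train = np.concatenate([x_ref, x_taylor])
y_train = np.concatenate([E_ref, E_taylor])
alpha = np.linalg.solve(rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train)),
                        y_train)

x_gpr = np.array([0.8, 1.2])
E_gpr = rbf(x_gpr, x_train) @ alpha

err = float(np.max(np.abs(E_gpr - potential(x_gpr))))
print(f"max error of GPR-augmented energies: {err:.4f}")
```

The Taylor step is exactly first order, which is why it degrades for larger displacements, while the GPR step can interpolate non-linearly between the augmented points, matching the comparison shown on the slides.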
Thank you for your very interesting talk. I have two very quick questions. The first: as I heard, you use artificial neural networks, by which, if I'm correct, you mean multilayer perceptrons. Have you compared these multilayer-perceptron-based models to graph-based models? If you know the crystal, there are very popular things called crystal graph convolutional neural networks, and a lot of models based on that; have you compared them? The second one: I heard that you use this for molecular dynamics. If you want to predict the dynamics, you need a very precise potential energy surface. Before running the molecular dynamics, did you test, with something like the nudged elastic band or dimer method, whether your potential has the same potential energy surface as the DFT one, with the same set of stationary points? I'm curious about this because it is very seldom that one hears of multilayer perceptrons being used in this field. Thank you for your talk. Thank you very much for the questions. For the first question: no, I haven't constructed or developed a graph network as a descriptor yet. The descriptor that we use here is based on two-body and three-body terms describing the local atomic environment; that was first developed by Behler and Parrinello, and then we extended it to be easier to use by introducing Chebyshev polynomials, so that one can go to higher orders more easily. So you still bring a lot of prior knowledge to your network. Yes, that's right, it is not a graph network; thank you. And then your second question asks whether it is okay to compare my DFT and the neural network here, to see if it's accurate enough to use in MD simulations, right?
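The point made in this answer, that the network never sees raw coordinates but only symmetry-invariant functions of the local environment, can be illustrated with a minimal Behler-Parrinello-style radial symmetry function. The actual ænet descriptors use Chebyshev expansions and also three-body terms; the function and parameter values below are simplified stand-ins chosen for illustration.

```python
import numpy as np

def radial_descriptor(positions, center, etas, r_cut=6.0):
    """Behler-Parrinello-style radial symmetry functions for one atom.

    G_eta = sum_j exp(-eta * r_ij^2) * f_cut(r_ij), which is invariant
    under rotations, translations, and permutation of the neighbours.
    """
    rij = np.linalg.norm(positions - positions[center], axis=1)
    rij = rij[rij > 1e-8]  # drop the centre atom itself
    fc = np.where(rij < r_cut, 0.5 * (np.cos(np.pi * rij / r_cut) + 1.0), 0.0)
    return np.array([np.sum(np.exp(-eta * rij**2) * fc) for eta in etas])

# Toy 4-atom cluster; the descriptor depends only on interatomic distances.
pos = np.array([[0.0, 0.0, 0.0],
                [1.5, 0.0, 0.0],
                [0.0, 1.5, 0.0],
                [0.0, 0.0, 2.0]])
etas = [0.5, 1.0, 2.0]
g = radial_descriptor(pos, 0, etas)

# Rotating the whole cluster leaves the descriptor unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
g_rot = radial_descriptor(pos @ R.T, 0, etas)
print(np.allclose(g, g_rot))  # rotational invariance of the descriptor
```

This preprocessing is why a plain feed-forward network suffices: the symmetry handling lives in the descriptor, not in the network architecture.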
The answer is that I just wanted to make sure that the neural network potential that I constructed really also works for the diffusion paths. As you can see, the activation energy is really very high here, because the path is not fully optimized; the point was just to take the images from the NEB calculation and predict those structures with the neural network potential. This is really just to show that it captures structures that were not included in the training set. Yeah, thank you so much for your answer. Okay, thank you very much for the nice work and presentation. If I understood correctly, you said that it's possible to model these electrolyte interfaces by coupling this method with DFT, for machine-learning-driven MD. So my point is: what do you think about the stability issue when modeling these interfaces, either on the anode or the cathode side? So this is quite complicated, right? This is just to demonstrate that you have a real mix of bonding: between the solid, the lithium, which is a metallic solid,
but our molecules, like the ethylene carbonate in this cartoon, for example, have mixed covalent bonds, like oxygen-hydrogen or carbon-hydrogen here. It would be nice to build big data sets directly from DFT, but that would be really large, and it is also not accurate with GGA, with PBE plus van der Waals corrections, which we already tested; we would need more accurate methods, maybe hybrid functionals, which are very, very expensive. And of course we cannot find experimental measurements to confirm, so we try to look at formation energies or binding energies, but there are no experimental measurements to validate our predictions. That's why we try to develop a very efficient method: generating, let's say, artificial new structures without running the DFT itself. So the idea is to use not too many accurate DFT, or even hybrid-functional DFT, calculations, and to derive more data, more structures, by expanding via Taylor expansion or via Gaussian process regression, to get new structures, new energies, and new forces that we can include in the training set. That is the idea. Thanks for the very nice talk; I have two questions, following up a bit on the previous question. My understanding is that your training data set is basically configurations of your system that you put into the neural network. In your slide 12 you show a time-dependent simulation; do you have training data points for each of those times, or are they predicted? Do you train at one time and then predict the time evolution? That's the first question. I'm not sure I understood your question; you mean whether I also predict the time? In your slide 12. Okay, 12. Yes, so each of the red dots, which as I understand come from the neural network, is predicted at a different time, right? So for each of those times, do you have training data points that you learn from,
or do you learn at, I don't know, time zero and then predict as time goes on? In this case, we run a long simulation with the neural network potential, select frames from that trajectory, and recalculate them with DFT; that gives the red points. We construct the neural network potential based on DFT, right? Yes. And then we use the neural network potential to run a long MD simulation, like two or five nanoseconds, select frames from the ANN MD, and recalculate single-point DFT for those structures. So you have a new sample set at each time that you learn from? That's correct. I see. But in this case it is independent: we just run MD simulations of the water structure that we have here, recalculate single-point DFT for the structures we get, and then look at the error bars in between. Interesting. And my second question is actually about the structure of the neural network you are using. He mentioned graph neural networks, and you said you are not using those; are you using feed-forward neural networks? Because if you use feed-forward neural networks, I imagine that you basically take the structure of your system and flatten it into a single input layer, but then you totally lose the knowledge of the symmetries; how do you do that in practice? You mean, do we use feed-forward neural networks? Are you using feed-forward neural networks? Yeah. Okay, then it means that you basically take the positions of your atoms and flatten them into a vector to put into your neural network, but then you totally lose, I mean, you totally lose the idea of the symmetries, right?
But actually it's not quite like that, because we transform the atomic structures into local-environment descriptors, right? The neural network only learns from the numbers that we extract from the atomic structures and energies, and those are what is passed to the machine learning model. So our neural network doesn't learn the atomic positions, the XYZ coordinates, directly; we transform the XYZ coordinates into a description of the local environment. So there's a preprocessing step. Okay, I see. Yeah, that's correct. Thanks. I skipped this part, sorry, because maybe it was already known; we can talk about the descriptors later. Thank you very much. Great, thanks again, Nong, a great pleasure. And the next talk will be given by Zili from ICTP, here, on the phase diagram of iron in Earth's core from deep learning; I will signal at five minutes. So, okay, good afternoon. My name is Zili, I'm a postdoc here working with Sandro Scandolo. Today I'm going to show something on a different topic, which is the phase diagram of pure iron in the Earth's core from deep learning. This is the outline of today's presentation. I will start by showing that we can apply methods from computational materials science to study problems in Earth science. Then I will move on to today's main topic, pure iron at extreme conditions. To study the phase stability, I developed a neural network potential; with it I study the mechanical stability of the BCC phase, and then I will show you the phase stability of all the possible solid iron phases. This is followed by a very brief summary. So, Earth is the planet we live on, and of course we want to understand what is inside. Geophysics gives us methods to decipher the density, the compressional wave velocity, and the shear velocity as a function of depth, and based on the discontinuities we can divide the Earth into different layers: the crust, the mantle, the outer core, and the inner core. And the shear velocity for Earth's outer core
is zero, which means it is in the liquid state; the other parts are in the solid state. But that is not enough: we would also like to know what kinds of minerals could be present at Earth's mantle conditions and at Earth's inner-core conditions. This can be provided by mineral physics, and the figure on the left-hand side is one of my favorites: it clearly summarizes which minerals could be present at Earth's mantle conditions. Probably you are more familiar with this phase, ferropericlase; it is a magnesium-rich (Mg,Fe)O, and this phase at lower-mantle conditions is highly debated. But from mineral physics we cannot know precisely the relative percentages of the different minerals; for this we need to know the thermodynamic properties of all these minerals, and in order to check our predictions we need to know their elastic properties as well. Of course, Earth is not static, it is dynamic: in a subduction zone, very cold, very heavy material sinks into Earth's deeper interior, while very light, hotter material rises up along the core-mantle boundary. In order to study this dynamic process we need to use fluid dynamics, and for that we need to calculate the viscosity, the elastic properties, and the transport properties of the different minerals; that is the input for the fluid dynamics. And nowadays I realize that isotopic fractionation is a new trend in our field. So, one of the central problems for Earth science is to understand all these properties for the different minerals, and in the past this was very difficult. But in the last 20 years a lot of methods have been introduced from computational materials science that really make the geologist's life easier. I just want to show a few cases to illustrate the chemical reaction that happens when computational materials science meets Earth science. These are from my previous work. The first case I
want to show you is about the deep carbon cycle; this was actually my master's work. At that time, people were very interested in the deep carbon cycle in the Earth's deep interior. We know that at the Earth's surface most of the carbon is stored in carbonate phases such as siderite, but from our calculations, based on ab initio molecular dynamics simulations, we found that at high pressure and high temperature it dissociates into iron-rich carbonate phases plus diamond. So from our study we found that diamond could be everywhere at lower-mantle conditions. We also did a test comparing the performance of different exchange-correlation functionals; in this case we found that a hybrid functional performs better than the DFT+U method. The second case I want to show is about the formation of the Moon; this was actually my PhD thesis. After the Apollo missions, people started to realize there are lots of similarities between the Moon and the Earth, and in order to explain these similarities the giant-impact model was developed. In this model a Mars-sized object collides with the proto-Earth, resulting in the formation of the proto-lunar disk, and from this disk the Moon is formed. For our part, we wanted to know the behaviour of the impactor's iron core during this giant-impact process. For this we used ab initio molecular dynamics simulations to calculate the Hugoniot curves. The Hugoniot curves tell us the peak conditions that could be reached during the impact: as long as you give me the impact velocity, I can tell you the highest pressure and temperature that can be reached. We also calculated the entropy along these two Hugoniot curves, and after obtaining all this information we developed a thermodynamic model and found that a lot of iron vapour could be produced during this impact process. After my PhD work I got the opportunity to continue my work on pure iron here with Sandro Scandolo. So this time I will be focusing more on
the phase stability of the solid iron phases. Of course, iron is the major component of the Earth's core, so every property of this material is very interesting. For example, we would like to know its melting curve, and we also want to know its thermal conductivity at Earth's core conditions. And you can go beyond that: you can extend the melting curve to super-Earth conditions, because we would like to know whether there is life on other planets. Due to these very important properties, a lot of work has been done in the past; I will give you a very brief summary. Supposedly the most stable structure before melting is the HCP phase. Alfè et al. performed ab initio molecular dynamics simulations up to inner-core-boundary conditions and found that at around 330 GPa the melting temperature is about 6,000 Kelvin. But the problem is that those simulations used a relatively small simulation cell, with only 150 iron atoms, and it could make a difference if a larger simulation cell were used. A following work by Laio et al. performed molecular dynamics simulations with an embedded-atom model, and they found that the melting temperature at the inner-core boundary is only about 5,000 Kelvin, so actually 1,000 Kelvin lower than the estimate of Alfè et al. That makes a huge difference if the melting temperature is that low. The same debate is also present in the experimental work: one set of experiments supports the high melting temperature at the inner-core boundary, while work from the Keke-Lawson group found that the melting temperature is quite low. So my conclusion is that the melting curve of this HCP phase remains highly debated. Then, after the publications on the melting curve of the HCP phase, people started to realize that probably HCP is not the most stable phase before melting, and indeed the work done by Keke-Lawson and
Belonoshko found that at high pressure and temperature the HCP phase would transform into the BCC phase. That's very interesting: if the BCC phase is present at inner-core conditions, a lot of stories about the geodynamo would have to be rewritten. However, with the rise of computational power, a large simulation was done by Godwal et al. in 2015, and they found that BCC iron is not even mechanically stable; it sits at a saddle point along the Bain path, meaning it deforms if you change its c/a ratio. But this picture was challenged again by Belonoshko in 2017, and a new phenomenon emerged: in this BCC iron they found fast self-diffusion, and this liquid-like component entropically stabilizes the BCC iron phase. So my conclusion about the stability of BCC iron is that it also remains highly debated. And probably BCC iron is even the most stable phase; there is one work supporting this idea, as you can see in the figure on the right-hand side, where at high temperature, at 6,000 Kelvin, the Gibbs free energy of the BCC phase is lower than that of the HCP phase. So, to summarize, the thermodynamic stability of the solid iron phases remains highly debated, and in our work we would like to solve all these problems in one piece of work. For this we developed a new neural network potential for iron, and the package we are using is called DPMD, which was developed in Roberto Car's group. I won't give too many details about this method because there was already a dedicated talk on it; I just want to stress one more point, that you can treat the deep learning model as a function with atomic positions as input and energy, forces and stress as output. Nowadays there are a lot of packages you can use to train neural network potentials, and they differ a little bit in the atomic descriptor and the neural network structure; in DPMD they use some embedding
techniques to construct the atomic descriptor. As you can see, it's all matrix operations, which is very good for GPU calculations, because nowadays GPUs are the new trend, and they also use a residual neural network structure, which greatly enhances the learning performance for very deep neural networks. After we trained the neural network potential for iron, we used an independent test data set to characterize the performance of the DP models. As you can see in the figure on the left-hand side, the x-axis represents pressure and the y-axis represents temperature; we characterized the performance of the DP models on energy, forces and pressure over a wide range of pressure and temperature conditions, and in each column, from top to bottom, the rows represent the different phases: liquid, FCC, HCP and BCC. We also checked the performance of the DP models with different numbers of atoms, just to make sure that the performance remains stable if we want to perform larger-scale simulations. As you can see from the upper panel, the root mean squared error on the energy is less than 6 meV per atom; think about it, we are performing simulations at 6,000 Kelvin, where k_B T is very large. The error on the forces is in most cases less than 0.3 eV per angstrom, which is super accurate, and for the pressure the root mean squared error is in most cases less than 0.6 GPa. We also did a comparison between our DP models and the EAM potential developed by Belonoshko: the root mean squared error on the forces predicted by the EAM potential is in most cases larger than 1 eV per angstrom, so it's huge. After checking the accuracy of the DP models, the first question we wanted to solve is the mechanical stability of the BCC iron phase. For this we used different techniques, and the first one is to directly calculate the elastic constants. I just remind you that for the FCC or BCC phase there are three independent elastic
constants, C11, C12 and C44, and the stability conditions require that C11 must be larger than C12 and that C44 is positive. For this we used two different strain matrices: we deform the BCC iron lattice, then we run MD simulations to obtain the stress tensor, and from this we can estimate the elastic constants. After obtaining the elastic constants, we estimated the pressure and temperature conditions at which C11 is equal to C12; above this BCC instability line the BCC iron phase is mechanically stable, and below it, it is not. We also checked our predictions based on these elastic constant calculations: we took several conditions along this instability line, then we deformed BCC iron along the Bain path by changing the c/a ratio. After deforming the BCC iron we calculated the stress difference, and from this we can calculate the work done to deform the BCC iron. In the figure on the right-hand side, you can treat this tetragonal distortion t as an order parameter: if it is equal to one, the structure is the BCC phase, and if it is equal to 1.26, it corresponds to the FCC phase. So based on the work done to deform from the BCC phase to the FCC phase, we can sum it up to calculate the free energy difference between the BCC phase and the FCC phase. However, from our calculations we found that the stress difference does not change continuously as a function of the tetragonal distortion t; that means there is a phase transition here, and we cannot use this method to estimate the free energy difference between the BCC phase and the FCC phase. But we can still integrate the stress difference around t equal to one to show that the BCC phase is located in a local minimum. So now we have finally sorted out the mechanical stability of BCC iron: we found that BCC iron is mechanically stable at Earth's inner-core conditions. Next, we
would like to know whether temperature alone is enough to mechanically stabilize the BCC iron phase. For this we used the SSCHA; I won't give many details about this method, because later there is a dedicated talk on it. From the SSCHA we can obtain the auxiliary phonon spectrum, and it agrees very well with previous studies using a very similar approach. But the problem is that, for this strongly anharmonic system, whether this auxiliary phonon spectrum is meaningful is questionable. So there is another quantity in the SSCHA, called the positional free energy Hessian, which is the second-order derivative of the free energy with respect to the atomic positions. As long as you find an unstable mode, like here, a collective motion along this imaginary mode will decrease the free energy of the system, which means that in this case the system is not mechanically stable. We also considered the effects of temperature, and we found that temperature greatly stabilizes the BCC iron: as you can see, at 4,000 Kelvin the frequency at the N point is negative, but with increasing temperature the unstable region shrinks towards the Brillouin zone centre, around the gamma point. However, from these calculations we found that even at 7,000 Kelvin BCC iron is not mechanically stable. So there is a difference between the SSCHA calculations and our elastic constant calculations, and it reflects the contribution of self-diffusion: self-diffusion stabilizes the BCC iron at the highest temperature, 7,000 Kelvin. So our research highlights the role played by self-diffusion in mechanically stabilizing the BCC iron. Then we wanted to calculate the Gibbs free energy of the different phases, because we want to build the phase diagram. For this we used the thermodynamic integration method to calculate the Gibbs free energy of the FCC phase, the HCP phase and the liquid phase. Basically, in thermodynamic integration you couple your system of interest
with a system that has a known free energy. But you cannot apply this method to the BCC phase, because the BCC phase has self-diffusion; if you do, the simulation will not converge. So for BCC iron we used the two-phase coexistence method: in this method we put solid iron and liquid iron in contact, then we run simulations in the NVE ensemble, and from these simulations we can obtain the equilibrium pressure and temperature. As you can see, there is an interface that could really affect the final results, but we do not have such problems, because we can systematically test the effect of the number of atoms on the melting pressure and temperature, and during these simulations we constantly monitor the structural changes, just to check whether a new phase appears. In this figure I show the Gibbs free energy differences calculated with the DP models; the different colours represent different temperatures, the dotted lines represent the liquid phase, the solid lines represent the BCC phase and the dashed lines represent the FCC phase, and we treat the HCP iron as the reference. As you can see, at 5,000 Kelvin, near the melting point of the HCP phase, the Gibbs free energy difference between the FCC and the HCP phase is on the order of 10 meV per atom, so we started to worry about whether the accuracy of our DP models is enough to distinguish these two phases. For this we used free energy perturbation theory to check the accuracy of the Gibbs free energies of our DP models. In this method we first run simulations in the NPT ensemble, then we extract several snapshots and run density functional theory calculations on them; from these calculations we get the energies at the DFT level, we plug them back into the formula, and we can calculate the Gibbs free energy difference between the DFT potential and the DP models. From the figure on the right-hand side you can clearly see that the accuracy
of the Gibbs free energies of our DP models is quite good, in most cases less than 3 meV per atom. The most interesting feature of our DP model is that its performance is really stable with respect to the number of atoms. Just to give you an example, for the HCP phase the free energy difference between the DFT model and the DP model at around 200 atoms is about minus 1 meV per atom, and if we use a much larger simulation cell, with 500 atoms, it stays quite stable at minus 1 meV per atom. That's really good, because it indicates that our potential can be used for large-scale simulations. And this is the major conclusion of this work: we finally sorted out the thermodynamic stability of all the solid iron phases. In this figure, the blue line represents the melting curve of the BCC iron, the green line the melting curve of the FCC iron, and the red line the melting curve of the HCP iron. As you can see, HCP iron has the highest melting curve, which means HCP iron is the only stable phase before melting; there is no solid-solid phase transition at high temperature. Another feature is that the melting curve of the BCC iron phase, if we extend this blue line to around 4,000 Kelvin, actually falls within the instability region; that is the reason we cannot perform the simulations there to determine the melting temperature of the BCC iron. On the side is a short comparison with experimental work: our calculations do support a very high melting temperature at the inner-core boundary, around 6,000 Kelvin. After establishing that HCP iron is the most stable phase, we would like to know whether this phase yields seismic velocities compatible with the geophysical observations. For this we calculated the shear velocity as a function of temperature; the inner-core temperature is very close to the melting temperature of the HCP iron phase, and as you can see, at the melting point
of the HCP iron the shear velocity is still very high compared to the geophysical observations, about 1 km per second higher. In our previous work on this seismic velocity puzzle, we suggested that it is probably caused by the effects of viscous grain boundaries, because we need to translate our single-crystal elastic constants into the shear velocity of polycrystals; and there is another work suggesting that the low shear velocity is caused by superionic HCP iron alloys, that is, you put a lot of light elements in the interstitial sites of the HCP structure. But we found that BCC iron actually has a very low shear velocity, which is compatible with the geophysical observations. That is intriguing, because we have now found that BCC iron is not thermodynamically stable, yet it has very good geophysical properties. And this is my summary: we finally determined the phase stability of pure iron up to Earth's inner-core conditions, while the origin of the observed low shear velocities in the Earth's inner core remains debated. Thanks for your attention. Thank you for the great talk, and we are open for questions. You should have a very... very large energy difference. Besides... can we trust the DFT energies, or is there a Monte Carlo estimation of the relative error of the DFT? What is the scale? Because you studied the error due to the neural network. Yes, yes, yes. So the energy...
Eventually it could be that BCC iron is more stable because DFT is not predicting the right energy. So the energies we are using here are at the DFT level; we use free energy perturbation theory. We actually did a test with different exchange-correlation functionals, and we confirmed that our conclusion holds for different exchange-correlation functionals, but the problem is that we still don't know the accuracy of DFT at high temperature; that remains a problem. The usual quantum Monte Carlo simulations put the electronic temperature at zero, but that would not hold in this case, because the electronic temperature is very important and greatly stabilizes the BCC iron phase. Can you help me just understand a really basic thing? You have a very nice curve up there for the different structures, and you've shown us some stability arguments, but if I look at that curve, which phase is the most stable? It's HCP, yes. It's HCP, and how do you know that from that graph? How do you know? Because, let's say, let's compare the melting curves of the BCC and the HCP: at those conditions the Gibbs free energy of the liquid phase is lower than that of the BCC, but it is still higher than the Gibbs free energy of the solid HCP phase. So that means the HCP is the most stable phase. Okay, so if you had a real melt of liquid and you cooled it, then you would see that structure is more stable; it would be the first one. Thank you. When we tried to predict the Grüneisen parameters, in our case we were using a Gaussian approximation potential for BCC iron, it was incredibly difficult, and one needed to have absolute k-point sampling convergence to make sure all the training was consistent. Is this something that you also observed, or do you have any comments there?
We didn't really calculate the Grüneisen parameters, but for our training dataset the cells always have more than 200 atoms, and we use 2x2x2 k-points for the sampling. Yeah, but we were surprised, because in an equivalent system with just, say, one atom of iron we needed to go up to 40x40x40 sampling, or equivalently dense sampling on the supercells, to actually be able to get the right Grüneisen parameter, and the way we interpreted that was that the machine learning was getting a little bit confused by having supercells of varying size, somehow still some remnants of k-point sampling; but maybe GAP is also more sensitive to this. Probably; I guess you are working at lower temperature, while here the temperature is very high, so the density is very smooth. Okay, I think if there are no more questions we thank Zili and all the speakers. I'm looking at Nicola Seriani here: we are meant to start at 2 o'clock, but maybe we should do 2:15, so everyone can enjoy the ICTP culinary delights, and we meet again at 2:15 here.
Okay, welcome back. So we have talks from two different sessions now; the first one is a session called Methods for Phonons and Thermal Transport. Oh, I forgot to introduce myself, sorry: I'm Shobana Narasimhan from the Jawaharlal Nehru Centre for Advanced Scientific Research in Bangalore, India. So we will now have a talk on thermodynamics and dynamical properties of strongly anharmonic and quantum materials.
The talk is by Lorenzo Monacelli, who is from EPFL in Lausanne. Thank you very much for the introduction, and I want to thank the organisers for giving me the opportunity of speaking to this very broad audience today. I am Lorenzo Monacelli, and I will tell you something about how to compute thermodynamic and dynamical properties in materials that suffer from strong anharmonicity in the ionic displacements and also show sizeable quantum fluctuations. We are interested in computing thermodynamic properties because thermodynamics is the study of a system in equilibrium, and in particular we are interested in linking the microscopic properties of a material, which we can simulate in a computer, to the mesoscopic properties that are described, for example, by temperature, volume and number of particles. This link is provided by statistical mechanics, and in particular by a quantity which is fundamental in this field: the free energy. If we know the free energy of the system, we can establish what the equilibrium is at a specific thermodynamic condition; for example, in the canonical ensemble, the minimum of the free energy at fixed temperature, volume and number of particles determines the equilibrium, and so we can use this property to compute phase diagrams. Now, from thermodynamics, the free energy is simply the internal energy minus the temperature times the entropy, and this is linked to the microscopic properties of the system by an equation which is quite simple: the internal energy is just the average of the Hamiltonian, and the entropy can be computed with Boltzmann's definition, it is just the average of the logarithm of the density matrix.
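For reference, the relations just described can be written compactly (my transcription of the standard statistical-mechanics formulas, with the density matrix written as ρ̂ and the partition function as Z):

```latex
\begin{aligned}
F &= U - TS, \\
U &= \langle \hat H \rangle = \operatorname{Tr}\!\bigl[\hat\rho\,\hat H\bigr], \qquad
S = -k_B \langle \ln\hat\rho \rangle = -k_B \operatorname{Tr}\!\bigl[\hat\rho \ln\hat\rho\bigr], \\
\hat\rho &= \frac{e^{-\hat H/k_B T}}{Z} \;\Longrightarrow\; F = -k_B T \ln Z .
\end{aligned}
```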
Now, in this equation there are in principle both ionic and electronic degrees of freedom, but at room temperature, or at temperatures lower than thousands of kelvin, the electronic degrees of freedom are not so important, at least directly, because the temperature is not sufficiently high to excite electronic transitions. Therefore, in most cases we can make a very good approximation, the Born-Oppenheimer approximation, which consists in integrating out the electronic degrees of freedom and treating the system as if it were always in the electronic ground state. The degrees of freedom I am describing here are therefore the ionic ones, this density matrix describes the ionic positions, and this Hamiltonian is the Born-Oppenheimer Hamiltonian, which can be computed at any level of theory: density functional theory, more sophisticated ones like quantum Monte Carlo, or even machine learning potentials. Now, the free energy is very important because it not only allows us to describe the equilibrium of a system, so the phase diagram, but also its thermodynamic properties, through the derivatives of the free energy. The derivative of the free energy with respect to volume gives us the pressure, for example, and this allows us to compute the thermal expansion of a material; and if we take further derivatives we can compute thermodynamic observables like the bulk modulus, the heat capacity and the thermal expansion coefficient. So we can completely characterize the equilibrium properties of a system just by knowing the free energy and its derivatives; it is a very important quantity.
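As a minimal sketch of the idea that derivatives of the free energy give observables, here is a toy calculation (my own illustration, not the speaker's code) on the classical ideal-gas free energy in hypothetical reduced units: the pressure comes from -dF/dV and the heat capacity from -T d2F/dT2, both by finite differences.

```python
import math

# Toy model: classical monatomic ideal gas free energy (constants dropped;
# they do not affect the derivatives). Hypothetical reduced units N = k = 1.
# F(V, T) = -N k T [ ln V + (3/2) ln T ]
N, k = 1.0, 1.0

def free_energy(V, T):
    return -N * k * T * (math.log(V) + 1.5 * math.log(T))

def pressure(V, T, h=1e-5):
    # p = -(dF/dV)_T via a central finite difference
    return -(free_energy(V + h, T) - free_energy(V - h, T)) / (2 * h)

def heat_capacity_V(V, T, h=0.1):
    # C_V = -T (d2F/dT2)_V via a second central difference
    d2F = (free_energy(V, T + h) - 2 * free_energy(V, T)
           + free_energy(V, T - h)) / h**2
    return -T * d2F

V, T = 2.0, 300.0
print(pressure(V, T))         # matches the ideal-gas law N k T / V = 150
print(heat_capacity_V(V, T))  # matches equipartition (3/2) N k = 1.5
```

The same finite-difference logic, applied to a free energy computed from simulation instead of an analytic model, is what gives the thermal expansion and bulk modulus mentioned in the talk.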
Unfortunately, the free energy is also extremely challenging to compute from a simulation, and the reason is that this average cannot be obtained by simply sampling the potential energy, as we are used to doing for observables: the logarithm has an important contribution from regions of phase space which have a very low probability of occurring in a simulation at equilibrium. Therefore we cannot compute this integral using standard approaches. However, methods have been developed to compute the free energy, and one of the most effective is thermodynamic integration. In this method we define an auxiliary Hamiltonian for which we know how to compute the free energy analytically, for example a harmonic oscillator, and then we parameterize a Hamiltonian with a parameter lambda, so that it is equal to the auxiliary Hamiltonian when lambda is zero and equal to the exact Hamiltonian when lambda is one. Then we can integrate lambda from zero to one and get the exact free energy. This equation is exact, and the nice thing about it is that the derivative of the free energy with respect to lambda can be computed using standard sampling techniques, because this object is actually an observable: it is just the difference between the exact potential, for example the one we get from density functional theory, and the auxiliary one that defines our auxiliary Hamiltonian.
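A minimal, self-contained illustration of thermodynamic integration on a classical 1D toy model (my own sketch, not the speaker's code; kT = 1, and exact quadrature stands in for molecular dynamics sampling): the free energy of an anharmonic oscillator is recovered from the harmonic reference via F1 = F0 + integral over lambda of <V1 - V0>.

```python
import numpy as np

# Classical 1D toy model, hypothetical reduced units with kT = 1.
# Auxiliary Hamiltonian: harmonic,  V0(x) = x^2/2  (free energy known).
# Exact Hamiltonian:   anharmonic,  V1(x) = x^2/2 + x^4/4.
x = np.linspace(-8.0, 8.0, 4001)  # quadrature grid replacing MD sampling

def trapezoid(y, xg):
    y = np.asarray(y, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xg)))

def v0(x): return 0.5 * x**2
def v1(x): return 0.5 * x**2 + 0.25 * x**4

def free_energy(v):
    # F = -kT ln Z, configurational partition function only (kT = 1)
    return -np.log(trapezoid(np.exp(-v(x)), x))

def mean_dv(lam):
    # <V1 - V0> over the Boltzmann weight of V_lambda = V0 + lam*(V1 - V0)
    w = np.exp(-(v0(x) + lam * (v1(x) - v0(x))))
    return trapezoid((v1(x) - v0(x)) * w, x) / trapezoid(w, x)

lams = np.linspace(0.0, 1.0, 21)
f_ti = free_energy(v0) + trapezoid([mean_dv(l) for l in lams], lams)
f_exact = free_energy(v1)  # directly computable only in this toy 1D case
print(f_ti, f_exact)       # the two estimates agree closely
```

In a real material the average over lambda is sampled with molecular dynamics (or path integrals, as discussed next), and the exact free energy is of course not available for comparison.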
Now, as I told you, this is a feasible approach, and many calculations have already exploited it, but it can nevertheless be extremely expensive. In particular this is true if our system suffers from sizeable quantum fluctuations, because then sampling this object for many values of lambda between zero and one becomes challenging: molecular dynamics is no longer a good way of doing this sampling, and we need to rely on more sophisticated approaches like path integrals. Doing this for many values of lambda can be demanding; it is not impossible, and some cases have been achieved, but it is still computationally extremely expensive. So we wondered: is there a way, starting from this exact expression, to get an approximate one which is more feasible computationally? The first thing we can think of doing is to assume that the quantity inside the integral is a constant; then we can take it out of the integral and approximate the free energy in this way. This does account for anharmonicity, because this term corrects our Hamiltonian for all the anharmonicity, and we can prove something very important about this approximation: the approximate-equality symbol here is actually a lower-or-equal sign, so the exact free energy is always below or equal to the free energy we obtain with this approximation. This is very interesting because, while this is already a quite good approximation for the free energy, depending on the starting auxiliary Hamiltonian, we can do more.
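This inequality (a Gibbs-Bogoliubov-type bound) can be checked on a classical 1D toy model; the following sketch is my own illustration, not the speaker's code. For a pure quartic potential at kT = 1 I scan a family of trial harmonic Hamiltonians: every trial free energy lies above the exact one, and the lowest is the best estimate, which anticipates the variational idea described next.

```python
import math

# Classical 1D toy model, hypothetical reduced units with kT = 1.
# Anharmonic potential: V(x) = x^4/4.
# Trial Hamiltonians:   harmonic with frequency w, V_w(x) = w^2 x^2 / 2.

def trial_free_energy(w):
    # F_trial(w) = F_harm(w) + <V - V_w>_w, with classical Gaussian
    # averages <x^2> = 1/w^2 and <x^4> = 3/w^4 at kT = 1.
    f_harm = math.log(w / math.sqrt(2 * math.pi))  # -ln Z_harm (config. part)
    avg_v = 3.0 / (4.0 * w**4)                     # <x^4/4> over the trial
    avg_vw = 0.5                                   # <w^2 x^2/2> = kT/2
    return f_harm + avg_v - avg_vw

# Minimize over the trial frequency by a simple scan
ws = [0.5 + 0.001 * i for i in range(3000)]
f_min = min(trial_free_energy(w) for w in ws)

# Exact configurational free energy: F = -ln \int exp(-x^4/4) dx
#                                      = -ln( 2 * Gamma(5/4) * sqrt(2) )
f_exact = -math.log(2 * math.gamma(1.25) * math.sqrt(2.0))
print(f_min, f_exact)  # f_min stays above f_exact, as the bound requires
```

The full (quantum, many-atom) version of this minimization, over force constants and centroid positions, is exactly what the self-consistent harmonic approximation introduced below does.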
We can now say that, among all possible trial Hamiltonians, the one that gives us the lowest final value of this equation will be the best one, because we know the exact free energy is always lower than this value. This allows us to formulate a variational principle: we choose, among all possible Hamiltonians, the one that gives us the minimum value of the free energy. Of course we cannot choose among all possible Hamiltonians, but only among those for which we know how to compute the free energy, and these are, for example, harmonic Hamiltonians. That is how we define the self-consistent harmonic approximation, in which the trial Hamiltonian is chosen as the most general harmonic Hamiltonian, and optimizing this Hamiltonian just means optimizing its parameters, which are the auxiliary force constant matrix and the average centroid positions of the atoms in the system. By optimizing these we can compute the free energy, and the nice thing is that we do not need to run any kind of molecular dynamics here, because we know how to sample this Hamiltonian analytically: we do not need to thermalize anything, we just need to extract randomly distributed atomic configurations and compute this average during the minimization. This is extremely efficient and can be done ab initio quite easily. I will show you a very interesting application, done with Nicola Marzari, which is the case of the halide perovskites. These are a very important family of systems for applications in perovskite solar cells, which is the fastest growing solar technology nowadays, and the remarkable thing is that one of these halide perovskites reached a power conversion efficiency of 25.2%, which is competitive with silicon, itself achieved after 60 years of optimization, while this is
pretty young candidate the problem of these candidates is that they contain lead which is toxic and so we need to find a substitution to lead which can be feasible and this perovskite here with tin is one of the most promising eco-friendly solution but while its performance are competitive with the one of lead it has a major drawback which is that it is unstable if we expose this system to the air and what happens actually is that it tends to oxidize and it tends to it decomposes in a different phase which is not perovskite it is a yellow phase which then further oxidize and transform and degrade the performance so if we can prevent this perovskite to transform in this yellow phase it will be extremely useful and the problem with this perovskite is that we don't know the real phase diagram and the reason is that this is a phase rich material which have the old perovskite polymorphs from cubic, tetragonal and all to wrong because different temperatures if the system is synthesized under inert atmosphere but if we take this system and we expose to the air it rapidly change the structure into this yellow phase which can be then restore the cubic structure by rising the temperature above 425 Kelvin so this is a quite complex phase diagram and it's not clear which one of these two phases is actually more stable so is the air simply pushing this perovskite structure toward the real ground state which is the yellow phase or rather it's just a surface effect of some kind of impurities that generate at the surface and so a more complex effect and to answer the question the only way we can do with calculation is computing the free energy of the two phases and compare them and this is the result interesting result to get so I'm comparing the free energy of the yellow phase here in orange with the free energy of the perovskite structure and what we observe that in the working condition which are between 300 and 350 Kelvin we observe that actually the cubic phase of the 
perovskite and all the perovskite structure are more stable than the yellow phase so it seems that the perovskite structure is more stable and so this formation of the yellow phase is not an intrinsic effect but it's due to surface and impurities and other more complex effect but that can be treated with better engineering of the system and we can also go beyond and if you wonder about the accuracy of this free energy I'll show the next slide in which we thanks to the derivative of the free energy we can compute, characterize the thermodynamic properties of the system and for example here we relaxed the volume including all an harmonicity of finite temperature thanks to the fact that we can compute the pressure to the derivative of the free energy and we computed the thermal expansion which is in very good agreement with experiment and also the transition between here you see this angle here that the difference between an orthorhombic cell when theta is different from 90 degrees to a rhomboidal cell when theta becomes 90 degrees and as you can see the system is transit the experimental transition temperature is 360 so if you look at this line this point here is 350 Kelvin so it's very good agreement with experiment and also the transition between the rhomboidal structure and the cubic structure occurs when the A and C meter becomes the same and as you can see here this occurs at 450 Kelvin in our simulation while the experimental transition temperature is found to be 440 Kelvin so again a remarkable good agreement which seems that this method is very powerful and very predictive so this is another example that we recently that has been recently accepted in natural physics and this work is done in collaboration with Marsula which we present tomorrow and so probably we give you more details about this on the phase diagram of hydrogen of high pressure hydrogen here we are at 0 Kelvin so all the anharmonicity is given by the quantum fluctuation of the H2 molecules and 
you can see here the difference between the static phase diagram so the enthalpy you get from a standard DFT calculation and what you get if you add on the top of the static enthalpy we get 0.0 energy which was the state of art before this work and then we can add the total contribution of the free energy using this self-consistent harmonic approximation and as you can see it's the phase diagram changes completely and in this last term here we observe a first molecular to molecular transition between phase 3 and phase 6 of hydrogen and then we observe the atomization at 577 Kelvin and this is very important because the atomic phase of hydrogen is predicted to be a room temperature superconductor so it's very interesting and the interesting thing about this work is that before the so we reproduce this transition temperature which has been observed by an experiment to another molecular phase and we did the calculation also for deuterium to try to predict what is the isotope effect and we obtain a shift for this transition of 32 gPa and after we publish on archive this work almost contemporary work by Paul Hubert measure this isotope effect to be 35 so we were both of us not aware of one of the other work and we got a very good agreement with the experiment so I hope this can convince you that this method has really a very important predictive even in very difficult system like this one so but these are thermodynamics so this is to characterize thermodynamics but sometimes we want to do more and we want to study the phonons however there is a caveat on all these methods both the that even similar methods like self-consistent phonon temperature dependent energy landscaping and this similar methods as these have is that one is tempted to take these phonons defined by the harmonic auxiliary Hamiltonian and define phonons through them but unfortunately this is not correct because this Hamiltonian is just an auxiliary quantity to describe the thermodynamics of the system 
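This warning — that the auxiliary harmonic Hamiltonian is only a thermodynamic device — can be made concrete in a minimal classical 1D sketch (toy asymmetric potential and invented parameters, not the SSCHA code): the auxiliary force constant that solves the self-consistent gap equation is not the same as the curvature of the variational free energy with respect to the centroid position.

```python
import numpy as np

# Toy 1D classical SCHA: the auxiliary force constant k_aux is NOT the
# curvature of the free energy with respect to the centroid position.
# Potential, temperature and all parameters are invented for illustration.
kT = 0.3
x = np.linspace(-6.0, 6.0, 6001)
dx = x[1] - x[0]

def V(y):                          # asymmetric anharmonic potential
    return 0.5 * y**2 + 0.3 * y**3 + 0.25 * y**4

def d2V(y):
    return 1.0 + 1.8 * y + 3.0 * y**2

def scha(R, k=1.0):
    """Solve the classical gap equation k = <V''> at fixed centroid R."""
    for _ in range(80):
        rho = np.exp(-0.5 * k * (x - R)**2 / kT)
        rho /= rho.sum() * dx
        k = (rho * d2V(x)).sum() * dx
    rho = np.exp(-0.5 * k * (x - R)**2 / kT)
    rho /= rho.sum() * dx
    F_H = -kT * np.log(np.sqrt(2.0 * np.pi * kT / k))
    F = F_H + (rho * (V(x) - 0.5 * k * (x - R)**2)).sum() * dx
    return F, k

# centroid minimizing the variational free energy (coarse scan)
Rs = np.linspace(-0.5, 0.2, 141)
Fs = np.array([scha(R)[0] for R in Rs])
R0 = Rs[Fs.argmin()]

k_aux = scha(R0)[1]                # auxiliary force constant at the minimum
h = 0.01                           # finite-difference step for the curvature
hess = (scha(R0 + h)[0] - 2 * scha(R0)[0] + scha(R0 - h)[0]) / h**2
print(R0, k_aux, hess)             # the cubic term shifts the true curvature
```

In this toy the free-energy curvature comes out below the auxiliary force constant, because the odd (cubic) anharmonicity couples the centroid to the width of the trial Gaussian; this is a 1D caricature of the difference between the auxiliary force constants and the free-energy Hessian discussed next.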
to compute the phonons, we need to ask ourselves what a phonon really is. A phonon is an excitation of the lattice, so unfortunately it is intrinsically a dynamical quantity, while this theory is a static one, because it is based on the free energy. However, there is one case in which a lattice excitation can have zero energy, and so can be described by a static theory: second-order displacive phase transitions, in which a lattice excitation whose energy goes to zero triggers a symmetry lowering of the system. This can be computed with the static response function, which is related to the Hessian matrix of the free energy, and this is not the same thing as the auxiliary force constants. Therefore we can compute the Hessian matrix; take for example the CsSnI3 perovskite I showed you before. In the harmonic phonon dispersion of the cubic phase, you can see we have imaginary phonons in this region, which means this phase is unstable at zero Kelvin, as is common for many materials. If we then compute the free-energy Hessian at 250 Kelvin, we see a strong renormalization of all the bands, and as we increase the temperature this soft mode hardens, up to 450 Kelvin, when it finally becomes stable. We can therefore say that the transition occurs between 400 and 450 Kelvin, which again is in extremely good agreement with the experiment, where it occurs at 440 Kelvin. However, the band structure I showed you is good only for computing second-order phase transitions; again, those are not the real phonons of the system. Phonons are dynamical quantities, and they must be computed with a dynamical theory: we can obtain them as the poles of the spectral function given by the Green function of the lattice, and in the static case, at frequency equal to zero, the 
inverse of the Green function coincides with the free-energy Hessian, so the two theories match. We need a dynamical theory to compute phonons, and this can be done with the time-dependent self-consistent harmonic approximation, in which, instead of optimizing the free energy, we find the dynamical equation that makes the action stationary, i.e. that kills the derivative of the action. We get this self-consistent equation, which is very close in spirit to time-dependent density-functional theory, and if we take the linear response of this equation we can get the Green function. The details are in this paper here, where we show the implementation of a Lanczos algorithm, similar to what is done in time-dependent DFT, to achieve this.

I can show you the results, again for the cubic phase of CsSnI3. This is the spectral function, and you can see how anharmonic this system really is: this is the spectral function itself, not its logarithm, and it is really broad. As you can see, all the phonons here mix with one another, so in many cases it is not even clear which is which, and they overlap. This is very important and relevant if you are studying thermal transport, for example; there is a poster this afternoon by Giovanni Caldarelli which talks more about this, and also, I think, a talk tomorrow by Michele Simoncelli, if you are interested. We can also plot the same quantity, the spectral function, resolved mode by mode, to understand what is happening and why this spectral function is so broad, and you can see a lot of satellite peaks, which is very interesting by itself. We can also use the same technique to compute Raman and infrared spectra, which are dynamical quantities. Here I am showing the Raman spectrum of hydrogen phase III at 250 GPa, and the comparison between 
the harmonic approximation, the auxiliary frequencies you get from the self-consistent harmonic approximation, and the full time-dependent self-consistent harmonic approximation result, compared with the experiments. As you can see, if you do not fully account for these dynamical properties there is an important shift, completely non-negligible, almost 50% of the energy, which is relevant because, as in this case, the Raman spectrum is used to determine which structure corresponds to which phase, since hydrogen at these high pressures is very difficult to probe with other techniques. This work has been done by Antonio Siciliano, who will present a poster tomorrow, so go and see his work directly.

So these are my conclusions. I hope I convinced you that the self-consistent harmonic approximation is a very good way of computing the free energy and its derivatives, and of calculating phase diagrams and other thermodynamic properties. This presentation comes with a big warning for any of you working on phonons: if you use this approximation, be careful, because the eigenvalues you get from the auxiliary Hamiltonian are not the real phonons; they must be corrected, and the real phonons can be computed with the time-dependent extension of the theory. The code to do this is all open source and available at www.sscha.eu, and we are currently organizing a school about the code, between the 26th and the 30th of June in San Sebastian; more details are available on the website, and the full program will hopefully be uploaded in a few weeks. I want to thank all the collaborators who made this work possible: Francesco Mauri, Michele Casula, Nicola Marzari, Antonio Siciliano and Giovanni Caldarelli. Thank you very much for your attention.

Thank you for a very interesting talk. Questions? Thank you for your talk, it is really interesting. How would you estimate the error bar for your energy from the SCHA, compared to the thermodynamic integration method? Thank you. 
So this is a very nice question. Actually, in this case the error bar is present but not visible, because the dots are bigger; you can see a bit of the error bar here. We have two kinds of errors. This is a stochastic method, which samples this average through a stochastic approach, so we have a trivial error bar given by the fact that we have a finite number of configurations; this is trivial to estimate, it is provided by the code as part of the output, and this is what is plotted here. Then there is a second error, which is more difficult to estimate: the error of the approximation itself. The point is that, after one simulation, you get not just one quantity but a full thermodynamic description, so you can compare other quantities, and if some of them are known experimentally you can see how they compare, like in this case, where we have other phase transitions whose positions we know; if there is very good agreement, then it is very likely that the approximation is good.

A curiosity: when the temperature goes to zero, does your free energy go to zero, or do you have a residual quantum contribution? You have all the quantum effects; so, it depends. In this case this is a free-energy difference, but you have all the quantum effects. I have not shown this here, but if you go to very low temperature you see that in this material the quantum effects at these temperatures are not so relevant; in the case of hydrogen, however, they are. There everything is quantum, the effects are all quantum effects, and if you compute, for example, the thermal expansion, you see that up to the Debye temperature the volume remains almost constant and then goes up, and all the thermodynamic properties reflect this. Any more questions? Can I have the speaker again? 
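As an aside on the "trivial" error bar mentioned in the answer above: since the stochastic averages are estimated from a finite set of configurations drawn exactly from the auxiliary Gaussian, the statistical error is just the standard error of the mean. A hedged 1D sketch (toy potential and parameters, not the SSCHA code):

```python
import numpy as np

# Standard error of a stochastic SCHA-style average <V - V_H>_H,
# estimated from a finite number of sampled configurations.
# Toy 1D potential and parameters are invented for illustration.
rng = np.random.default_rng(0)
kT, k_h, N = 1.0, 2.0, 4000

xs = rng.normal(0.0, np.sqrt(kT / k_h), N)     # exact sampling, no thermalization
dV = (0.5 * xs**2 + 0.25 * xs**4) - 0.5 * k_h * xs**2

mean = dV.mean()
stderr = dV.std(ddof=1) / np.sqrt(N)           # the "trivial" error bar
print(mean, "+/-", stderr)                     # shrinks as 1/sqrt(N)
```

Halving this error bar requires four times as many configurations, which is why being able to sample the auxiliary Hamiltonian analytically, with no molecular dynamics and no thermalization, is such a practical advantage.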
So we now switch topics to spins and magnetism, and the first talk will be by Antimo Marrazzo of the University of Trieste; the title is "Digital 2D spintronics with electric dipoles: from spin-FETs to ferroelectric topological insulators". Can you hear me? Yeah. So thanks for the introduction, and let me just thank all the organizers for inviting me here today; it is really a true honor and a great pleasure. What I would like to talk about is actually two different projects: the first one will be about modeling novel 2D materials with strong Rashba spin-orbit coupling for spin field-effect transistors, while the second one will be about ferroelectric topological insulators. But these two projects, as you will see, actually have a lot in common; they are, in a sense, two different ways to control the electron spin and manipulate spin currents in a digital way. We will see that we only need to care about the sign of a gate voltage, so everything is either black or white, 0 or 1, yes or no, and with this digital control we can realize spin-electronics, or probably I should say spin-orbitronics, devices made of 2D materials. The way we are going to do this, in both cases, will be by exploiting, hopefully in a smart way, very strong electric dipoles.

Spintronics has been around for a while and the field is a little bit gigantic, so today we will be concerned with the two basic concepts you see here: the first is the spin field-effect transistor, as it was put forward by Datta and Das more than 30 years ago, and the other is the concept of topological insulators. The spin field-effect transistor is actually a very simple concept: you have a spin current that is injected through this ferromagnetic contact here, and it goes into a two-dimensional electron gas in the presence of Rashba spin-orbit coupling. Now, in order to have spin-orbit coupling in an electron gas you need to break inversion symmetry, and the way you do this is through an external out-of-plane electric field, driven by this gate voltage here. So with this gate voltage you control the amount of Rashba spin-orbit coupling you have; because of the Rashba spin-orbit coupling, the spin in the electron gas precesses, so there is a precession here in the channel, and then at the end you put another ferromagnetic contact; depending on the relative alignment between the spins, you can have different output signals. All of this can happen because an electron gas with the Rashba interaction has this very nice band structure here, with rich spin textures, the spins winding around the center of the valley, and in principle you control the rate of precession with the gate voltage. This seems very interesting, but we still do not have a Datta-Das spin-FET in our pocket after 30 years, and in fact it might actually have been a little bit of a naive proposal; there are many problems associated with realizing this device. The one I think is among the most important is that, to have relatively short channels and a device you can consider sufficiently small, you would in principle need exceedingly large electric fields to control the Rashba coupling.

To me, the situation changed radically about 10 years ago, when people like Arthur Freeman, Alex Zunger, Zhang, Liu and Luo realized that what was essential to make progress was just in front of us, but we were blind to it, we couldn't see it; in fact, they called this effect the hidden spin polarization. I will try to explain this mechanism, which is astonishingly simple and yet rather important. As I said, to have Rashba spin-orbit coupling you need to break inversion symmetry, and people used to think that meant breaking the global inversion symmetry, in terms of the space group; but they realized that spin-orbit coupling is a local, almost atomic, effect, so you only need to break inversion symmetry in terms of the point group. So you can have a situation where you have a centrosymmetric material with strong dipole fields that compensate each other, and these two components lead to doubly degenerate bands with opposite spin polarization. You can have strongly Rashba-split bands, but they are doubly degenerate, the polarizations cancel each other, and the two states are localized on one of the two separate sectors you see here, forming the inversion partners. In their work, Zunger, Freeman and coworkers not only proposed this idea but also made a list of materials, and this is one of them: it is made of lanthanum, oxygen, bismuth and sulfur, and you see it has a strongly Rashba-split band structure, both in the conduction and in the valence band, and it is doubly degenerate, so we have opposite spins (I think they use the green and orange colors) that compensate each other.

At about the same time they proposed this, they also realized that you can use this effect, this kind of material, to substantially improve the original Datta-Das proposal, and you see it here. Now you have this strongly Rashba-split band structure, but you cannot use it unless you really break the global inversion symmetry, in terms of the space group, and you do this again with a gate voltage. But there is a difference with respect to the original Datta-Das proposal: now the strength of the Rashba spin-orbit coupling is entirely given by the internal, intrinsic electric field of the material, which in this case is very strong, because essentially this material is made of portions bound together by ionic bonding. With this external field you just break the degeneracy between the two Rashba bands, you see there is a splitting in energy, and you can select which one of these hidden spin polarizations you want to use when you dope the material in this field-effect transistor.

Now, here I try to show this in more general terms. We take again the simplified Datta-Das model, but now our 2D material is one of these type-II Rashba materials, using the nomenclature by Zunger: it has strong Rashba spin-orbit coupling, and we just use the gate voltage to split the degeneracy and decide which spin texture to use, because with a positive gate voltage one of the two spin textures goes up in energy, and with a negative voltage it goes down, so we can select which one we want. So we want centrosymmetric materials, we want them to have local dipoles, and of course we want strong spin-orbit coupling. What we did was to search for other materials like this: we screened our database of easily exfoliable two-dimensional materials, and our best shot was this candidate here, made of lutetium and oxygen; it comes from an experimental parent compound, so it is a structure that is supposed to exist. What we calculated with simulations is, first, that it is easily exfoliable, with a binding energy actually slightly lower than that of graphene, so we really think it should be easy to isolate as monolayers, and that it has a very strong Rashba effect. Just to give you a feeling of how strong this number is: if you convert it, with a simple-minded calculation, into the distance it takes for the electron to flip its spin by 90 degrees, it would be of the order of, let's say, one nanometer. This is the lowest conduction band, you see there is a strong Rashba splitting; this is valley number one, the one I am talking about. And why is there such a strong Rashba effect? Well, you see here the potential profile: there is a strong drop from the outer region to the internal region of the material, so there is basically a strong dipole here, and another one here, and they compensate each other; in fact, overall the structure is centrosymmetric, and so, as explained by Zunger and Freeman, this valley is doubly degenerate, of course, and the two minima are localized, one on the lower part of the material and the other on the upper part.

Now, of course, what we do is put an electric field in the simulation, an out-of-plane electric field, and as expected we break this degeneracy: there is a splitting in energy between the two Rashba bands. But, for the first time in the literature, what we did was to actually dope the material, doing real simulations with real electrons added. This was possible thanks to the methodological work done by Thibault Sohier in his PhD thesis with Matteo Calandra and Francesco Mauri, where he implemented the whole DFT and DFPT framework in a field-effect, double-gate setup. So now not only do we apply an electric field, we also add electrons, we dope the material, and what we observe is that the splitting, which is the fundamental quantity we want to optimize here, is actually reduced: it is lower than it would otherwise be without considering the doping, and this had not been explained at the time. So we tried to understand why this happens, and to do that we of course did first-principles simulations, but we also built a very simple electrostatic model, which you see here: the top gate and the bottom gate are represented as charged planes, and the material is also represented as two charged planes separated by a dielectric. Now, we have this band splitting S, which is proportional to the internal electric field, and it is made of two contributions: the screened external field due to the gates, and a sort of internal field due to the charged planes of the material itself, because these two Rashba bands, as I said, correspond to states localized in different sectors of the material, so once you start to dope it, the electrons do not go everywhere, they basically go to one of the two sides. And in fact this is what happens when you study the splitting versus this gate charge difference, which sets the external electric field; this is done with density-functional theory in the field-effect setup. If you study it also as a function of doping, you really see that the doping kills the effect a bit, and it is even clearer when you consider the slope of this quantity: there are two different regimes, clearly separated here. It is actually easy to understand this in terms of the model. If the doping is low and the electric field is sufficiently high, as I said, you populate only one of the two Rashba branches, so you have a strong contribution coming from the charge imbalance, you put charges on one side only; but at some point, as you keep adding electrons, you start to touch the other Rashba branch, which is higher in energy, this effect is no longer there, you add charge basically on both sides, and you get to this other regime, which we call the low-susceptibility regime. So this was of course useful to understand the physics here, but it is also useful to derive optimal operating conditions for this kind of device, because now you can do the reverse process: you can put constraints, such as "I want the splitting to be larger than room temperature" or "I want my Fermi level in this position", and then derive the optimal values you need to use to operate the device, that is, to set the gate voltages.

So this is what I wanted to tell you about the work we did on spin field-effect transistors. I will now switch gear a bit, but still continue to talk about controlling spins with gates, as you will see. Now we will not discuss things in terms of a volatile effect; we will try to move to a non-volatile approach, which is interesting for a number of reasons, and we will not do computational discovery as I have shown you up to now, but rather a material-design effort, where we engineer artificial heterostructures to have the desired functionality. So now we will talk about the second work, on ferroelectric van der Waals topological insulators.

There are many kinds of topological insulators these days; the first one to be realized in experiments, and actually one of the first to be originally proposed in theory, is the so-called quantum spin Hall insulator. This is a topological phase that occurs in two-dimensional systems with time-reversal symmetry, so non-magnetic systems, and it is described by this Z2 topological invariant: if the invariant is trivial, you just get a normal two-dimensional semiconductor; if it is non-trivial, what you get is a topological phase with helical edge states, you see them here crossing the bulk gap, and these edge states are spin-momentum locked, you have opposite spins going towards opposite directions. Now, these materials are interesting for applications because the electron transport occurring at the edges is low-dissipation, or we expect it to be low-dissipation, because time-reversal symmetry forbids elastic backscattering; and, as I said, these edge states are spin-momentum locked, which is of course something very interesting if you want to make a spintronic device. All these effects are robust because they are protected by the topology of the bulk electronic wavefunction, meaning that if you now start to consider disorder or impurities, nothing changes. Of course, since the very early days of the field, people have tried to come up with devices you could make out of these properties, out of these materials, and the simplest device you always think about is the field-effect transistor; indeed people proposed this very simple device here. I am showing just one of the many examples you can find in the theoretical literature, with the experimental counterpart on the right-hand side. The idea is very simple: you take these 2D topological materials, you put them under an out-of-plane electric field, and if the field is sufficiently strong you basically close the gap and reopen it in the trivial phase, so you can switch on and off the presence of these edge states, which are potentially low-dissipation and spin-momentum locked. But the feature of this is that it is volatile, and what we want is a similar effect that is non-volatile. If you want something non-volatile, well, you typically think about ferroelectrics, so we wanted a ferroelectric material that is a bit special, because we want the polarization direction to be coupled to the topological invariant: for one polarization direction it is topological, for the other it is trivial. And if you think for a second, or for more than a second, this is actually very unlikely, if not impossible, to occur in a single bulk material. So the question was: can we actually make a non-volatile topological field-effect transistor? I think the answer is yes, and today I will talk about our own work on the subject, which goes back to my PhD thesis some years ago and then some later follow-ups. But I really want to mention that there have also been other independent, related efforts all around the globe that elaborated on similar ideas, and more recently I see more and more works appearing on this, so the ideas are gaining momentum. For now it is only among theorists and simulation specialists, but I will try to convince you that the ingredients are so basic and so simple in general that I am confident that at some point in the near future the experimentalists will start to pick up on this.

Now, I said we wanted to have a ferroelectric TI, but it did not really work that way; it was the other way around. We were inspired to try to make something like that when we screened for novel quantum spin Hall insulators with high-throughput computational methods, and among the few candidates we found at the time, the worst performing, really the worst we got, was this material here, made of indium, sulfur and zinc. It was terrible: it had a small band inversion and almost no indirect gap. But it had a nice feature: it was essentially made of two separate monolayers, so it was a single 2D material, exfoliable from the other layers, but still made of two sectors, one indium sulfide and the other zinc sulfide. And what is interesting is that this material, considered as a standalone material, is actually a ferroelectric. So we started to ask ourselves questions like: what happens to the band structure if I now switch the polarization? Would it change? I mean, if it were just indium sulfide, of course it would not change, but now it is different, there is an interface with another material. We started to think about this, and at the end of this thinking we actually came up with a revisited version of the famous band-inversion mechanism. For those of you who have never seen this: band inversion is what basically drives most topological phases, and it is actually very simple. There are two bands, typically with different orbital character, let's say an s-like band and a p-like band, which for a number of reasons (it can be the bonding, or maybe just the crystal field in the material) can be inverted; and now, if there is spin-orbit coupling, what happens is that these degeneracy points are removed, there is a band gap, so overall the system is a semiconductor, and it has this very peculiar band structure around the Fermi level. What we did was to assign these two bands not to a single material but to two different monolayers, one band to this monolayer and the other to this other monolayer; now we put these two monolayers next to each other, and, thanks to the van der Waals interaction, they just stay together, and now, if there is spin-orbit coupling, well, you might actually open a gap and have this kind of inverted structure that might lead to a topological phase.

So this is interesting, I think, but you can do more than that, because now you can play with the nature of these two monolayers; for instance, you can take one monolayer to be a ferroelectric, so you consider the interface between a ferroelectric insulator and a trivial insulator. Now, because this is a ferroelectric with an out-of-plane polarization, depending on the side you look at you might have, let's say, different vacuum levels; what actually changes is the band alignment between these two monolayers. So in one case, on one side, you might have a band inversion, bands crossing each other, while in the other case you have just a trivial semiconductor. And this realizes exactly what we were trying to achieve: for one polarization direction the system is trivial, for the other it is topological, so it has the non-volatility we were looking for. But of course we started with this ternary compound that was not good at all, because it was a terrible quantum spin Hall insulator, so what we wanted to do was to optimize this. We took as our reference ferroelectric indium selenide, which is very well studied, among the most famous two-dimensional ferroelectrics, I think; it has the conduction-band bottom at gamma. We looked for a partner for it, and we screened materials databases, not only our own one on the Materials Cloud but also the one by DTU that you find online, and we tried to find a semiconductor that was compatible, had the top of the valence band at the gamma point, and had the correct band alignment to show both the trivial and the topological phase depending on the polarization direction. I will show you the simple condition for this in the next slide; for the moment I just show you the result of this screening, and I will not explain the details, but just tell you that these are all possible partners. 
The ones that have the solid bar in this window here are very likely to show this effect, and the message is that this is not rare, because in the end we are starting from trivial semiconductors — they don't have to be topological, they become topological when you put them together. So there are many candidates, and what I will show is what Marco Gibertini and I decided was best to demonstrate this effect as an example: it was good for simulations, but definitely I think there are better combinations out there waiting to be found, especially from the point of view of experimental work. So we decided there was an optimal couple: indium selenide and copper iodide, actually, because copper iodide is lattice matched, so simulations are easier to do, but also because they are both easily exfoliable, so we propose something that should make sense to do in experiments. You see the two band structures here, and the condition I mentioned before is that the conduction band of the selenide and the valence band of the other material need to be close: the difference needs to be lower than half the potential drop due to the ferroelectric polarization. First of all, it works: when you do the real interface and simulate it with DFT, in one case you have a 50 meV band gap and a quantum spin Hall insulator which is ferroelectric, and in the other case a trivial ferroelectric. But things are more complex than what you would expect by just looking at the two monolayers isolated far apart. In the trivial phase you get band gaps lower than expected, because there is a strong interlayer interaction, and in the topological phase the polarization is slightly lower. So you get a structure where not only do you have different polarization directions with different topology, but also different magnitudes of the polarization. And with this, going towards the end, we wanted to understand a
little bit why this happens, because spin-orbit coupling, as I said, is typically a local effect, but here the two bands sit on two different monolayers. We studied things like the band gap versus the interlayer distance — you see it increases a lot if you push the two materials together — and we built a simple Slater-Koster-type tight-binding model, inspired by maximally localized Wannier functions, to describe things in simple terms around the Fermi level, and you see again that the band gap increases strongly with the interlayer coupling term. So what we understood in the end is that yes, the gap is driven by spin-orbit coupling, but it is mediated by the interlayer coupling, by the hybridization between the two monolayers. And if you look at the numbers, you see that the gaps converge — saturate at some point — to a value that is proportional to the strength of spin-orbit coupling. The mechanism is such that the strength of the spin-orbit coupling is almost entirely, or can be entirely, transferred to the band gap; it's a very efficient way to open a band gap. So you can imagine materials where spin-orbit coupling is stronger because there are heavier chemical elements, and you might find band gaps even larger than the ones shown — and 50 meV is small for a semiconductor, but it's actually not so bad for a topological insulator. Finally, all of what I've shown is resilient to the twist angle: we rotated, we twisted the two materials, and the band structure is the same, so the effect does not depend on the alignment or on any kind of stacking registry; it's just about the band alignment between two materials — you need the proper band alignment between a ferroelectric and a semiconductor. So this is what I wanted to say for today, and let me just acknowledge all the people who were involved: you see here Rong Zhang, Thibault Sohier, Nicola Marzari, Matthieu Verstraete and Marco Gibertini, the two institutions who supported our work, and also PRACE, who gave us basically the fuel to run
all the simulations you've seen. Before I stop, two advertisements. The first is two posters: one by Marco here, on all this work we did together on these ferroelectric topological insulators, but also other work he did on a similar topic; and then another poster by Roberto Favata, a PhD student here in Trieste, about how to actually calculate Z2 invariants when you have large supercells and disordered systems. And finally, with Maria Peressi we might still have a two-year postdoc position, if you're interested, in first-principles modelling of 2D materials and interatomic potentials based on neural networks. If you're interested, really drop me an email or just talk to me, but you need to do it today, because the position is being closed. And with this, I thank you for your attention.

Okay, thank you very much. Questions?

Thank you for your nice talk. I have a question concerning the Rashba effect. I want to know if graphene can also be considered a material that presents this effect, because it presents this semi-metal behavior, or if there is a need to dope it in order to slightly open the gap and then be considered as presenting a Rashba effect.

Do you refer to the first one? The first one, let's say this one here, something like this. Actually, you can't really use graphene to do this, because you really need this special symmetry, you really need this type of Rashba material. You need to have something symmetric, and that would be okay for graphene, but you also need these strong dipoles — you see, this spin-layer locking effect, it's really based on that — so you need this kind of structure. You cannot do this in graphene; you can do other spin-orbitronic devices, but they're not based on this idea.

Yeah, nice talk. I was wondering exactly about the same thing: the bands showing the Rashba effect, are they iodine derived? Are they from iodine?
Iodine — ah, yeah, so this is a good question. So you had this Lu-O-I compound. Well, yeah, they are located on iodine: this part here is iodine, this is lutetium, the green one, and the red one is oxygen. So yes, they are located there.

So my question actually was — I guess they are the p orbitals of iodine? Well, yeah, I mean, I don't remember all the densities of states. My question, more precisely, is whether there is any role of the orbital degeneracy of the p's — I don't know what the crystal symmetry was. Well, in a sense, yes, it's all about the site symmetries: you need the correct local site symmetries to show a Rashba effect. So what you could do, for instance, is screen materials without doing explicit calculations, just looking at the site groups and thinking about which are the right ones. The reason I am asking is that recently there have been papers which show that if there is an orbital degeneracy, then these crossings are more like anti-crossings, and the Rashba effect can be very severely enhanced. Okay, no, that might be very interesting. We did it, you know, the hard way: we didn't look for this type 2, we really looked — with Rong Zhang, who did the screening — for the largest Rashba splitting we could find, and it happened that the largest splitting we could find corresponded to a material that was centrosymmetric, so it was really serendipity, so to say.

Okay, thanks for the nice talk. You might have mentioned it and I just missed it, but what functional did you use?
Okay, so whatever you have seen up to — let's say, up to here — is DFT with standard PBE GGA. We have done a DFT+U calculation to check that the f electrons don't really interfere for this part of the talk. In the next part of the talk the calculation is again just DFT, of course with nonlocal functionals to get the equilibrium structures for the heterostructures. But we also — just give me a second — yeah, we also checked what happens if you consider hybrid functionals. The situation is actually complex for this heterostructure. In one line: if you do things with hybrid functionals you still see this, in the sense that you get a topological heterostructure in one case and not in the other. With GW we couldn't do the full heterostructure — we could not afford it — we did just the isolated monolayers, and there GW would actually not predict them to be topological. The problem is that if you want to study heterostructures, you don't really want to use things like G0W0, because you lose all the charge transfer; all of that would remain at the DFT level, because it's just one shot. So we prefer to do hybrid functionals here, because at least you consider all this charge transfer, all these self-consistency effects. Thank you.

I forgot to move to the last — thank you for the nice talk. My question is about the twist angle and the robustness to the twist angle. If I understand correctly, you got a few snapshots that were computationally tractable, which probably don't have very long moiré length scales. I was wondering if there's a way to show it stays robust at — I mean — I know, that's a good point. In this specific case we think there is no problem, because the materials are lattice matched, and if they are lattice matched there shouldn't be any real moiré. Of course, if you rotate things anything can happen, but I don't expect any moiré physics to occur here. In general, if you consider a generic heterostructure, of course
there can be moiré, and that can change a lot — I mean, topology, superconductivity, it can change the entire physics. But what is nice about this is that the band inversion is at gamma, as in this case. By the way, we actually think that even if there is some moiré — of course it depends on the numbers — I would be confident that even with some moiré modulation, some moiré patterns, the effect should probably still be there. So I think it should be at least somewhat robust against the moiré strength. Thank you.

I think in the interest of time we'll conclude this session. Thank you, Antimo, and both speakers for sticking to time. We'll now break for tea and come back at 3:40. Thank you. I need to switch this off.

Okay, so welcome back to the last session of this afternoon before the posters. My name is Ralf Gebauer, I'm here from ICTP, and I will chair this session. We continue with the topic we have already started, meaning magnetism and spins and all this, and we are very happy now to have Sophie Weber from ETH in Zurich, who will talk about surface magnetization in antiferromagnets from symmetry considerations and first-principles calculations.

Thank you very much for having me. Today I'll be presenting some of our most recent results from DFT on surface magnetization in antiferromagnets. Since I'll be talking about antiferromagnets, or AFMs, for the next 25 minutes, I'll first remind you, from a big-picture level, of some reasons why they're interesting, or at least why we care. In the field of spin electronics, traditionally ferromagnets were the material of choice, and this isn't surprising, because they have a bulk magnetization that can be measured by direct methods in a lab and manipulated with external magnetic fields. More recently, however, AFMs have emerged as an alternative material for spin electronics. Even though by definition they have a vanishing bulk magnetization, they still have a bulk-like quantity, the Néel vector or staggered magnetization, which can serve as a
bulk-like quantity to store information, and which can be manipulated and read out indirectly, for example by anisotropic magnetoresistance. If we want to go back even farther, for many decades AFMs have been a key component in the read heads of magnetic recording devices, where a phenomenon called exchange bias is leveraged, whereby the exchange coupling between an AFM interfaced with a ferromagnet fixes the direction of the magnetization of the ferromagnet. So AFMs are already useful, and the presence of surface magnetization enhances this utility. As I said before, AFMs have zero bulk magnetization; however, this does not preclude the possibility that, for bulk AFMs with certain symmetries — I'll get into what I mean by that later — it is possible to have a finite magnetic dipole moment per unit area on the surface, which I'll call surface magnetization because it's easier to say. This is easy to see for the (001) surface of chromia, a prototypical AFM which I'll be focusing on for the first part of this talk: just by virtue of cleaving the surface and introducing a vacuum, you get this layer of uncompensated chromium spins. One of the cool things about surface magnetization is that it is a directly detectable probe of the AFM domain state, because the direction of the surface magnetization is directly connected to the underlying bulk AFM domain. In this case, if your moment is up, it corresponds to a bulk AFM domain with Néel vector +1; if it's down, you have a bulk AFM domain with Néel vector −1. So you don't have to rely on indirect methods to detect the state of your AFM domain. This surface magnetization is a quantity that can actually be measured in experiments — the most common method is scanning nitrogen-vacancy (NV) magnetometry — and it shows up in DFT: if you do a slab calculation of chromia in the (001) direction, you get this spin-polarized state. Another application is thought to be in exchange bias, which I talked about before.
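The "certain symmetries" condition mentioned here — spelled out later in the talk as the requirement that some component of a magnetization pseudovector be left invariant by every operation of the surface magnetic point group — can be sketched numerically. Averaging the group's action on a pseudovector gives a projector onto the invariant subspace; a nonzero projector means the surface is ferromagnetically compatible. This is my own minimal sketch of that check, not the speaker's workflow, using the magnetic point group 3 quoted in the talk for the (001) chromia surface.

```python
import numpy as np

def rotz(theta):
    """Proper rotation about z by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def magnetization_projector(ops):
    """ops: list of (R, has_time_reversal) for a magnetic point group.
    A magnetization pseudovector transforms as m -> det(R) R m, with an
    extra sign flip under time reversal; averaging these matrices over
    the group projects onto the invariant (ferromagnetic) subspace."""
    P = np.zeros((3, 3))
    for R, tr in ops:
        sign = np.linalg.det(R) * (-1.0 if tr else 1.0)
        P += sign * R
    return P / len(ops)

# Magnetic point group 3: {E, C3z, C3z^2}, no time reversal.
group3 = [(rotz(2.0 * np.pi * n / 3), False) for n in range(3)]
P = magnetization_projector(group3)
print(np.round(P @ np.array([0.0, 0.0, 1.0]), 6))  # m_z survives the averaging
print(np.round(P @ np.array([1.0, 0.0, 0.0]), 6))  # in-plane components vanish
```

For a roughness-sensitive case one would additionally scan the bulk group for operations connecting opposite-magnetization terminations, as described later in the talk.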
So this was an experiment by the Bennett group, where they switched the direction of the surface magnetization in (001) chromia, and when they did this, the direction of the exchange-bias field switched sign. It's not fully understood, but surface magnetization is believed to be connected to exchange bias. Now, magnetoelectric materials are a subset of AFMs which are particularly relevant for surface magnetization. Just if anyone needs a reminder: for materials to have the linear magnetoelectric (ME) effect, they must have broken inversion and time-reversal symmetries in their bulk, and as a consequence, if you apply an electric field you induce a bulk magnetization in the compound; conversely, if you apply a magnetic field you induce an electric polarization, and the proportionality is governed by the linear-response tensor alpha. Now it turns out that the symmetries required for a non-zero linear ME response are intimately related to the symmetries required for non-zero surface magnetization, and we can see this from a qualitative picture by considering that an electric field and a unit vector normal to a surface are both, at their core, just polar vectors. In the presence of either of these polar vectors, we can expect the bulk symmetry to be reduced in the same manner. This has the consequence that if you have a surface whose normal is parallel to an electric field, we expect the surface to acquire the same magnetization components as the bulk would if the electric field were applied in that direction. Another reason to focus on ME AFMs is that the direction of the surface magnetization can be controlled and manipulated by magnetoelectric annealing: there is a term in the free energy of linear MEs that is proportional to the product of the electric and magnetic fields times alpha, and since alpha switches sign for opposite AFM domains, by applying either antiparallel or parallel magnetic and electric
fields you can preferentially select the state of the bulk AFM domain and, as a consequence, the direction of the surface magnetization. This is in contrast to non-ME AFMs, where you usually need a prohibitively large magnetic field to switch the surface magnetization. In the cartoon picture I showed of the (001) surface of chromia, it was relatively straightforward to believe me when I said that there was a non-zero dipole moment per unit area, because you just saw a layer of bare magnetic spins. However, there are more subtle cases where the reduction of symmetry by the surface leads to a finite surface magnetization even when, in the ground state, the surface appears to be magnetically compensated. For example, in the case of the (−120) surface of chromia, which is perpendicular to the (001) surface, if you look at the chromium atoms here, they appear to go left, right, left, right, such that this surface should have zero magnetization. However — and I'll go into this more — it turns out that the symmetry of the surface is such that an induced surface magnetization is allowed once you create the surface, and it occurs by a canting of the surface moments. This is also possible in non-ME AFMs and centrosymmetric materials, for example iron difluoride. For a long time people were confused about this material, because it has a very large exchange bias when you interface the (110) surface with some ferromagnet, yet if you look at the (110) surface, the in-plane iron moments look to be magnetically compensated, so there should be no surface magnetization. However, again because the reduction of symmetry at the surface causes these sublattices to become inequivalent, the magnitudes of the moments can change such that you actually get a net surface magnetization. So that's an introduction to the topic in case you are not so familiar with it, and this is the outline for the rest of the talk. I'll first give a straightforward group-theory formalism which allows you to identify bulk AFMs and their
particular surface terminations which allow for a finite equilibrium surface magnetization — this has been developed in these previous papers here, so we're really just expanding on it a bit. Then I'll focus on chromia for the first part of the remainder of the talk: we'll look at the uncompensated (001) surface magnetization and its temperature dependence, as well as the induced magnetization on the (−120) surface, and then we'll also quickly go into cases of surface magnetization in non-ME AFMs. So here's the procedure — we're just going to focus on collinear AFMs for this entire talk. First, we have some AFM with a bulk magnetic space group, let's call it G, which contains point-group operations R — which may or may not include time reversal — and spatial translations. Now consider a Miller plane with a normal vector perpendicular to this plane; this is our surface of interest. To see whether we expect an equilibrium magnetization on the surface, we find the surface magnetic subgroup, defined as follows: you consider all of the operations of the bulk space group G which leave your surface normal invariant, modulo translations parallel to the surface — which I tried to depict in this cartoon — and we exclude symmetries which involve translations perpendicular to the surface, because translational symmetry is broken in that direction. Once you have that subgroup, you take the point group associated with it, so just the point-group operations, and you check — you can look it up in tables — whether this group is compatible with ferromagnetism. There are 31 such groups for which, if you act with all of the operations on a magnetization pseudovector, at least one component is left invariant. If the subgroup is one of these, the surface has an equilibrium surface magnetization; if not, it has none. So let's work through some examples. We'll go back to the case of chromia, which
is a magnetoelectric material with a lot of interest for applications because it has a relatively high Néel temperature of around 300 Kelvin. It has a bulk magnetic space group with broken inversion and time-reversal symmetries, and the moments are polarized along the (001) direction. Now, if we want to consider the (001) surface, we consider the operations which leave this surface normal invariant, and if you work through all of that — I won't go through the details — you get the magnetic point group 3. If we look at what these three operations — the three-fold rotations about (001) and the identity — do to the magnetization components, the only thing you have to notice is that Mz is the only Cartesian component which doesn't switch sign under any of the operations. This means the surface is ferromagnetically compatible, and the ferromagnetism has to occur along the (001) direction, the direction of the ground-state spin polarization. And indeed there have been multiple experimental verifications of this roughness-robust surface magnetization on (001) chromia. For the magnetoelectric AFMs there also exists a convenient bulk descriptor which we can use to give a quantitative estimate of the surface magnetization in terms of Bohr magnetons per unit area, and it comes from higher terms in a multipole expansion of the interaction energy. If you have a magnetization density mu of r interacting with a magnetic field and you do a multipole expansion, the first term is your normal magnetic dipole, and the second-order term involves the derivative of the magnetic field and a spatial component; the coefficient of this term is called the magnetoelectric multipole, and because it breaks both inversion and time-reversal symmetries, it is non-zero in these AFMs. A concept you may be more familiar with: in the modern theory of polarization there is an elegant connection between the bulk electric polarization and the
amount of charge on a surface perpendicular to the polarization, and indeed if you look at the units of volume-normalized polarization, it's charge per unit area. Analogously, the units of the multipolization normalized to the unit cell are Bohr magnetons per nanometer squared, so we can say that this multipolization tensor corresponds to the amount of surface magnetization we expect, with the surface normal parallel to i, the spatial component, and the magnetic dipole parallel to j. One of the reasons my advisor, Nicola, first got interested in this is that there's a surprising discrepancy between estimates of the (001) chromia surface magnetization from nitrogen-vacancy magnetometry and what is expected based on this multipolization — where, I forgot to mention, rather than the full continuum form, if the magnetization is fairly localized you can approximate it by assuming point magnetic dipoles centered at the atomic sites and summing over the magnetic unit cell, and this is what we use. Anyway, what you get from theory does not agree with experiment, but when we got to thinking more about this, we realized it is not so surprising. From a mean-field argument, the temperature at which a given magnetic moment orders is roughly proportional to its effective Heisenberg coupling, which you can define by summing all of the nearest-neighbor Heisenberg couplings for a given sublattice. In the bulk — there are five relevant nearest neighbors in chromia — our DFT calculations give a bulk effective J of around 40 meV. If we look at the electrostatically stable surface of (001) chromia, you see that a bunch of the nearest neighbors are simply cut off, so by virtue of the termination the effective J, even before considering surface relaxation, is 2 meV — more than a factor of 8 smaller. So it stands to
reason that at room temperature, around which the experiments are carried out, the magnetization of the surface sublattice is going to be essentially paramagnetic even if the bulk Néel vector is ordered, and this is what we get from Monte Carlo simulations — so this is in agreement. But we can still rescue the quantitative estimate from this bulk multipolization, as long as we use a different basis. As I said before, if we calculate it in the point-dipole approximation using the unit cell corresponding to the electrostatically stable surface, we get 12 Bohr magnetons per nanometer squared, give or take. However, remember we just said that the magnetization of the topmost chromium moment doesn't actually contribute to the experimental measurements, so a more appropriate basis to capture the surface magnetization is the one we get by taking this moment and translating it down by one c lattice vector. If we recalculate using this basis, we get a value of around 2 Bohr magnetons per nanometer squared, which is in quite good agreement with estimates from nitrogen-vacancy magnetometry. Okay, so those were the intuitive cases of surface magnetization; let's go to the slightly more subtle cases. I'll stay with chromia for now and go to a surface perpendicular to the (001) direction. The (−120) surface has a normal parallel to a two-fold axis, and in the ground state, with the moments all directed along (001), this surface is magnetically compensated. However, if we look at the magnetic point group of the (−120) surface, which is the point group 2 — just a two-fold rotation — the magnetization component which is left invariant is the one parallel to (−120), which is the surface normal. That means ferromagnetism is allowed on this surface, and it has to occur by a canting of the moments along the surface normal. So this is all well and good from a symmetry
perspective, but in addition to symmetry arguments, it would be good to see whether we can expect this to actually occur in a real material. To that end, we did DFT calculations — constrained magnetic calculations in VASP using a slab of (−120) chromia. What we do is fix the direction of the two center chromium layers along the ground-state (001) direction, and then cant the moments of the two surfaces by a canting angle with respect to the (001) axis. Even from a zoomed-out picture, you can see that this energy parabola is asymmetric with respect to zero, and if we zoom in — the energy scale is really, really tiny, but based on multiple tests, which I can talk about if you're interested, we think this energy lowering is real. What this means is that in DFT this surface is energetically stabilized at a non-zero canting, an induced surface magnetization of about 0.25 Bohr magnetons per nanometer squared. Okay, in the last few minutes I'll briefly talk about extensions of this surface-magnetization idea beyond ME AFMs. I talked about the group-theory formalism before, which allows us to identify surfaces that are compatible with ferromagnetism, meaning they can have an induced surface magnetization. However, something I swept under the rug is a subtlety between cases which are roughness-robust — meaning that even in the presence of surface roughness or atomic steps, the sign and magnitude of the surface magnetization stay constant — and roughness-sensitive cases, where you only get surface magnetization for an atomically pristine layer. The way you check which one you have is: once you have your surface magnetic point group, you go back and check whether in the bulk magnetic space group — the higher-symmetry bulk magnetic space group — there are any operations that take the combined vectors, the surface normal and the magnetic pseudovector, and translate the surface normal perpendicular to the surface while switching the direction of the
magnetization vector. If there are, that means you have surfaces occurring at atomic steps that are symmetry-connected — so energetically degenerate — but with opposite magnetization, and the surface magnetization will average to zero over a macroscopic area. If you have no such operations — so any operation which translates your surface normal to an atomic step keeps the direction of the magnetization — then, as in the case of (001) chromia, you really have a roughness-robust surface magnetization, which is more useful for device purposes. An easy example of the more trivial case is rock-salt nickel oxide. If you look at the (111) surface, just by looking at it you can tell there are ferromagnetic layers of spins polarized in plane. However, the bulk magnetic space group is of type IV — it contains time reversal combined with a translation — so you have a symmetry which is time reversal multiplied by a translation along the (111) direction; that's just the operation that takes this moment to this moment. What that means is that these two surfaces, which are symmetrically equivalent but have opposite signs of magnetization, will both exist in the material in the presence of roughness, so (111) nickel oxide does not have a roughness-robust surface magnetization. Okay, and finally I'll go to the centrosymmetric case and come back to iron difluoride. In addition to its interest as an exchange-bias material, it has also gotten attention recently as a candidate altermagnet — one of these AFMs which have spin splitting along certain directions of k-space — so it has a couple of cool features. Anyway, we can apply the formalism again: the bulk magnetic space group has inversion symmetry but broken time-reversal symmetry, and if you look at the (110) surface, the moments are polarized in plane, pointing along opposite directions, and the surface looks to be completely magnetically compensated. If you go through the magnetic point group formalism, the group is made up of 2′ and m′ operations, and if you go
through all these operations, the one I've underlined leaves the My component invariant, and in the basis of this point group, y is parallel to (001). So this surface is again ferromagnetically compatible, with the allowed component along the bulk (001) direction. This is in contrast to chromia, where the ferromagnetism was induced by a canting away from the bulk ground-state direction; here the iron moments have to stay aligned collinearly along (001), and the only way to get a finite ferromagnetism on the surface is for the magnitudes of the moments to become inequivalent. We wanted to check this again in DFT. These are not constrained calculations, just self-consistent calculations neglecting spin-orbit coupling — we don't need it, since we're not checking canting, only moment magnitudes. This is a slab of (110) iron difluoride, and all we do is look at the site-projected magnetic moment as a function of layer for the two inequivalent iron sublattices — center and corner refer to the positions of the sublattices in the unit cell — and they are not the same. The key takeaway is that if you sum the sublattice moments to get the total magnetization per layer, in the center of the slab it's roughly zero, whereas for both surface layers you get a non-negligible magnetization of about 0.01 Bohr magnetons — very small, but it's there. All right, that brings me to the end of my talk. In summary, we've presented a group-theory formalism which allows you to identify AFMs with surface magnetization and distinguish between the different cases: induced versus uncompensated, roughness-robust versus roughness-sensitive. We've discussed the case of (001) chromia and how, for ME AFMs, you can estimate the quantitative value of the surface magnetization from a bulk multipolization, as long as you account for the temperature dependence and use the right basis. And we've also shown through DFT
calculations that, in agreement with symmetry expectations, the creation of a vacuum-terminated surface gives an energetic stabilization when a non-zero magnetization is induced on these surfaces. Overall, the key takeaway is that this surface magnetization is probably more ubiquitous than a lot of people realize; even though it's a concept that has been studied for a long time, it has been kind of on the back burner, and I hope this encourages people to study it more. I'll just acknowledge, first and foremost, my advisor Nicola Spaldin at ETH, Andrea Urru, who is also at the conference, and Shine Ticabelle, who contributed with discussions and calculations, and also the materials theory group at ETH. Thank you.

Thank you very much, Sophie, for this nice talk about surfaces, symmetries and magnetizations.

Thank you for the very nice, interesting talk. I had one question about the terminations of the chromia surfaces. Yes. Especially, you have shown in the case of disorder that there's a different termination of the surface; I'm wondering why this specific one. You're talking about this? Yes. Yeah, sorry, I didn't explain this properly. The experimental termination of chromia is known to be the one on the left-hand side. I mean, there are some controversies about the possibility of some disorder, but for sure, by electrostatic arguments, this is the electrostatically stable surface, and this other surface does not occur naturally.
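The basis dependence at issue in this exchange follows directly from the point-dipole multipolization described earlier in the talk, M_ij = (1/V) Σ_a r_a,i m_a,j summed over the magnetic unit cell. This sketch uses a hypothetical two-site collinear AFM (not chromia's actual geometry) purely to show that translating one site by a lattice vector shifts M_zz by a "quantum" m·c/V, which is why the surface termination dictates the appropriate bulk unit cell.

```python
import numpy as np

# Point-dipole sketch of the magnetoelectric multipolization:
#   M_zz = (1/V) * sum_a z_a * m_a,z  over the magnetic unit cell.
# Like the bulk polarization in the modern theory, the value depends on the
# basis choice: moving a site by a lattice vector changes M_zz by (m*c)/V.

def multipolization_zz(positions_z, moments_z, volume):
    return float(np.dot(positions_z, moments_z) / volume)

c, vol, m = 1.0, 1.0, 1.0            # lattice constant, cell volume, |moment|
z = np.array([0.25 * c, 0.75 * c])   # two antiparallel sites in the cell
mz = np.array([+m, -m])

M1 = multipolization_zz(z, mz, vol)
# Same crystal, different basis: translate the second site down by one c.
M2 = multipolization_zz(z - np.array([0.0, c]), mz, vol)
print(M1, M2)  # -0.5 and 0.5: the two bases differ by the quantum (m*c)/V
```

The choice between the two values is fixed by which termination the tiled unit cell actually exposes, which is the point being made in the answer that follows.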
The point is that when you calculate this multipole, what you do is choose a bulk unit cell: if you take this bulk unit cell and tile it periodically, cutting it off at some point defines your surface, so it's basically the basis that defines the surface termination. You have to choose this bulk unit cell according to the surface termination, and if you use the bulk unit cell which corresponds to the actual termination, you get a magnetization that is much larger than what is found in experiment. But this is only a theoretical construct; all I'm saying is that this is the bulk unit cell that is more appropriate for calculating the experimental surface magnetization at room temperature, because at room temperature this magnetic moment is disordered, essentially paramagnetic. But is the calculation done for a bulk or for a slab? The calculation is done for a slab. Okay, other questions? Thanks for the interesting talk. I have a question regarding when you construct the different magnetic configurations for the surface states, especially when some of them are canted: I was wondering how you considered whether there are spin-spin correlations, and when you calculate the energy, would you allow for relaxation of these spin states? You're talking about this slide here? Yeah, for this one. So you're asking whether, rather than doing constrained calculations, I just relax the material and see what I get, basically?
Yeah, I think especially for this one. I'm just curious: when these spins are pointing in different directions, how do you consider the interactions that would have a different size, or a different effect, between the sites, because the orientations of the spins are different? How is this effect taken into consideration? Sorry, I'm not quite sure I understand your question... so, yeah, for sure you're right, the energy changes as a result: I mean, you get canting and you get Dzyaloshinskii-Moriya interactions and such, because the spins in the center are still pointed towards [001], but I guess we didn't worry about the different spin correlations affecting the energy change; all we wanted to see is whether the energy lowered if we induced a magnetization on the surface. So, just to make sure, all these spin directions are fixed during your calculation? They are fixed, yeah. Thank you. Anyone else has a question? No? Okay, so thank you very much again. Okay, so this is set up. Okay, so the next talk, again about magnetism, is by Anita Halder from Trinity College Dublin, and the topic is tuning the magnetic anisotropy at metal-molecular interfaces, so please. Good afternoon everyone, I'm Anita Halder, a postdoc in Trinity College Dublin; the title of my talk is tuning the magnetic anisotropy at metal-molecular interfaces. Let me start by thanking my advisors in Trinity College, Dr.
Andrea Ducati and Professor Stefano Sanvito. I also want to thank my collaborators Shumantho and Professor David O'Regan, and I also want to thank the organizers for giving me the chance to present my work. Let's start by defining magnetocrystalline anisotropy: it is basically the dependence of the energy of a magnetic system on the magnetization direction, that is, a preference for a certain magnetization axis. This is a purely relativistic effect coming from spin-orbit coupling. That preferred axis is called the easy axis; the easy axis can be in plane or out of plane, that is, perpendicular to the plane, and perpendicular magnetic anisotropy is a key ingredient in technological applications, for example in permanent magnets and memory devices, and the design, fabrication and characterization of materials with perpendicular anisotropy have drawn a lot of attention from the scientific community. If you think about memory devices, the recording density has increased by several orders of magnitude compared to when they were first launched, and to keep increasing that kind of efficiency there are two requirements: you need high saturation magnetization and you need strong out-of-plane anisotropy. In fact, you can classify a magnet as hard, semi-hard or soft depending on the values of its K1 and saturation magnetization. You can see from this plot that most of these magnets are transition-metal and rare-earth intermetallics: the transition-metal part gives you the high saturation magnetization, while the rare-earth part is responsible for the high anisotropy. These are sometimes grown as thin films to enhance the perpendicular anisotropy, but with increasing film thickness a significant suppression of the out-of-plane anisotropy is seen. So is there an alternative way to tune the perpendicular anisotropy? What about metal-molecule interfaces? Organic molecules are already quite popular in spintronics
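As a toy illustration of the definitions above, the lowest-order uniaxial anisotropy energy can be written E(θ) = K1·sin²θ, with θ measured from the film normal; in this (common but here assumed) sign convention, K1 > 0 means the energy is minimized at θ = 0, i.e. an out-of-plane easy axis, while K1 < 0 means an in-plane easy axis. A minimal sketch, with purely illustrative numbers on the meV-per-atom scale discussed later in the talk:

```python
import numpy as np

def anisotropy_energy(theta, K1):
    """Lowest-order uniaxial MCA energy E = K1 * sin^2(theta),
    with theta measured from the film normal (illustrative convention)."""
    return K1 * np.sin(theta) ** 2

def easy_axis(K1):
    """K1 > 0 favors theta = 0 (out of plane); K1 < 0 favors in plane."""
    return "out-of-plane" if K1 > 0 else "in-plane"

# Illustrative values only (not the talk's actual numbers):
print(easy_axis(-5.5))   # a bare slab with negative K1 -> "in-plane"
print(easy_axis(+0.5))   # a strongly hybridized interface -> "out-of-plane"
```

The sign flip of K1 is exactly the in-plane to out-of-plane switching the speaker reports upon molecular adsorption.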
for 3d transition metals, if you consider their thin films, it is mostly the surface properties that dominate. If you consider a 3d transition metal in its bulk form, its magnetic anisotropy is almost zero, or very small if present at all; at a surface, due to the reduced symmetry, there is an enhancement of the perpendicular anisotropy. Now, what about putting a molecule on top of the surface? This indeed works, and in the last few years there have been lots of investigations in this direction; I just want to highlight a few of them. This PRL paper talked about cobalt-C60: they deposited the molecule C60 on top of HCP cobalt and found an enhancement of the out-of-plane anisotropy. The same group extended their study to another molecule and again reported an enhancement of the perpendicular anisotropy. Then in this paper they made a heterostructure of cobalt and graphene, and a massive enhancement of the perpendicular anisotropy was reported. Similarly, this paper reported on two molecules, C60 and Alq3, deposited on cobalt layers, and there are many more: the same kind of heterostructure with graphene, cobalt and heavy metals, and also magnetic hardening, that is, strengthening the magnetic exchange interaction by depositing non-magnetic organic molecules. So what is the scope of this work? To date the studies are mostly based on a qualitative comparison between DFT and experimental findings, but to understand the correlation between interfacial magnetocrystalline anisotropy and metal-molecule hybridization a detailed analysis is required, and that can be done with a systematic investigation of model systems. So in this study, to fill this gap, we studied the modification of the magnetocrystalline anisotropy of FCC cobalt thin films upon the adsorption of different non-magnetic molecules. We used a simple model to understand the basic physics, and then we also quantitatively calculated, from first-principles DFT, the modification for different prototypical systems. With that I come to my results. I will start with a very simple system: let's take the simplest molecule, benzene. I considered 5 layers of FCC cobalt, put a benzene molecule on top, and relaxed the system, constraining the bottom 2 layers; this is the geometry, as you can see. The first thing I wanted to see is the change in the electronic structure when you put a molecule on top of the cobalt surface. The black curve in the density of states is for bare cobalt: the majority spin channel is completely occupied for all the orbitals, but the minority spin channel is partially occupied. Now, if you put a molecule on top, the dz² orbital is modified quite a lot; in particular you can see the formation of bonding and anti-bonding states between the carbon pz and cobalt dz² orbitals. The anti-bonding states are mostly here, spanning a wide range of energy and mostly of dz² character, and the bonding states are over here, mostly of carbon pz character. Next we calculated the anisotropy energy: we took the difference in the total band energy between two different magnetization directions. We found that for FCC cobalt bulk there is no anisotropy; a slab of 5 layers shows negative anisotropy, that is, in-plane anisotropy; and upon molecular adsorption the anisotropy drastically reduces, from almost minus 5.5 to minus 0.6. In the next step I removed the benzene molecule without re-optimizing the structure, so this is a kind of distorted cobalt slab; I calculated the anisotropy again and I found it became again
in-plane, very similar to bare cobalt. That tells me that the drastic reduction in the anisotropy is basically due to the metal-molecule hybridization, and that the atomic displacement of the top layer has only a very minor influence. Next we calculated the layer-wise anisotropy, which can be obtained from ΔE_SOC: you can calculate ΔE_SOC from the SOC matrix elements, and if you sum over orbitals you get the atom-wise value; you then take the difference between two magnetization directions, which is proportional to the anisotropy of a particular atom. In this plot the red curve is for bare cobalt, and you can see that all the atoms in all the layers have negative anisotropy, which is why bare cobalt in total prefers in-plane. Now, if you put a benzene molecule on top, the atoms which are directly bonded to the molecule show an out-of-plane component, and some of the atoms in the second layer, which I indicated here, also show out-of-plane anisotropy; these actually cancel the in-plane anisotropy of the inner layers, and as a whole the total anisotropy is very close to zero for the benzene-cobalt interface. Next we also plotted the transition matrix elements between the different d orbitals: for bare cobalt the strongest negative anisotropy comes from the dyz and dz² orbitals, and if you put benzene on top of cobalt that quantity goes to zero. Now, to understand the basic physics, we modeled it in a very simple way: from second-order perturbation theory you can write the magnetocrystalline anisotropy like this, where ψo and ψu correspond to occupied and unoccupied orbitals, and you can work out analytically whether they promote in-plane or out-of-plane anisotropy. We consider the energy levels of the d orbitals as shown; I just want to mention that, since our majority spin channel is always filled, we did not include it in our simple model. We took the crystal-field splitting in this way, which is very similar to the result we obtain from the DFT calculation: these two levels are degenerate, and again these two levels are degenerate, and we take that into account in our model. If you then calculate the total anisotropy, it comes out like this, and the negative sign shows that already our simple model gives in-plane anisotropy, which is what comes out of our DFT calculation as well. Now, when you put benzene on top of cobalt, what happens due to hybridization is that bonding and anti-bonding states form; the splitting from this orbital to the new anti-bonding state is Δ′, and if you calculate the MCA it becomes this. Depending on the value of Δ′ it can be positive or negative: if the metal-molecule hybridization is weak, the total quantity will be negative, that is, in-plane, but if you have a very strong hybridization there is a chance of switching from in-plane to out-of-plane anisotropy. Next, we related our calculated orbital moments to a simple model called the Bruno model. The Bruno model basically relates the orbital moment to the total anisotropy: the easy axis of magnetization coincides with the direction of maximum orbital moment, and this is exactly what we got from our DFT calculation. If you look at the red dots here, for bare cobalt the in-plane orbital moment is greater than the out-of-plane orbital moment, and that's why it prefers in-plane anisotropy. Upon benzene adsorption, the atoms which are attached to the molecule show a sharp drop in the in-plane orbital moment, and that's why they prefer out-of-plane anisotropy. Then we also compared the anisotropy obtained from Bruno's model with what we
obtained from our DFT calculation, and it shows good agreement; there is some quantitative difference for bare cobalt, but for the benzene-cobalt system the agreement is quite good. Next, after understanding the basic physics of the metal-molecule hybridization and how it affects the total anisotropy, we wanted to study how the chemical reactivity of the molecule affects the total anisotropy. So we considered a different molecule, which is more reactive than benzene; it has a larger hybridization, and that is reflected in the total density of states: as you see here, the formation of bonding and anti-bonding states is more distinct, so we should expect a larger effect on the total anisotropy. We then calculated the total anisotropy, and I found that for the COT-cobalt interface there is an actual switching from in-plane to out-of-plane. This is because, due to the strong hybridization, the Δ′ I mentioned in my simple model is such that the total anisotropy becomes positive. If you plot the layer-wise anisotropy, you find something very similar to benzene, but here the magnitude for the four atoms which are bonded to the COT molecule is much larger than for benzene. I also plotted the transition matrix elements between the different d orbitals and found that the element between dyz and dz², which was negative here and became very small or zero for the benzene-cobalt system, in contrast became positive here, so in total it gives positive anisotropy. Next I also studied how the molecular coverage affects the anisotropy of the slab: for that I considered different molecules with increasing surface area, and the trend is very clear; especially if you consider the anisotropy of the top surface layer, you find an in-plane to out-of-plane switching. Next I also studied the dependence
of the anisotropy on the cobalt slab thickness: for that I considered different numbers of cobalt layers, from 3 to 10. Though the magnitudes oscillate and differ for different numbers of layers, the trend is very similar for all of them: for bare cobalt the anisotropy is negative, and when you put a molecule on top there is a suppression of the in-plane anisotropy. This is one representative case, the layer-wise anisotropy for 8 layers, and it is very similar to the result I showed you for 5 layers. I also studied another facet of cobalt, HCP; for HCP, in contrast to FCC, the bare cobalt slab already has positive anisotropy, so when you put a molecule on top you are basically adding positive anisotropy, enhancing the out-of-plane anisotropy, but the physics I described, coming from the metal-molecule hybridization, is the same. Then finally I did some calculations for two metal-molecule interfaces which have been observed experimentally. The first one is the C60-cobalt system: I considered 4 different configurations and found this one to be the lowest in energy, and you can see from this layer-wise plot that again the atoms which are bonded to the molecule give an out-of-plane component of the anisotropy, but with varying magnitude, and that magnitude depends on the strength of the hybridization. Just to show this, I considered 2 different atoms, one with the highest anisotropy and this other one: if you look at the density of states, the first atom clearly shows the formation of anti-bonding and bonding states, whereas for this atom the hybridization is comparatively very weak, which is why it is very close to the bare cobalt system. Next I did it for another system, as I said, Alq3 on cobalt; from experimental results it has been reported that it favors out
of plane anisotropy, and as you can see here, the atoms bonded to the molecule show perpendicular anisotropy; I have chosen 3 different atoms just to show their degree of hybridization with the molecule, and the strongest hybridization goes with the highest anisotropy. With that I come to my conclusions. I studied the modification of the magnetic anisotropy of thin cobalt films upon the adsorption of molecules; the main physical mechanism responsible for the modification is the cobalt dz² and carbon pz hybridization. We used a simple model based on second-order perturbation theory to identify the electronic transitions promoting in-plane or out-of-plane anisotropy, and we showed that the suppression of the in-plane anisotropy depends on the choice of molecule, and that it can even switch from in-plane to out-of-plane, as for the COT molecule; there is also the possibility to engineer out-of-plane anisotropy by increasing the molecular coverage. We are preparing the manuscript; I think it should be online within one or two weeks. I want to thank the funding agencies and computational resources, and thanks for your attention. Thank you very much for this nice talk. Yes, please. Thank you for your nice talk, I have a question: you mentioned that you constrained some bottom layers of your cobalt; I just want to know if you applied a dipole correction in order to obtain a correct adsorption energy. And another question: as cobalt has 3d orbitals, I want to know if you used DFT+U, and which value of U you used. For the first question: I applied the dipole correction and then calculated the anisotropy, but I found almost no change, so I did it for one system and then proceeded without the correction for the other systems. And what was the second question? DFT+U? Cobalt is not that strongly correlated, so these calculations are GGA; I
didn't use any +U correction. Okay. Thank you for a very nice talk. My question is about the orbital moments: at some point you mentioned this Bruno model and you showed this plot where there is an orbital moment for each atom, right? Yeah, this one. So you are calculating a local orbital moment, and the question is how you do that, because this comes from the DFT, right, so it's the DFT calculation which is behind the orbital moment. I mean, I did switch on the spin-orbit interaction, and that gives you the orbital moment for the magnetization. But how do you define that it's associated with that specific atom? Specific atom? Probably I didn't get your question. So there is an orbital moment and an atom index, so I imagine each point is associated with a specific atom. I mean, when you switch on the spin-orbit interaction you get the orbital moment per atom, for each and every atom, the same way you also get the spin magnetic moment; so if you switch on the spin-orbit interaction you also get the orbital moment for each and every atom, and that is what I considered. Thanks. Okay, further questions? No? Well, I have one myself. When you put these kinds of closed-shell molecules, like benzene or C60, on a metal surface, the strength of the interaction depends very strongly on the level of theory you are using, whether you include van der Waals or not and all these kinds of things, which as a follow-up will change very much how strongly distorted your surface atoms are, and so on. Have you looked at the influence of these things on your results? I didn't look at the influence of this, but I found, especially for HCP cobalt, a strong effect of the surface distortion, while for FCC cobalt there is almost none. If you see this plot, yeah, this one: I put the molecule and the surface atoms got distorted because of the bonding, then I removed the molecule and didn't optimize the structure, and I calculated
the total anisotropy, and it is very similar to bare cobalt, so it shows that the main reduction comes from the metal-molecule hybridization. Yes, and then it would be interesting to see how strongly this depends on the surface-molecule distance you have. That I did, actually; I don't have the result here, but I did it: I just shifted the molecule away from the surface... This will depend very strongly, as I said, on... yeah, I mean, this goes from chemisorption to physisorption, you mean? But I mean, different functionals will give you very different distances. Okay, I didn't check different functionals. Okay, no further questions? So thank you very much again. Okay, so we will now switch gear and move from spins and magnetism to quantum computing. Okay, but these are not your slides we are seeing... it should be... how strange... are you sure you have connected it? Okay, I can reconnect it, that should help us on the way. He will answer the question of what quantum computers can do for materials science, and I very much hope that the answer will not be that all DFT people... okay, so please. Well, thanks for the introduction, and I would like to start by thanking the organizers for inviting me here. First, let me say a few words about where I work. I work in a private company which traditionally sells high-performance computers, the kind of computers where you run your DFT computations, and it has customers like you who want to understand whether they could get some help from having quantum processors inside these HPC systems. In the quantum R&D lab where I work, the question we are trying to address is if and how we could make use of some quantum processing power inside a supercomputing architecture. In fact, last time I came to Trieste I was on the classical side, trying to solve materials science problems with classical methods, and I kind of always ran into an exponential wall, and as you all know, when you do strongly correlated
physics there is always an exponential somewhere: if you do a diagonalization of your Hamiltonian or FCI techniques, there is an exponential in the size of the system; if you do quantum Monte Carlo, there is an exponential in size or in temperature or both; and if you do tensor-network techniques, there is an exponential in the entanglement of your wave function. So whatever you do on a classical computer, there is an exponential somewhere; in your computations you always try to find smart ways to go around the exponential problem, but in some regimes of interest you are stuck with this exponential wall. On the other hand, since a couple of years we have the first prototypes of what people call quantum computers. These quantum computers are physical experiments, with for instance superconducting circuits, trapped ions, spins trapped in quantum dots, photonic circuits, or trapped Rydberg atoms, and the promise of these quantum computers is that, since they have quantum physics inside, they should not have an exponential wall, because they have entanglement built in, in a way. I put a question mark here because what I will try to show you today is that quantum computers do have promises of some speedups compared to classical computers, but they also have an exponential somewhere, in the form of decoherence, which kills the quantumness of the machine as time goes by, and that's a big issue today that people are trying to solve to make them useful for strongly correlated problems. So the outline of my talk is in two parts: first I'm going to go more on the optimistic side, let's say, and tell you what quantum computers can do today to solve many-body problems, and then I will try to shed some light on an experiment that you probably heard about: a couple of years ago Google claimed they had reached so-called quantum supremacy, namely that they had done something
that a classical computer could not do, and I will argue that their claim can actually be countered by using many-body techniques on a classical computer. So let's see what quantum computers can do for many-body problems. As I said, quantum computers are just controlled many-body problems; they are more or less controlled, they are more or less open, but they are many-body problems, which means that from a theory point of view what they realize are strongly correlated problems: for instance, superconducting circuits realize a Bose-Hubbard model, trapped ions realize a spin-boson model, trapped spins in a quantum dot realize a Fermi-Hubbard model, or in the low-energy limit the Heisenberg model, and similarly Rydberg atoms, depending on which levels you are looking at and how they couple to each other, may realize an Ising model or an XY model. Why am I making this list of models? To tell you that, since these are synthetic many-body problems, they can explore exotic portions of large Hilbert spaces, and this is what you want to explore when you want to solve materials science problems. Now I'm going to introduce a bit of vocabulary about quantum computers. The building block of a quantum computer is what is called a qubit, and a qubit is nothing more than a quantum two-level system, or a spin, with a level zero and a level one, which you may think of as spin down and spin up if you want. The key property of this two-level system is that it can be in a linear combination of the two basis states, a quantum superposition. When you put several of these qubits together, the wave function can be decomposed, for instance for two qubits, as shown here, and you see that if you have two qubits you have four components in the wave function; if you have n qubits you have 2^n complex coefficients, which makes it hard to simulate a quantum computer on a classical computer, because in principle you need to store 2^n complex coefficients, which of
course you try not to do in practice. When people say they want to do a quantum computation, what they actually do is manipulate the state of their quantum computer by evolving the Schrödinger equation, and the degree of freedom they have in order to manipulate this state |ψ⟩ is to devise a clever H(t): the Hamiltonian as a function of time is what they change over time to reach some sought-after state, and at the end of this evolution they can measure the state. A second key property of quantum computers is that measurement is probabilistic, as you know, meaning that if you are in this superposition state and you measure along the z axis, you are going to get, for instance, state 0 or 1 with a probability given by the squared modulus of the coefficient; that's just quantum mechanics 101. So you may have heard of quantum circuits: quantum circuits are just a language to describe what your Hamiltonian as a function of time looks like. For instance, if you see such a diagram, it means that you have five qubits, because you have five lines, so five two-level systems; and if a box here, which is called a gate, acts only on this qubit, it means that the Hamiltonian it corresponds to acts only on the first Hilbert space, so it is, for instance, σx times identity. And if you see a gate here that acts on qubit 1 and qubit 3, it means that the corresponding Hamiltonian acts only on the first and the third qubit, the first and third Hilbert spaces. That's what people have in mind when they draw such a quantum circuit. Now there's an important distinction to be made, which I will use throughout the talk: the quantum computers that we have today look more like these boats here, which you may see in the Trieste sea in the summer; as you can see, it's a boat which is quite small
and not very stable. This is what the quantum computers of today are like, meaning that they have a very small number of qubits, from 50 to 500, and they have what is called decoherence, meaning that a state in superposition like (|0⟩ + |1⟩)/√2, which we could say is 0 and 1 at the same time, like Schrödinger's cat, becomes 0 or 1, dead or alive, after a characteristic time, which limits the number of gates that you can actually run on your quantum computer. In practice today, it means that you can run quantum circuits, or quantum programs, of a hundred to a thousand gates, and after that your state basically becomes classical again, so you've lost the quantum properties, which of course is not what you want. In the future, what people would like to have is more something like this: a big machine which is stable and large. With such quantum computers, with many qubits of good quality, what people hope to do is something called quantum error correction, a process by which you group qubits together so that together they are better than a single individual qubit. This is shown in this graph here, where I plotted the logical error, which is the error of several qubits grouped together, versus the physical error, the individual error rate. You see that when the physical error goes below a given threshold, so when your qubits are good enough, putting them together actually helps: when I put more physical qubits together, the curves go down, so the error rate gets better. That's a very important property, and it's actually the holy grail of today's hardware makers: to get physical error rates below the error threshold, because then they know that when they group qubits together they will become better and better, even exponentially better. But today we're standing basically here, in this region, so today we cannot do quantum
error correction. So what I'm going to talk about today is what one can do with so-called NISQ quantum computers, and the goal that I'm going to set for myself is this: let's say I want to solve the electronic structure problem. I'm going to write it in a second-quantized formalism, with a kinetic term and an interaction term, and if I have N orbitals, you see that I have N^4 terms in my electronic structure Hamiltonian, which means that, roughly speaking, my Hamiltonian is a sum of on the order of N^4 terms. That's what I'm going to start with, and what I want to do is find, for instance, the ground state of this Hamiltonian; if I can do that, then probably I can extract forces, and at some point I would like to have access to excited states to do spectroscopy, and I may also want to do dynamics, for instance, but today I will mainly focus on ground-state energies. Something one would like to do with a quantum computer which has no errors, or which is error corrected, which we don't have today in the lab, is called quantum phase estimation; let me tell you a bit about how this is supposed to work. Suppose we define a unitary U = exp(−iH), where H is the Hamiltonian whose energy I want to find. If I find an eigenstate ψ of this Hamiltonian, this is also an eigenstate of the unitary, such that if I apply the unitary to ψ I get a factor exp(−iE), so the energy E that I'm looking for plays the role of a phase. Now, if you want to find a phase in physics, what you do is interference, or interferometry. This is a Young-slit kind of diagram: I shine a light, and with my two slits I dephase the two beams with respect to each other, and I see an interference pattern on my screen; and if I put several slits together, I see an interference pattern which is more peaked, so that I can have a more precise estimation of my phase. Well, what a quantum
computer does with quantum phase estimation is precisely interferometry, and the quantum circuit that is run on the quantum computer looks like this. You have to think of this qubit zero as the light you start with, which you split as with the two Young slits. After this first gate (not the Hamiltonian, but a gate called the Hadamard gate) you are in a superposition state. Then you have a gate which is a controlled unitary; you don't really need to know what it does, but after this gate the state of your qubit is zero, the first ray of light if you want, corresponding to the first beam, plus some dephasing times one, corresponding to the second beam. Then you apply a Hadamard gate again to have those two beams interfere with each other, as you would with a Mach-Zehnder interferometer, and if you measure this qubit, the probability of getting zero is the famous interference pattern. Now, if you replicate this idea to a case with many slits, you have more qubits to do the interferometry experiment, and at the end what you apply is not a Hadamard gate but an inverse Fourier transform: as in interferometry, what you read on the screen is basically the inverse Fourier transform of the transmitted intensity. What you read out is the phase of your unitary, which is the energy of your system, and when you do this procedure you actually get an exponential advantage with respect to what you can do on a classical computer. Now, this is all good, but in practice you cannot really do it on today's computers. There are many reasons for this; I can give you some. We need to prepare an eigenstate to start this phase estimation procedure, which is complicated to do.
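As an editorial aside: the single-qubit interference at the heart of the circuit described above can be checked in a few lines of NumPy. This is an illustrative sketch, not the speaker's code; the Hadamard, controlled-phase, Hadamard sequence yields P(0) = cos^2(phi/2), exactly the fringe pattern of the two-slit picture:

```python
import numpy as np

def prob_zero(phi):
    """Probability of measuring 0 on the control qubit after H, phase kickback, H."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    # Phase kickback on the control qubit: |0> -> |0>, |1> -> e^{-i phi} |1>
    P = np.diag([1.0, np.exp(-1j * phi)])
    state = H @ P @ H @ np.array([1.0, 0.0])       # start in |0>
    return abs(state[0]) ** 2

# The interference pattern: P(0) = cos^2(phi/2)
for phi in (0.0, np.pi / 2, np.pi):
    assert abs(prob_zero(phi) - np.cos(phi / 2) ** 2) < 1e-12
```

Sweeping phi traces out the fringe; phase estimation with more qubits reads the energy off a sharper version of exactly this pattern.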
To prepare an eigenstate you need to know it already, or you need to do an adiabatic preparation, for instance, which is a long process, so decoherence will probably kill your state before you have prepared it. Now suppose you have prepared it: then you need to do the controlled unitary operations of the phase estimation procedure, and these give rise to very long quantum circuits, too long for the coherence times that we have today. Finally, the Fourier transform is also an operation that requires a number of gates incompatible with today's coherence times. So phase estimation is very nice in textbooks, but we cannot really do it today. Now let me focus on what we can actually do today, which is something called the variational quantum eigensolver. It's not new; it's just a variational method. You give yourself a family of variational states, where theta are the variational parameters, and you know that the matrix element of H in psi of theta is larger than the ground-state energy. So what I'm going to do is optimize theta in order to minimize this variational energy, and the way I do it on a quantum computer is by a combination of a quantum computation and a classical computation. I use my quantum computer to compute the energy of the quantum state; this is a good idea because the quantum computer is probably better suited to do this than a classical computer. Then I take my energy and find new parameters, and so on and so forth until my variational parameters converge; the classical computer provides the new parameters. In practice it works like this: I start with some variational parameters, I prepare a state on my quantum computer using a quantum circuit with some parameters, which can be rotation angles for instance, and at the end I can measure some observables.
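That hybrid loop can be mimicked entirely classically. Here is a minimal sketch of my own (not the speaker's code): a one-parameter ansatz is optimized against a made-up two-level Hamiltonian, with a finite-difference gradient step standing in for the classical outer loop:

```python
import numpy as np

# A made-up two-level "electronic structure" Hamiltonian (Hermitian)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Variational energy <psi(theta)|H|psi(theta)> for the ansatz (cos t, sin t)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical outer loop: update theta from a (here noiseless) gradient estimate,
# as the quantum-classical scheme described above would do with measured energies.
theta = 0.3
for _ in range(300):
    grad = (energy(theta + 1e-5) - energy(theta - 1e-5)) / 2e-5
    theta -= 0.1 * grad

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
assert abs(energy(theta) - exact) < 1e-6
```

On real hardware the call to `energy` would be a batch of circuit repetitions with statistical noise; everything else in the loop runs on the classical side.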
These observables, combined together, give me my Hamiltonian; they are the O_i terms, and there are N to the 4 of them if I have N orbitals. I repeat the measurement over and over again to get my averages, so there is a statistical error here. Then I combine the O_i together on my classical computer, I get my energy, and based on the energy or its gradient I get new parameters, and so on and so forth. In principle this is cheaper than a classical computer, because on the classical computer this step would require an exponential computation, at least in principle. The good part of this method is that I can choose my variational state so that it's compliant with the limitations of my quantum computer: I can choose a circuit short enough to fit within the coherence time of my quantum computer. I can also be clever about how I initialize my variational parameters, I can pick an active space, all the tricks that I can do in classical computations, and I can be clever about how I group the terms in my Hamiltonian: for instance, if I know that some operators commute with each other, I know I can measure them simultaneously, and so on and so forth. So the point is that today, with this method, we can do first proof-of-principle computations for chemistry or materials science on very small molecules, molecules whose size would probably make you laugh compared with what you can do on a classical computer; but nevertheless these are proofs of principle, you can also do excited states with this method, and most importantly these proofs of concept are starting points for the more sophisticated methods I will talk about a bit later. Of course there are some downsides. As always with variational methods, if you have too many parameters you may get stuck in plateaus of the landscape, for instance; and there are other issues.
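The grouping trick mentioned above relies on checking which Pauli strings commute. A small sketch of my own of the standard rule (not any particular library's API; note that simple measurement schemes often use the stricter qubit-wise version): two Pauli strings commute exactly when they differ, both being non-identity, on an even number of sites:

```python
def paulis_commute(p, q):
    """Two Pauli strings commute iff they anticommute on an even number of sites."""
    clashes = sum(1 for a, b in zip(p, q)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

# 'XX' and 'YY' clash on two sites, so they commute and can be grouped together
assert paulis_commute('XX', 'YY')
# 'XI' and 'ZI' clash on one site, so they do not commute
assert not paulis_commute('XI', 'ZI')
```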
You may have too much shot noise, in the sense that when you compute an average in quantum mechanics you measure a finite number of times, which means you make a statistical error instead of getting the true expectation value. And if you use a very short circuit to prepare a state, the state you're going to prepare is probably not so interesting: it's probably not very entangled, because if you don't have enough time to entangle your system, the state you produce is not entangled enough. So today this method is probably too slow to be competitive with classical methods, and not accurate enough, due to the intrinsic noise and decoherence of today's quantum machines. But let me nevertheless give you a small example. Take a CO2 molecule; I would like the energy landscape with respect to the bond length and the angle between the two bonds. I have my Hamiltonian, and I know that the classical gold-standard method for this type of molecule could be, for instance, coupled cluster with singles, doubles, and maybe perturbative triples. Now I can turn this method into a quantum method by making it unitary, because since I'm doing a Schrodinger evolution, I'm always doing unitary evolutions on a quantum computer. Therefore the ansatz, the variational state that I'm going to propose, is something called unitary coupled cluster, which is basically a unitary version of coupled cluster theory. That means I need to prepare on my quantum computer a state which is the exponential of these terms times the Hartree-Fock state, for instance, and then I'm going to optimize the parameters theta. There's a whole machinery for how you prepare those states; one thing you need to do is convert this exponential into a circuit, and for this you use a very simple trick that we know
from classical theoretical physics, which is to decompose T minus T dagger as a sum of operators, then approximate the exponential of a sum as a product of exponentials, and for each term build a small quantum circuit. This is something people know how to do routinely today; the problem is that it gives very long circuits. So what people have come up with in recent years are methods to reduce the size of the circuits without losing accuracy. One example is an iterative technique where, instead of having a variational state that is fixed from the onset, they grow the size of the variational state adaptively: when they want to grow the variational state, the new gate they add is the one for which the gradient of the energy with respect to the new variational parameter is largest, so that their gradient descent goes as fast as possible. This can give quite small circuits. Here are results we obtained in our group, where you can see the ground-state energy as a function of the variational iteration for several molecules, up to 20 qubits, which means 10 orbitals (10 orbitals times 2), and you see that you can reach chemical accuracy, a couple of millihartree, really fast with this kind of method. Of course I should warn you that these are simulations without noise, without decoherence, so it's all good in a way. Let me give you yet another example, another game one can play in order to make the most of what we have today as quantum computers. Suppose I start with my electronic structure Hamiltonian. Of course I have a freedom in how I choose the basis in which I write down this Hamiltonian: I can do a basis rotation from, let's say, the operators c, c dagger to c tilde, c tilde dagger, and this doesn't change my problem, it just changes the representation of the wave function and of the Hamiltonian. And of course I can try to exploit this freedom.
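The product-of-exponentials trick described above is a first-order Trotter splitting. A quick self-contained check of my own (with illustrative 2x2 anti-Hermitian generators, nothing molecule-specific) shows the splitting error shrinking as the number of steps grows:

```python
import numpy as np

def expm(m):
    """Matrix exponential via eigendecomposition (fine for the normal matrices used here)."""
    w, v = np.linalg.eig(m)
    return v @ np.diag(np.exp(w)) @ np.linalg.inv(v)

# Two non-commuting anti-Hermitian generators, standing in for terms of T - T^dagger
A = 1j * np.array([[0.0, 1.0], [1.0, 0.0]])   # i X
B = 1j * np.array([[1.0, 0.0], [0.0, -1.0]])  # i Z

def trotter_error(n):
    """Distance between exp(A + B) and the n-step first-order product formula."""
    step = expm(A / n) @ expm(B / n)
    return np.linalg.norm(expm(A + B) - np.linalg.matrix_power(step, n))

# First-order Trotter error decreases roughly like 1/n with the number of steps
assert trotter_error(64) < trotter_error(8) < trotter_error(1)
```

Each small factor in the product maps to a short gate sequence; the price of accuracy (more steps) is exactly the circuit-depth problem the adaptive methods try to tame.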
Why? Because different bases have different sparsity, and I can use that for my quantum computation. Let's suppose the state I'm looking for in my original basis is 1 1, or up up, and suppose my new basis is c0 plus c1 over square root of 2 and c0 minus c1 over square root of 2; then the same state written in the new basis has more terms. Now, if I ask you to build such a state on a quantum computer, the first form is much easier than the second, because more terms means more gates in the circuit. In one basis you have a very short circuit, so decoherence is not going to kill your computation, while in the other basis decoherence will kill it. So you'd better find the basis in which your wave function is written most compactly, and there is one well-known basis for this, the natural orbital basis, which is used a lot in quantum chemistry and in materials science: it is the basis that diagonalizes the one-particle density matrix. What you gain, basically, is that if you go to the natural orbital basis you need shorter circuits, and therefore less coherence, to build the state. Of course it comes at a price: the Hamiltonian expressed in this basis is probably not as sparse as the one you had in the original basis, so there's no free lunch; but basically you have put more burden on the classical side and less on the quantum side. This is the game people are trying to play today, adapting to the coherence they have on their quantum computer by adding more classical power; it's really a game of hybridizing the two computations. Another example of this hybridization of classical and quantum computation: suppose you want to tackle a problem that I do not need to present here, the Hubbard model. One way would be to attack it directly on a quantum computer; another, more clever, way would be to use an embedding method.
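The natural-orbital construction mentioned above is just an eigendecomposition of the one-particle density matrix. A minimal sketch of my own, with a made-up two-orbital density matrix (the numbers are purely illustrative):

```python
import numpy as np

# A made-up one-particle density matrix for two orbitals (real symmetric)
rho = np.array([[1.2, 0.4],
                [0.4, 0.8]])

# Natural orbitals = eigenvectors of rho; natural occupations = eigenvalues
occupations, natural_orbitals = np.linalg.eigh(rho)

# The rotation preserves the particle number (the trace of rho)
assert abs(occupations.sum() - np.trace(rho)) < 1e-12

# In the natural-orbital basis the density matrix is diagonal
rho_rotated = natural_orbitals.T @ rho @ natural_orbitals
assert abs(rho_rotated[0, 1]) < 1e-12
```

A state dominated by a few strongly occupied natural orbitals is exactly the compact form that needs the fewest gates to prepare.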
DMFT, Gutzwiller, slave bosons, DMET, or any of the embedding methods you may think of. This kind of method maps the original extended model, which can be in the thermodynamic limit, onto a model called an impurity model, or fragment, which is much smaller and can therefore fit on your quantum computer; you then solve this much smaller, yet still many-body, problem on your quantum chip. This is something we did in our group. As an example of the results you can get, I plot here the quasi-particle weight, which is an order parameter for the Mott transition, as a function of the Hubbard interaction. At some point you expect the quasi-particle weight to go to zero, which means you have gone from a Fermi liquid to a Mott insulator. What you see is the following: on a classical computer (in this case I can do this computation on a laptop) I would have gotten this black line, and what I get by emulating a decohering quantum computer is these yellow points, which as you can see are not perfect, far from the black line; but this is more or less the best you can do today, so it gives you an idea of what is possible if you tune your computation with some tricks. This is the conclusion of part one, which is the largest part of the talk: today we have hardware and software progress, we have many variational algorithms and sophistications of them, but we don't have useful quantum supremacy yet. Two years ago, or three now, Google claimed that they had reached quantum supremacy, and in the five remaining minutes I'm going to explain why, on the one hand, we have something which is not useful, and, on the other hand, some people claim we have very powerful quantum computers. So let me tell you what Google's game for supremacy was: they took their
computer, which had 53 qubits, and measured bit strings; they measured the state at the end of their computation. If they had a perfectly bad quantum computer, they would have gotten uniform bit strings: the probability of getting any one bit string would be one over the number of bit strings, which is 2 to the power n, here 2 to the power 53. So if they were to draw the histogram of their bit strings with a perfectly random quantum computer, they would have gotten a peak centered at one over 2 to the power n. If, on the other hand, they had a perfect quantum computer, they would have gotten a distribution called the Porter-Thomas distribution. The goal of their experiment was to see how far the bit strings they got out of their quantum computer were from the Porter-Thomas distribution, and to do this they measured a quantity called the cross-entropy benchmarking fidelity. We don't really need to know exactly what it is; what we can do is get a rough idea of this quality by simply taking the product of the fidelities of each of the successive operations they did in their quantum computer: two-qubit gates, one-qubit gates, readout. What I get with this back-of-the-envelope computation is a number of about 0.15%, where it would be 100% for a perfect quantum computer; the measured quality of Google's results was about 0.2%, which is also what they found in their computation. Yet they claimed that with such a quality, which is not good, they had something which was much, much better than what classical computers could do, and the reason they claimed it is that it's really hard to do a classical emulation of a quantum computation. There are many ways people do it; one way is to represent your quantum circuit as a tensor network: the boxes here
represent tensors, and you basically contract, or multiply, matrices with each other; since the matrix sizes are exponential, these computations are always exponential. Some people have tried to catch up with Google: the first estimates were that one could do this classically in about two and a half days, then others went to 20 days, 5 days, a couple of seconds; there were lots of attempts to reproduce Google's results. But they all had a flaw, which is that if Google were to add one qubit, or one layer of gates, to their computation, the run time of these methods would explode, because by default it's all exponential; so Google was right, in a sense, not to really care about those claims. What we came up with is another idea: let's use the fact that entanglement is what matters in quantum computations. In other words, if I have a classical state, which is not entangled, then it's very easy for me to do a classical simulation. Why? Because for a classical state, a product state, or a Slater determinant if you want, I can always represent the tensor corresponding to the wave function as a product of the amplitudes of each site. I can then refine this statement: if the state is weakly entangled, it's not a product of potatoes (the blobs in the tensor-network diagrams) but a sum of products of a few potatoes, which is called a matrix product state in the tensor-network parlance. And once I have a matrix product state representation of my state, I can try to compress it by reducing the bond dimension between these tensors. This is the game that we played, and I'm going to cut to the final slide to tell you the final results, which are here. The x axis is the compression that I do, or rather its inverse: on the left I compress a lot and the computation is quite fast, on the right I compress little and it gets longer. What you
see here in gray is the level of quality that Google could reach. My goal is to get into the gray area: if I get there, I have won the game. Let's focus on the dashed line, which is our best result. If I compress a lot, my average error per gate is 5%, and as I decrease the compression level, that is, increase the bond dimension in my tensor network, you see that I can go into the gray area. The time it takes me to get into the gray area is a few hours, so it's not the 200 seconds it took Google to do the computation; it's still more time. But my claim is that since my method is a tensor-network method, it doesn't scale exponentially: it scales linearly with the number of qubits, so I can go to 200 qubits and still produce samples drawn from the same distribution as Google's, in a time comparable with Google's. What allowed us to do this simulation of the Google experiment is that the entanglement level they had reached in their quantum computer was so low that it was easy to match on a classical computer with a tensor-network computation. I'm not saying this was an easy computation for us; we thought a lot about how to do this matrix product state representation in a sophisticated way. But it tells you that, with today's quantum computers, the level of entanglement you can produce in the wave function is not high enough to prevent you from doing the same thing better on a classical computer. Yet there is really steady hardware and software progress that makes us think that realistically, in a couple of years (I don't know how many, don't ask me), we will probably get to a point where you could use a quantum processor to speed up some parts of a quantum many-body computation. And with this I thank you for your attention, and sorry for the overtime. Thank you very much, Thomas.
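As an editorial aside before the questions: the compression step at the heart of the matrix-product-state game boils down to a truncated singular value decomposition. Here is a minimal two-site sketch of my own (a random state, not Google's circuit), showing how keeping fewer singular values trades fidelity for memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random (entangled) state of two halves, each of dimension 8, as a matrix of amplitudes
psi = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
psi /= np.linalg.norm(psi)

def compress(psi, chi):
    """Keep only the chi largest singular values across the bipartition (MPS-style truncation)."""
    u, s, vh = np.linalg.svd(psi, full_matrices=False)
    approx = (u[:, :chi] * s[:chi]) @ vh[:chi, :]
    return approx / np.linalg.norm(approx)

fidelities = [abs(np.vdot(psi, compress(psi, chi))) for chi in (1, 4, 8)]
# Larger bond dimension chi means better fidelity; chi = 8 is exact here
assert fidelities[0] <= fidelities[1] <= fidelities[2] + 1e-12
assert abs(fidelities[2] - 1.0) < 1e-12
```

The "average error per gate" versus bond dimension trade-off in the talk is this same dial, turned once per gate of the simulated circuit.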
Perhaps I'll use my position and ask the first question, because what is not clear to me is why with these quantum computers one can only do unitary propagations; what we would like to do is e to the minus H t, not e to the i H t. It's a very good point, and in fact people have tried to do imaginary-time propagation to get to the ground state, and it does work on very small problems. The problem is that since it's not natural on a quantum computer, where the evolution is unitary, you need to play games; there are tricks to do this imaginary-time evolution, and they incur an overhead which is in fact exponential in the correlation length of the state that you're trying to propagate in imaginary time. So by doing this kind of weird thing on the quantum computer, namely imaginary-time evolution, you have to pay an exponential price in the correlation length of your problem. So people are trying, and they have tried. You could also do it in a slightly more sophisticated way, with AFQMC-type computations: you do an imaginary-time evolution, but you transform your interaction term, and then you use your quantum computer to compute the overlaps that you need in AFQMC, using a guiding wave function. This is something Google tried, for instance, and they claim that it works well. So there is a way, in a sense, to do imaginary-time evolution, but it's more sophisticated. Good, so, questions from the audience? There is one here. Thanks for the very clear talk. I had a stupid question: in the example of the Hubbard model, you plot Z as a function of U, and for small U, or even zero U, the result is surprisingly bad. What is the bottleneck there? Yes, it's a good point. That's because we were too honest: what we did is a computation where we took the same variational state for all the different U's, meaning that we kept a circuit of fixed length to do the computation at small
U and at large U, which is too honest in a sense, because we know perfectly well that for small U a very short circuit, one that produces a state very close to a Slater determinant, would be enough. So if we had used the adaptive method that I mentioned just before, growing the size of the ansatz in an adaptive fashion, we would have a point which is right on top of the line. Really, the achievement of this method is that for U close to the Mott transition we get a point which is very close to the black line. But indeed, this is a very nice remark. Okay, other questions? Yes, on the other side. Thanks for the nice talk. I may have missed the point, but regarding the physical realization of the qubits, you said that you implement some kinds of models, the Hubbard model, the spin-boson model; but the quantum computer should be universal, in the sense that we may want to solve, I don't know, superconductivity, or some other ground state. Is this possible? There is an overhead in, let's say, mapping one problem like the spin-boson or the Hubbard model onto the machine; what is this cost? I understand that you can interface your quantum computer with a classical computer that does this mapping, you do the simulation and you go back; what is the effective cost of doing this in your implementation, and eventually in a real full quantum computer? Okay, it's a great question, and indeed it's something we have to solve before doing the computation. For fermions it's quite straightforward, in the sense that fermions are occupied or empty, which is very similar to a two-level system, so the mapping is quite straightforward. The price to pay, though, is that a term which is local in a fermionic Hamiltonian is going to be non-local in the spin language, the qubit language, because you have to pay for the anticommutation property of fermions. So for fermions it's quite straightforward, but you have to pay a price. For bosons it's
a bit less straightforward, because bosons have several levels. Basically, if you want to keep, let's say, up to 2 to the power m excitations for one bosonic mode, then you need m qubits, if I did the math correctly; so you can map a bosonic mode onto several qubits, and you choose the number of qubits to match the highest number of excitations you think there are in this particular mode. But you can do it. The difficult part is that there are several ways to do these mappings: to map a fermion to a qubit, for instance, there are at least three or four ways, and they have different properties; the differences between these mappings are, roughly speaking, visible in the Hamiltonian that you produce at the spin level. For instance, if you start with a Hubbard Hamiltonian, you have only local interactions; if you use one of the mappings, you get an interaction which becomes non-local, while if you use another mapping, called Bravyi-Kitaev for instance, you get something which becomes logarithmically local, that is, local plus some logarithmic corrections. This is something that people are also playing with and trying to optimize; it's a big field of study. Okay, are there more questions? We are on the other side again; anything more here on this side? Very interesting talk. I had a kind of naive question that I never really got straight in my mind. I was under the impression, and maybe you also touched on this, that if you had a state and you wanted to propagate it in time, that was clearly something sub-exponential; but in principle, if you wanted to find the ground state of a general many-body Hamiltonian, you still had to pay an exponential cost, even if done on a quantum computer. Is that correct?
That's correct. It's more or less hidden in the fact that when you do the phase estimation experiment there is a propagation, U to the power 2 to the power n, which is exponential in time. The difference is that if you do imaginary-time propagation on a classical computer, you also have to propagate for an exponential time to get what you want; but in addition you have a sign problem, which is also exponential, if you do, say, AFQMC or quantum Monte Carlo methods. Here you have to pay the exponential price of the duration for which you are doing the run, but there is no sign problem, because we are not on a classical computer. Maybe another important observation is that when you compare a classical computation to a quantum computation, what matters is the ratio of the run times of both, and this is actually something that people who do quantum computations don't show often: they say, oh, I did a molecule, but they don't say that on your laptop you can do this molecule in a much shorter time. So what matters is really the ratio, and also the scaling of this ratio. Okay, so thank you very much, Thomas, again. This concludes the session of talks, and I remind everyone that we have posters very soon, in some minutes in fact; the posters are outside this hall and in the corridor around it. And I feel that Nicola has something to say. Yes, for those that don't have their own poster: please, you can ask the guard to pick them up; there are still five people that have not taken their poster. Okay, that's it, so enjoy.