And people, I think, will continue arriving, but for the purpose of time we should start right on time. So good morning, good afternoon and good evening. I am Nicola Seriani, one of the organizers of this 20th edition of the Total Energy Workshop, together with Francesco Mauri from the Sapienza University of Rome and Tanusri Saha-Dasgupta from the S.N. Bose National Centre for Basic Sciences in Kolkata. So I welcome you here, and I give the floor to Tanusri, one of the other organizers, who will also say a few words before I leave the floor to ICTP's director Atish Dabholkar. So Tanusri, do you want to say something?

Hi, friends and colleagues from various parts of the world. Very good morning, good afternoon, good evening, whatever is appropriate. On behalf of Francesco Mauri, Nicola Seriani and myself, Tanusri Saha-Dasgupta: we are now at the 20th International Workshop on Computational Physics and Material Science: Total Energy and Force Methods, which will be held on an online platform. Due to the present pandemic situation, the total energy meeting this time is unprecedented: it is happening neither in Trieste, as done traditionally, nor on the originally planned dates, but on slightly delayed dates, and as an online micro version of the meeting; as you have also realized, it is not the full length of the meeting that we normally have. The decision on this online micro format was taken not so long ago, so we really had to work within a short time scale and on a new platform that has not been tested before; apologies in advance for any difficulties this may cause. As you have also noticed, this event will take place daily from 1.30pm to 5.30pm Central European time, to allow participation from many different time zones across the world. Besides three plenary talks, there will be 12 contributed talks and three poster sessions. I request you to attend all the sessions, as we believe all of them will be exciting, covering a vast range from method development to energy, magnetism, 2D materials, surfaces, and nanoscale, atomic and molecular systems. With these few words I would like to welcome all of you once again, looking forward to exciting presentations and stimulating discussions in the coming three days. Thank you very much.

Thank you, Tanusri. I will now give the floor. We also have the ICTP director here, in recognition of the importance of this event. So let's welcome Atish Dabholkar.

Thank you, Nicola. It's a great pleasure for me; I just want to say a few words. I think for an outsider like me it's impressive how far this field of computational physics and material science has come, that you can really design materials in this manner, on a computer. ICTP has supported this scientific community for a long time, and it's great to see that it has evolved to have a workshop with, as I understand, 950 participants today. That's quite impressive. I'm also very happy to have very distinguished speakers today: Professor Parrinello, Professor Spaldin and Professor Zunger. Some of them have been known to ICTP for a long time; for example, Michele is a Dirac Medallist and has been a member of our scientific council for a long time. So I welcome you all and I wish you a very productive and interesting meeting. Thank you.

So thank you very much, Atish, for these nice words. But let's also take a couple of minutes to say who is hosting us. What is ICTP?
So I will now briefly share my screen to show you where you are, at least virtually. I hope you see my slide. So this is where you are now, virtually: this is the ICTP, which was founded in 1964 by Nobel laureate Abdus Salam. Sorry, I cannot see; everything is dark now. It showed for a while, but then it went dark. Yes, I think I should resume sharing. Maybe you can stop sharing and start once again. Yeah, I will do that; let's share the screen again. Do you see it now? Yeah. Okay, you can start from the first page. Okay, yes, sorry. So this is ICTP; I hope now you all see it. This is where you are, at least virtually. This institute was founded in 1964 by Nobel laureate Abdus Salam to enhance international cooperation through science. He had experienced firsthand how difficult it was to do science in some countries, and he wanted to overcome that through an international institute that would support scientists everywhere in the world. Therefore this institute now combines the task of performing research at a high level with the task of building scientific capacity in the developing world. Administratively it is part of UNESCO and is governed by an agreement between Italy, UNESCO and the International Atomic Energy Agency. We are active in research, education and outreach; we have a vibrant program of exchanges, conferences and visits, now mostly virtual, but not completely, and we really welcome scientists from all over the world. I stop sharing now. We organized this conference, of course not in the usual full format, but we decided that it was needed to keep this community together. We decided therefore to change the structure a bit to allow for contributed talks, which is unusual for this kind of workshop, given the unusual format and conditions. There are other changes as well: usually the total energy workshop is the place where the Walter Kohn Prize is announced and awarded, but this year I have an announcement to make regarding the Walter Kohn Prize for quantum-mechanical materials modeling, jointly instituted and co-funded by ICTP and the Quantum ESPRESSO Foundation. The prize is normally awarded biennially, on the occasion of this workshop, to a young scientist working in a developing country or emerging economy, for outstanding contributions in the field of quantum-mechanical materials and molecular modeling, with emphasis on first-principles techniques. However, this year ICTP and the Quantum ESPRESSO Foundation have decided that, considering the short and virtual character of this year's total energy workshop, the prize will not be awarded in 2021. ICTP and the Quantum ESPRESSO Foundation look forward to awarding the next Walter Kohn Prize at the next total energy conference in 2023, in two years. Nominations already submitted for this year will automatically be considered also for 2023, together with any nominations submitted on the occasion of the new call for nominations, which will be announced in spring 2022. This is an important communication, so don't worry: your nominations will still be considered in two years. So now let's turn to some practical issues. We will soon start the first plenary talk by Alex Zunger, and I will soon give the floor to Emilio Artacho. As you see, some of you have already tried to use the question-and-answer
option, which will be one way to ask questions; alternatively, or in addition, you can also raise your hand. Depending on time, we will either give you the floor or read out your written question; we will see a bit how it goes. When you ask a question, especially by voice, we invite you to say your name and affiliation; it will be a way to make the thing a bit more interactive. The second thing: I hope you have all received the instructions on how to connect for the poster sessions; we are looking forward to them, they will also be very interactive, with many rooms. And with this I welcome you to this workshop, which we are going to start. I say thank you again to Professor Atish Dabholkar for coming to the opening session, and I give the floor to the chair of the first plenary session, Emilio Artacho, who will now take over from me. Thank you very much.

Hello, good afternoon, good morning. It is really a pleasure for me to be starting with the first plenary session, and it's really an honor to be introducing Professor Alex Zunger. He is really a pioneer of this community; he was there when the very first calculations of total energy and forces were done, when he was putting together, together with Marvin Cohen and others, the first calculations of the kind we all do and love. So it's really great to start this conference with his talk, and let me just give him the opportunity to speak for himself. He's going to tell us about the essential physics needed to understand the basic electronic properties of 3d oxides. So, Alex.

Thank you very much, Emilio, and good afternoon everybody, and thank you for logging in for this meeting. Before I talk about today's subject, I wanted to say a few words about the subject of the meeting, since it is the 20th edition of this meeting on total energy. It so happened that I recently found the referee report of the very first paper on total energy and forces that we did, and I was going to show you the referee report so you can have a good perspective of what happened and how much people did not like this field 40 years ago. Everybody probably knows that the problem with total energy was traditionally that for a periodic system, each term in the total energy diverges, and by the 70s there was no expression that handled this. I assume everybody can see my cursor, correct? Yes. So there was a problem that the different terms were diverging and there was no expression for that. But at the same time everybody wanted total energy, because the chemists had total energies for molecules, and we wanted to have that for periodic solids, except that it was difficult to find. So some of the first steps in total energy were specialized to spherical potentials, like muffin-tin or cellular methods; in this case the calculation reduced to one-dimensional integrals, and the problem of the divergence of the electron-electron and electron-ion terms was very easy to handle. In those days the theory was applied basically to electronically closed-shell elements and close-packed structures, for which spherical densities and spherical potentials were considered reasonable. This work, before my time, was Frank Averill and Snow, who did copper, Janak, who did alkali metals and rare-gas solids, Jack Sabin, who did neon with the one-dimensional integrals, and then Janak and Williams did the alkali metals, and so on.
The point is that at this stage, in the mid 70s, it became clear that we were interested not only in nickel and close-packed metals, but also in open structures like semiconductors, insulators, oxides, and so on. So actually my first postdoc, with Art Freeman, was developing the first closed form for the total energy in three dimensions without any shape approximation, and this was applied to lithium fluoride, diamond, boron nitride and so on in 1976-77. And I remember the first results of total energy in those days; it was all-electron LCAO. The first time we saw it, actually, while the total energy was very good in terms of lattice constant and bulk modulus and so on, at the same time the same method gave very bad band gaps and very bad transitions. In fact, the band gap of diamond was about half of what it is experimentally. I remember Art Freeman was very concerned that we would publish a paper on density functional theory showing that the band gaps are about 50% wrong, even though the total energy is very good in terms of ground-state properties. But in any event I convinced him that I should put a section in this early paper on self-interaction correction, which explained in part why the band gaps were wrong, so he let me publish that. Then, when I moved to Berkeley in 77, I guess, there was really the main idea to extend this kind of thinking to plane waves, and I was fortunate to find there Jisoon Ihm, who was a graduate student. We developed a closed form, again in three dimensions without any shape approximation: that's the momentum-space approach. It was applied very soon by Jisoon to silicon and silicon surfaces, and then by me to molybdenum and tungsten solids, and we looked at lattice constants, pressure and so on. To some extent this can be viewed as the beginning of the modern era of first principles, in the sense that it became possible in the coming years to look at structures, charge densities, phonons, pressures, and so on and so forth. And the paper: actually, many people asked me why did you publish the paper in a British journal, the Journal of Physics C, and not in the journals that were then more contemporary, in the United States at least, like Physical Review. And the answer is: because the paper was rejected. It was rejected by Physical Review. Here is the referee report of that very first paper; it was submitted in December 78, that's the paper on the momentum-space plane-wave forces. And the referee said: the journal space is really too valuable to devote an entire paper to routine algebraic manipulations of direct interest to only a few computational scientists. And he kept saying, "I am inclined" (with a British accent, I'm sure) "to be firm on this recommendation." Anyhow, in the last 40 years we have tried to figure out who the person is, but in any event, I want to say to all of the students and postdocs that if you get your papers rejected, really, it's very normal. Even the papers that eventually were successful were rejected, not because of correctness or incorrectness, but because 40 years ago the perspective was that getting the right expression for computational solid-state physics wasn't an important thing. So with this, let me move on to what I wanted to talk about today, and this is the application of this methodology to 3d oxides. My collaborators are Giancarlo Trimarchi, Julien Varignon, Zhi Wang, Shihan Lu and Sasha Malyi, and the work was done at the University of Colorado.
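As background to the "closed form" mentioned above: schematically, and hedged as an editor's paraphrase of the momentum-space formalism rather than a quote from the talk, the per-cell total energy can be organized as

\[
E_{\rm tot} \;=\; \sum_{n\mathbf{k}}^{\rm occ}\varepsilon_{n\mathbf{k}}
\;-\;\frac{1}{2}\!\int\!\!\int\frac{n(\mathbf r)\,n(\mathbf r')}{|\mathbf r-\mathbf r'|}\,d\mathbf r\,d\mathbf r'
\;+\;E_{xc}[n]\;-\;\int v_{xc}(\mathbf r)\,n(\mathbf r)\,d\mathbf r
\;+\;\gamma_{\rm Ewald}\;+\;\alpha Z,
\]

where the individually divergent \(\mathbf G=0\) pieces of the electron-electron, electron-ion and ion-ion Coulomb sums cancel among the Hartree, pseudopotential and Ewald terms, leaving only a finite constant \(\alpha Z\) from the non-Coulombic part of the local pseudopotential.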
So those oxides, as many of you know, have a very broad range of properties. There are the low-temperature phases, which are usually magnetically ordered, and there are the high-temperature phases. You can have different d-electron counts, from yttrium, titanium, lanthanum, manganese and so on. The structures can be orthorhombic, cubic, monoclinic. The spin order at low temperature can be very diverse: antiferromagnetic, spiral, ferromagnetic; and mostly they are insulators at low temperature. Then at high temperature there is spin disorder, and you get the paramagnetic phase, with all kinds of similar properties, except that some insulators can become metals. So this is a very, very rich group of materials. The interesting effects are orbital ordering, mass enhancement, disproportionation, gapping in systems that have an odd number of electrons and are not expected to have a gap, and so on. The cousins of the oxide perovskites are the halide perovskites, like cesium tin iodide, and I would like to include them as a group because they have the same structure and similar phenomenology, except they do not have d-electrons and therefore cannot be blamed for being correlated. Nevertheless, they are a good reference point against which to measure everything else. Issue number one: the traditional textbook view on gapping with an odd number of electrons. When you look at all of those oxides, starting from the days of Mott and later Hubbard, at low temperature they are antiferromagnetic, and most of them are insulating. And everybody says: of course they're insulating, because there is long-range order, the spins are ordered, and there is cell doubling. But then at higher temperature they are paramagnetic, and the question is: why would a paramagnet still be an insulator, given that it does not have long-range order? Everybody expected them to have zero gap, but experimentally, as you just saw, they had band gaps of 2, 3, 4 or 5 eV; so it was not a simple mistake, it was a huge mistake. This was discovered already in the 1950s, and this conflict between band theory and experiment was really the main driving force for the development of highly correlated methods, which were based on the ratio between the inter-electronic repulsion and the bandwidth. I mention in passing that the paramagnetic phase considered in those days was actually interpreted not only to have zero total magnetic moment, but to have a zero magnetic moment on each atom, and this was a special feature of how this picture was built. Here is a modern example, say the cobalt oxide band structure. If you calculate the paramagnetic phase in the same style they did in those days, you see that the Fermi level cuts through a band, and it's a metal. Everybody knows that cobalt oxide is a wide-gap insulator. So this is an example of the conflict that started lots of questions. What was the impression in the literature, if you look back at the 1960s, 70s, 80s and 90s, of this conflict that was really shocking? The impression, as you'll see, is that correlation is everything that DFT does not get right, and that DFT fails for most 3d oxides and most properties of 3d oxides. Since then, I would say there has been a certain fashion in the last 30 years to see correlation everywhere. It was the correlation-induced syndrome.
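To make explicit the criterion behind "the ratio between the inter-electronic repulsion and the bandwidth", here is a hedged, textbook restatement (not from the slides), in the form of the single-band Hubbard model:

\[
H \;=\; -t\sum_{\langle ij\rangle\sigma} c_{i\sigma}^{\dagger}c_{j\sigma}
\;+\; U\sum_{i} n_{i\uparrow}\,n_{i\downarrow}.
\]

At half filling, one electron per site, band theory predicts a metal, while a Mott gap of order \(U-W\) opens once the on-site repulsion \(U\) exceeds the bandwidth \(W\approx 2zt\), with \(z\) the coordination number.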
To get papers published in many good journals, it was very helpful to say that there is correlation-induced orbital order, correlation-induced band competition, correlation-induced semiconductor gaps, correlation-induced displacements, correlation-induced disproportionation. In short, more and more effects were re-baptized as correlation-induced. Le Figaro, Le Figaro is always right, so they said that standard DFT and DFT plus U... I'm kidding, Le Figaro did not say that, I said that. But this is what you find in the literature, in Physical Review. They said DFT fails to describe the phase diagram, with DFT predicting that all compounds remain metallic and do not disproportionate; these results establish that strong electronic correlations are crucial to structural phase transitions, and that methods beyond DFT, like DFT plus U, are required to properly describe them. About nickel oxide, a very simple compound, they say that conventional band theory, which misses the localized nature of the electrons, cannot explain the large gap observed in nickel oxide; for this reason, nickel oxide has long been viewed as a prototype Mott insulator. L'Osservatore Romano, I'm sure, also had a strong dogma on that. If you look at the lanthanum cuprates, people in Physical Review actually said that while density functional band theory is the workhorse of materials science, it does not capture the physics of Mott or charge-transfer insulator transitions; or that these methods, LDA, GGA and so on, usually fail to describe the correct electronic structure of electronically correlated paramagnetic materials, and so on and so forth. Well, Pravda said that the properties of all of these practical things, like catalysis, ion batteries, ferroelectrics and so on, which are very often transition metal oxides, cannot be trusted if you calculate them by density functional theory. The functional that failed is the N density functional method, where N means naive. The naive density functional approach uses the view that each unit cell has a single motif, for instance an octahedron or tetrahedron, and uses the smallest possible number of highest-symmetry unit cells, single motifs, to calculate the electronic structure. There are three kinds of systems that we are talking about: systems where the atomic positions are not ideal, which are positionally polymorphous, for instance paraelastic systems; cases where the spin degree of freedom is not simple spin up, spin down, but where there may be local arrangements of spins; and then dipole-polymorphous systems, which are paraelectric. So for those three classes, where the para phase can have some kind of local order, the opinion in the literature was: you cannot use a mean-field theory such as density functional theory. What I'd like to show you today are the results of our work from the last two years, again using total energy. I'd like to show that it is the naive density functional assumptions, the assumptions indicated here, that led to the failure of the approach, rather than a lack of correlation. Assuming the simple naive DFT created many predictions of metallic states in systems that are not metals. Further, I'd like to indicate that there is no need to leapfrog from naive DFT to dynamically correlated methods, skipping DFT. There is a possibility to check what DFT would say, uncomplicated by naive approximations. Of course it's not perfect, but let's find out what DFT says, given that the initial accusation against DFT was based on something that is not really DFT; it's a naive form of DFT, which I will explain.
Now it maybe makes sense to mention that when you choose a unit cell and calculate a band structure, the size of the unit cell is to some extent your choice, and most of us try to use the smallest possible unit cell. But basically, if you look at the crystallographic literature for halide perovskites or oxide perovskites, very often they take the unit cell of the cubic structure, where there is a simple octahedron, no tilting, no motion; everything looks like an ideal structure. And this is the crystallographic picture, which of course takes an average over a large coherence length. So let me say, for instance, you take this cubic material, cesium tin iodide, a perovskite; you can make a supercell where each unit cell is identical. If you do that and you check what the tilting angles of the octahedra are, in the simple case of no relaxation you find a single tilting angle of zero degrees. If you look at the displacement of the B-site, you find a single value, zero, and if you look at the octahedral volume, you find a single value. On the other hand, if you take this supercell, keep the outer cubic shape, and allow the atoms inside the cubic shape to relax, you find a distribution, shown in red, of tilting angles between zero and maybe 10 degrees. You find a distribution of B-atom displacements, and you find a distribution of volumes. And if you check the total energy, here I show the reduction in total energy due to this relaxation. So obviously these kinds of systems, even in the static picture, before you put in temperature and molecular dynamics, already do not have the simple structure that was envisioned in the 1950s, 60s, 70s and 80s. What about spin? If you take, for instance, iron selenide in the tetragonal phase, in the paramagnetic phase you can imagine, like in the old days, that each atom has a single magnetic moment and it is zero, because it's a paramagnet. On the other hand, if you make a supercell of iron selenide in the correct structure, but you let the local magnetic moment be a variable, you find that the total moment is zero; but it is zero not because each moment is zero, it is zero because the vector sum is zero. You find a distribution of moments that are negative and a distribution that are positive, and the sum is, of course, zero. Again, this is no longer the monomorphous view of a single motif; it's a polymorphous view, that is, a distribution of motifs that, of course, has the right global sum. How about paraelectrics? If you take a paraelectric like barium titanate, in the cubic paraelectric phase: can you believe it is experimentally piezoelectric? Now, a cubic crystal cannot be piezoelectric; it has inversion symmetry. So what is happening? This is not a simple mistake, that the shape of the sample was wrong or that there was a hole in the sample. If you actually calculate the dipoles in a supercell of barium titanate, for instance 4 by 4 by 4, you keep the shape cubic, but you let the dipoles relax to the minimum total energy, you find a certain distribution of dipoles along specific crystallographic directions. And guess what: this system is no longer centrosymmetric. After you do that, the paraelectric phase loses inversion symmetry, and it definitely can be piezoelectric. So let me try to summarize what the point of divergence was.
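Before the summary, here is the spin point above in symbols, a hedged restatement of what was just said in words:

\[
\text{monomorphous paramagnet:}\qquad \mathbf m_i = 0 \ \ \text{for every site } i,
\]
\[
\text{polymorphous paramagnet:}\qquad |\mathbf m_i| \neq 0, \qquad \mathbf M_{\rm tot}=\sum_i \mathbf m_i = 0 \ \ \text{only as a vector sum.}
\]

Both give zero net moment; only the second leaves each atom a finite local moment that the electronic structure can respond to.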
The early mistake that the theoretical community made was to take the average global structure from crystallography and use it as input for band structure calculations, rather than consider the fact that the structure can have local motifs that are not captured by XRD, especially if you don't collect sufficient intensities. So the basic point is the following. Global symmetry equals local symmetry in cases where the repeat unit is the same. For instance, in gallium arsenide, if you calculate the total energy for a primitive cell or for 10 primitive cells, you get the same answer per cell; that is where global equals local. However, in many oxide perovskites and halide perovskites, the global symmetry can be cubic, but the local symmetry can deviate from cubic. In this case, there is no sense in averaging the different configurations in the system. You cannot average a chicken and a horse; it's meaningless. The naive DFT approach of many, many years used the monomorphous point of view to describe a physical situation that is intrinsically polymorphous, meaning it has a distribution of local motifs. Just to put it slightly formally: if you have a system with many configurations, S_i being a structural configuration, you can take the average configuration, call it ⟨S⟩. One way to calculate a property P, like moments or band structure, is to take the property of the average configuration, P(⟨S⟩). This, of course, is not correct. The correct way to handle an ensemble is to calculate the average of the properties of the individual configurations: P(S_i) is the property of a given configuration, and the physical property of interest, like band structure or moments, is the average of the properties, ⟨P(S_i)⟩, not the property of the average structure. As simple as this sounds to you, this is the mistake that was made over and over and over again. Well, there is a simple common-sense test to see what is happening. If you take all of those oxides with 3d elements in the paramagnetic phase, you can ask yourself: what is the difference in total energy between the naive approach that takes the average configuration and the actual total energy where the symmetry is broken, that is, where you have the distribution? Here is the difference in total energy, in millielectronvolts. The mistake you make is, for instance, 1.9 eV, that is, 1900 meV; it can be 4 eV, 2 eV, 3 eV. In short, the naive DFT gives a physically irrelevant, high-energy metallic state, because you take the wrong configuration. This fact, that you always get a metallic state, was used to suggest that dynamically correlated methods are really needed. Issue number three: what do you have to do to get correct symmetry breaking, when the system wants to break symmetry and lower the energy? Well, you have to do three things. First, if you talk about a spin system, you have to acknowledge that the spins can be polymorphous. What do I mean? In an antiferromagnet, you can have a single spin configuration: each spin up is surrounded by spin down and each spin down by spin up. However, you should allow the system the possibility to have a distribution of such configurations: sometimes up is surrounded only by down, sometimes spin up is surrounded by some up and some down. You should allow this as a possibility, and to do that you have to use a bigger unit cell.
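As a minimal toy of what "allowing a distribution of spin configurations in a bigger unit cell" means in practice, here is a runnable sketch assuming a simple collinear ±1 spin model; the function name and setup are illustrative only, and the speaker's actual tool, introduced next, is a special quasirandom structure (SQS), which would additionally match the pair and higher correlation functions of the ideal random state:

```python
import numpy as np

def random_paramagnetic_spins(n_sites: int, seed: int = 0) -> np.ndarray:
    """Return +1/-1 collinear spins with exactly zero net moment.

    A crude stand-in for a spin SQS: we only enforce <S> = 0, whereas a
    true SQS generator would also drive the pair correlations toward the
    fully disordered limit.
    """
    assert n_sites % 2 == 0, "need an even number of magnetic sites"
    rng = np.random.default_rng(seed)
    spins = np.array([+1] * (n_sites // 2) + [-1] * (n_sites // 2))
    rng.shuffle(spins)
    return spins

spins = random_paramagnetic_spins(64)
print("net moment:", spins.sum())                 # 0 by construction
print("sample pair correlation:", np.mean(spins[0] * spins[1:]))
```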
And how can you do that? One possibility is to treat the spins, for instance, with a special quasirandom structure. Before you do that, cobalt oxide is naively a metal. But if you allow the spins to have a distribution, with the same crystal structure, the same everything, you have a band gap of 2.4 eV, without any additional work, just by allowing the spins to break symmetry. You know, many people wrote papers that yttrium titanate and lanthanum titanate are metals in DFT. Guess what: they are metals only if you take the average spin structure; you see the Fermi level is in the conduction band in yttrium titanate and lanthanum titanate. In reality, if you allow the spins to have a paramagnetic distribution, you have a very decent gap there. The second thing you have to do, in order to do it correctly, is to take care of positional polymorphism; in other words, what if the system wants to adopt different local environments? There are a bunch of papers over the years saying that strontium bismuth oxide is a metal. Guess what: it's a metal only if you neglect the possibility that some octahedra would like to be small and some octahedra would like to be large. You see, the band structure is a metallic state when you have a single environment, but it has a gap if you allow the system to alternate between small and large octahedra: just positional relaxation. Lanthanum titanate: many people said it's a metal in DFT; it turns out it has a Jahn-Teller distortion, everybody knows that. If you don't put in the distortion, you have a metal, and if you put in the distortion, you have an insulator. So all of those things have to be taken care of. The third thing you have to take care of is to use an appropriate exchange-correlation functional. Imagine you use a functional that gives you very, very diffuse orbitals. If the orbitals are very broad, they cannot see the local environment. So the functional has to have a certain amount of elimination of the self-interaction error. For instance, take lithium titanium oxide: if you use a functional like PBE, which gives very extended orbitals, the system is a metal. However, if you use a functional that has the ability to create some localized states, meaning orbitals that are not totally diffuse, you find that the system is an insulator, and of course you find some titanium atoms that can be characterized as Ti4+ and others as Ti3+. In other words, there is a natural distribution of local environments. And you have to do all three of those things to get it right. Let me go over some of the interesting results, what happens when you allow symmetry breaking in density functional theory, doing the three things: spin symmetry breaking, if the system wants to do that, positional symmetry breaking, and dipolar symmetry breaking; in each system it's a different symmetry breaking. The point is that symmetry-breaking density functional theory, as you'll see, surprisingly explains effects that used to be explained exclusively by highly correlated methods. Mean-field density functional theory with symmetry breaking can explain effects that we all thought were exclusively highly-correlated effects; not all, but many of them. So let me give a few examples. Number one: general trends. This table is not meant to be looked at in great detail, but here is the reference where you can find the table, and you can zoom into different aspects of it. It shows you, for all of those 3d oxides indicated here, for the low-temperature phase and the high-temperature phase,
what magnetic state it has, for ordered and disordered phases, what crystal structure it has, and whether it opens a gap or not. And every time you see a green checkmark, it means that DFT with a large supercell, allowing symmetry breaking, actually gets the right answer. Result number two: providing chemical and physical insight about what causes this. You know, if you do the classical Mott-Hubbard theory, there is one universal reason, one size fits all, why you open a gap, and this is when U is larger than W. However, if you do supercell symmetry breaking, you find many, many different reasons. You find that for calcium manganate and lanthanum iron oxide, the reason they have a gap is that the subshells in these systems are actually exactly half filled. However, if you look at other compounds, like yttrium titanate and lanthanum manganate, and you analyze the DFT results, you see that the lifting of electronic degeneracies happens due to octahedral rotations. But if you look at two other compounds, calcium vanadate and strontium vanadate, you see that the octahedral rotation is quantitatively so small that it is not enough to open a gap, and therefore these systems are indeed metals. Mechanism three is lanthanum vanadate, where you have two electrons in the t2g shell, and the reason DFT opens a gap here is primarily orbital broken symmetry: those two electrons are not positioned as two-thirds of an electron in each of the three degenerate partners; you have an occupation of one electron, one electron, and zero electrons. So this is occupation broken symmetry, and you can do that with DFT. Finally, mechanism four is that calcium iron oxide and compounds like it can have a gap because of disproportionation, where some octahedra are small and some octahedra are large; this is a three-dimensional analog of the Peierls instability, and it opens a gap. Result number three: must we have U? If you like it, you can have it. But you can do the calculation of all of those compounds by setting U equal to zero; you just need a somewhat better functional, in this case the SCAN functional. These are the densities of states of the different compounds indicated here; the dashed lines indicate the VBM and the CBM, and you see all of them have gaps, as they should, even with U equal to zero. So you don't really have to put in U to see a gap; many people like to do DFT plus U, and you can do that in good health if you like, no problem, but it's not a necessity. Result number four: structural symmetry breaking. Those octahedra in cubic and other compounds can have tilting, in different modes usually indicated by the Glazer notation; these are allowed modes. In the old days, about seven years ago, papers were written saying that in order to get a Jahn-Teller distortion in lanthanum manganate, you really have to have a correlated method like DMFT, and that if you use regular DFT, there is no such relaxation. Well, of course, the DFT used in those days was non-magnetic DFT, so of course this means nothing. But if you do DFT allowing the system to have a Jahn-Teller distortion and a spin order, you find that in DFT there is, of course, an energy lowering due to those distortions, and the energy lowering is very reasonable. And you can compare the experimental distortions to those computed by DFT with symmetry breaking, and basically the chemical trends are correct. Result number five: mass enhancement. Mass enhancement is a sort of curious effect.
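As a hedged orientation for what follows, using only the ratios quoted in the talk: the mass enhancement is

\[
\frac{m^{*}_{\rm exp}}{m^{*}_{\rm band}} \;\approx\; 1.2\text{--}2,
\qquad
m^{*} \;=\; \hbar^{2}\left[\frac{d^{2}E(k)}{dk^{2}}\right]^{-1},
\]

i.e. the measured bands are flatter, and the carriers heavier, than simple band theory predicts.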
This is: if you actually measure the effective mass of electrons or holes in a bunch of materials, you see that in some materials, which I will illustrate now, the effective mass of electrons or holes is bigger than the effective mass obtained from simple textbook DFT. The ratio between them is 1.2, 1.3, 1.5, even two, and it's called mass enhancement. It's a subject that many people in the high-Tc field and other fields of correlated materials were very occupied by: what really causes mass enhancement? I have to make a small comment. If you want to calculate those quantities, you have to be aware of the fact that if you do a big supercell, typically 200 or 300 atoms, of course you get a very complicated band structure, a spaghetti-looking band structure. Therefore many people were discouraged by supercells. But in fact you can construct an effective band structure, where you unfold the supercell band structure into the primitive Brillouin zone. If you do that, you get a band structure like normal DFT: it has a coherent part and an incoherent part, and the bands are not infinitely sharp, they have some kind of fuzziness. Okay, so this is the effective band structure, and the effective mass you get is the effective, effective mass: it's the effective mass obtained by folding the big unit cell back, because the big supercell can have relaxations, it can have spin arrangements, it can have short-range order; all of these will be folded into the primitive Brillouin zone, and you get an effective mass that includes all of those mean-field effects. So take strontium vanadate. If you calculate the band structure of this compound in naive DFT, you see the conduction band is relatively very, very broad: the width of the conduction band is 2.6 eV, and experimentally it's only half of that. And everybody said: oh, this is a tragedy, DFT is really terrible, because the band is too wide and there is no mass enhancement. But if you take strontium vanadate and you do it in a large supercell, keeping the structure cubic, keeping the lattice constant, everything, but relaxing the spins and allowing them to create local environments in the paramagnetic phase, and then you do the unfolding, here is what you get: the width of the conduction band went down to 1.6 eV, just because of local spin arrangements. It's a paramagnet, but now, instead of being 2.6 eV, it's 1.6 eV. DMFT, with a lot of work, gets a bandwidth of 1.3 eV. In short, instead of doing GW plus DMFT, you can do straight DFT and you already capture 80-90% of the mass enhancement. Even if you take a compound like cesium lead iodide, which is not under any suspicion of correlation, no d-electrons, no open shells: if you calculate its band structure without making a big unit cell, the mass enhancement ratio is one, meaning no enhancement, for electrons and holes. But if you do a big unit cell, keep the shape constant, cubic in this case, relax the octahedra, allow them to rotate, and then calculate the band structure, you have a mass enhancement of 70% or 100%. So this is the mass enhancement you get from mean-field effects. Result number six, I think that's the last one, maybe: can we measure the local environments that I'm talking about? Well, there is a method called the pair distribution function, and it measures the distances between atoms at short range, say up to seven angstroms, but you can also go up to 20 or 30 angstroms.
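For orientation, and stated here in the usual convention rather than taken from the slides: the reduced pair distribution function compared in the next figures is

\[
G(r) \;=\; 4\pi r\,\big[\rho(r)-\rho_{0}\big]
\;=\; \frac{2}{\pi}\int_{0}^{Q_{\max}} Q\,\big[S(Q)-1\big]\,\sin(Qr)\,dQ,
\]

where \(\rho(r)\) is the atomic pair density, \(\rho_{0}\) the average number density, and \(S(Q)\) the measured total structure factor; peaks in \(G(r)\) mark preferred interatomic distances, so local distortions show up directly.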
If you take a compound, this is for instance a cubic perovskite, the black line here is the experimental PDF. The red line is the PDF calculated from density functional theory using a large supercell that includes all of those relaxations. The green line is the difference between experiment and theory, and it's not fantastic, but it's pretty good. Why pretty good? Because if you use the primitive unit cell taken from X-ray, the error between experiment and theory is tragically large. So you can actually see those local environments experimentally; it's not a figment of my imagination. If you take a magnetic material, a superconductor like iron selenide, here the experimental PDF is shown as the blue line, for short range and for long range. If you calculate it with a simple primitive unit cell, the difference between experiment and theory, shown in green, is huge. But if you allow the spins to adopt local environments, as well as the iron atoms to adopt local environments, the difference between experiment and theory, shown by the green line, is very small. So this is density functional theory giving you the local environments which X-ray did not see; X-ray missed that, and that's what the pair distribution function sees so well. I mentioned that all the calculations I showed so far are static total-energy relaxations, where you do this large supercell. But you can take these static results and actually start molecular dynamics, where you look at the free energy of those compounds. If you do that, here is what you get, for instance for calcium titanate. The blue results are the local displacements obtained from static total-energy relaxation; these are basically intrinsic displacements, not thermally induced. They are a reflection of the will of the chemical bond. If you heat it up to 2,000 degrees, shown here in this brown color, you find the same kind of distribution, but the distribution is of course much wider. So what you see is that the minimum of the internal energy U and the minimum of the free energy are related, because the internal energy already gives you the fingerprint of what the system is going to do when you heat it up. In conclusion, let me say that now that we know that all of those traditional metal oxides have a band gap, and you don't need special effects to get all of those things, you can get relaxation, you can get the Jahn-Teller effect, you can get disproportionation and so on, this opens the possibility to actually do legitimate doping calculations for all of those correlated materials. And just to show you one type of doping calculation that surprised everybody: we are all used to the idea that when you dope a material with electrons, the Fermi level should move towards the conduction band, and when you dope it with holes, the Fermi level should move towards the valence band. Guess what: if you take yttrium nickelate and you look at its band structure, you'll see that the first empty band is actually a very special band, because it has trapped holes in it. If you look at the charge density of this empty band, you'll see that it has holes on those atoms, on the nickel atoms. If you dope the system with electrons, the electron goes into the empty band, it sees a trapped hole there, it recombines with it, and then this band starts moving towards the valence band. You doped with electrons, and yet the Fermi level moves towards the valence band. So this is electron anti-doping. And the same thing happens if you have a band that actually has trapped electrons, like in the titanates: you dope it with holes,
and the Fermi level is going to move towards the conduction band. So these are just examples of the really unusual things you find when you dope quantum materials in this way. So let me conclude here and tell you where we are in this journey that we started about two years ago. Of course, there are many, many more things that we have to learn. What we learned is that the density functional theory that was practiced for many, many years, mostly by people who promoted highly correlated methods, tried to use a simple monomorphous picture, meaning a single structural motif, to describe a situation that is inherently inhomogeneous, polymorphous. The failure of naive DFT is not the failure of DFT itself. Symmetry breaking, and this is a very deep conclusion, symmetry breaking in mean-field theory, spin or positional or dipolar, captures effects that in a symmetry-unbroken theory would require complex correlation effects. And that's very interesting. If you use a small unit cell, and you have the electrons interacting strongly in a very small space with only one nickel atom or one manganese atom, you'll need a very complicated rendering of electron-electron correlation to get it right. But if you spread the system, by making replicas of the system and having spatial inhomogeneity, then you find that what used to be strong correlation actually becomes weak correlation inside DFT. And that's the reason, that's my current view of why it works, having broken-symmetry DFT. Finally, 3d oxides, I would say, are complicated, but not necessarily correlated; and complicated is good enough. Thank you very much.

Well, thank you very much. We are running a bit late, and I see already several questions; there are at least seven, but I'm afraid we don't have time for all of them. By the way, can you hear me properly? So let me just start with a question by Tanusri, if you could unmute.

Yes. The question I would like to ask is this: one of the so-called failures of DFT that was put up in the literature was also a comparison with photoemission data. Not just the band gap opening, but for these correlated metals, photoemission does see the appearance of some kind of subband, which was interpreted as the presence of...

Sorry, what kind of band again?

These subbands were high-energy subbands, and of course DFT didn't have any signature of such...

No, not high energy. Those bands were split-off bands that were actually split off about one and a half eV down in energy.

Right, that's what I mean by high energy: they were not really at the Fermi level. So, in your disordered calculation, with the inhomogeneous local environments, do you see a signature of those high-energy bands?

So, let me take an example here. If you take systems that need some nudging in order to create impurity states, then you see those split-off bands. We know that in the physics of impurities, we usually have a state split off from the conduction band, going down by one to three eV due to the impurity potential. However, think about it as an electronic impurity, not really a chemical impurity. If you have a conduction band that looks very innocent, and it doesn't matter right now if it's a metal or not, but this conduction band has a localized particle in it, in this case a localized hole,
then if you actually let this localized hole minimize the energy, it will shift a state, which I show here, down by one eV or so into the band gap. And this effect can exist entirely in a mean-field theory; in all the cases of systems that have multiple oxidation states, like titanium, nudging like that creates a deep impurity state, except the impurity is not a chemical impurity, it's an electronic impurity. And in my opinion, this is the reason why strontium vanadate and lithium titanate have those split-off states, shifted down by one and a half eV.

Okay, thank you.

Okay, there are many more questions, but I think we cannot go much further than this. Possibly let me just take the first question, which was raised by Nikitas Gidopoulos. Nikitas, can you speak? Can you unmute yourself, or not? Possibly not; I don't know how to do it from here.

So let me just talk now. Yes, thank you very much for your talk. My question is this: in physics, we try to simplify the model as much as we can. That's why we have this monomorphous, as you said, picture.

Yes, and we try to simplify, but it's better to be right than to be wrong.

No, I know. Exactly, that was the point of my question.

We want to get the right answer, which is insulating behavior, for the correct reason. The Jahn-Teller effect and disproportionation and orbital order, all of these effects disappeared with the simplification.

Yeah, my question is this. Suppose that we could do a high-level theory, like stochastic CI for solids, for example Ali Alavi's work, or coupled cluster for solids, and we have this monomorphous picture with one unit cell, and we did the calculation. Do you think that the DFT calculation would agree with the high-level theory? Or does it not matter that at that level we do not have agreement, whereas in the real system, which is much more complicated, we might restore the agreement, but it could be for the wrong reason?

You know, this is really a subject for discussion, but let me state the current view we have on this. If you look, for instance, at current results of symmetry-broken DFT and compare them with coupled cluster, even for molecules, you see, very surprisingly, the following thing: in approximate implementations, correlation that had been dynamic in the symmetry-restricted view can become static in the symmetry-broken view. You all know the cases from the 1970s where, for diatomic molecules, people were breaking the symmetry, and by simple Hartree-Fock, unrestricted Hartree-Fock, got results that were much more similar to the highly correlated molecular calculations than people expected. The current view is that it is entirely possible that one form of correlation, dynamic correlation obtained in small volumes, can actually transform into weak correlation if you spread the total electron-electron interaction not over one atom, but over more atoms.

I understand. Thank you.

So I really think it's a matter of representation. You can represent the problem as dynamic, small-volume, symmetry-unbroken, or represent it as static, symmetry-broken. In the exact implementation of both methods, the results should be the same, by ergodicity. In the approximate implementations, where we actually live, I think the results should be very similar, and there are more and more cases where we find surprising successes of mean field relative to coupled cluster.
However, the success is not surprising once you realize that the mean field actually broke many symmetries.

Okay, thank you. I feel we have to stop here, if I want to survive this session myself. And I thank you very much.

You're welcome to write your questions to me; I'm happy to talk offline.

Yes, there will be opportunities to talk to Alex throughout the workshop. And I can see many more hands raised and many more questions there. Probably we can keep a record of the questions that have been written in the Q&A, so he may be able to respond to them. Give me the email also. Yeah. Oh, the coffee break. You know, you can pass me the coffee. Okay, well, thank you very much, and I think I'll pass this on to the next session. There is a pause now, there is a coffee break, I think, but I don't know whether we have already... Let's take two minutes and start at 2.30. So two minutes, just to stretch your legs, and then we start again with the next session, with the contributed talks.

Okay, I thought you were going to pass the coffee. Okay. Thanks. See you in a moment. Thank you. So, see you in a moment. So, we can use these two minutes; you can prepare your presentation.

Yes. Okay, so here in Trieste it is 2.30 now, and Nicola said we would continue after this very extended coffee break at 2.30. So hello everyone, my name is Ralph Gebauer, I'm from ICTP, and I will chair this first contributed-talk session of this total energy conference. I think we should start directly. These four talks now: each talk will be, say, 18 minutes, and then we foresee three minutes of questions; I don't know how many questions we will have time to deal with. I recommend to everyone: if you have questions, type them in the Q&A on your screen, and then we will see how many questions we will be able to ask. And to the speakers: I will give you a sign, or interrupt you, five minutes before the 18 minutes are over, just to give you an idea. Okay, so every talk is 18 minutes, plus three minutes of questions. Our first speaker is Antimo Marrazzo from EPFL in Lausanne, and he will now talk about Kane-Mele and emergent topology in jacutingaite, whatever this is. Okay, so please go ahead, Antimo.

Thank you. So let me start with a big thank you to everyone for being here today, and to the organizers for inviting me to discuss my work. The title of my talk is Kane-Mele and emergent topology in jacutingaite, or at least I think that's the right way to pronounce it, I hope. I will discuss a lot about this specific material, its interesting physics and its properties, especially the topology of its electronic structure, and I will talk a lot about topological insulators. So I will start by introducing one type of topological insulator called the quantum spin Hall insulator. Essentially, these are two-dimensional systems that are time-reversal invariant, so they are not magnetic, and they are described by the Z2 topological invariant, which is essentially an integer, such that if you have Z2 equal to zero you are just dealing with a trivial insulator or a trivial semiconductor, while if you have Z2 equal to one you have this quantum spin Hall insulator, which is characterized by the presence of edge states that are localized here at the edges.
These are essentially helical states crossing the bulk band gap, and to a first approximation they are made of opposite spin channels going in opposite directions, which you see here in the picture on the left-hand side. So these materials are very interesting for a number of reasons, and they also have interesting properties from an application point of view. Here is a list of the three most relevant ones: they have dissipationless electron transport taking place at the one-dimensional edges; the edge states are spin-momentum locked; and all this physics is somehow robust, in the sense of being protected by the nontrivial topology of the bulk electronic wavefunction. And people, using these and other properties, started since the very early days of the field to come up with different kinds of devices. What you see here is called a topological field-effect transistor, which is nothing but a switch, essentially: you place your two-dimensional system between electrical gates, and through an out-of-plane electric field you switch off the topological phase, so you close the band gap and reopen it in a trivial insulating phase. This was something done by theorists up to a few years ago, and nowadays one starts to see these ideas also arriving in the experimental laboratories; what I show you here on the right-hand side is an example of a transition of this kind realized in an experimental setup. So the field is progressing really fast, and it's very exciting, but there is one major bottleneck: both fundamental research and potential future technology applications are really hindered by the scarcity of these materials. There are just a handful of systems where this effect has been observed, and in most cases it doesn't survive at high temperature. So what we decided to do was to go for a computational screening for these materials, to try to find, with first-principles simulations, novel candidates that could possibly outperform the state-of-the-art materials. We decided to focus on exfoliable materials, so materials that you can think of peeling off, like you peel off graphene from graphite. A wider effort on 2D materials was led by Nicolas Mounet at EPFL: essentially, around 1800 materials were found that could be exfoliated, starting from more than 100,000 three-dimensional crystal structures. We took this set of 1800 materials and looked for quantum spin Hall insulators: we compute a bunch of properties with density functional theory including spin-orbit coupling, we compute the Z2 topological invariant, and phonons to assess stability, and we also do some refinement with many-body perturbation theory at the G0W0 level. At the end of this process, what we get is a selected pool of candidates, namely 13 candidates, which is what you see here; this is a plot versus the atomic number of the elements in the structure. You see that we found a number of candidates; some of them were actually already known in the literature, but some of them were novel and were proposed by us in this work. In particular we focus on this single material here, which is made of platinum, mercury and selenium: this is what is called jacutingaite, and this is where I will spend the rest of my talk.
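Stepping back for a moment, the funnel logic just described can be caricatured in a few lines. This is a runnable toy on made-up property records, not the actual workflow; in the real study each field would come from a DFT, Z2, phonon or G0W0 calculation:

```python
# Toy version of the screening funnel: values below are invented for
# illustration only.
candidates = [
    {"formula": "Pt2HgSe3",  "z2": 1, "stable": True,  "gap_eV": 0.5},
    {"formula": "trivialX",  "z2": 0, "stable": True,  "gap_eV": 1.1},
    {"formula": "unstableY", "z2": 1, "stable": False, "gap_eV": 0.2},
]

qsh_candidates = [
    c for c in candidates
    if c["z2"] == 1       # nontrivial Z2 invariant (quantum spin Hall)
    and c["stable"]       # no unstable phonon modes
    and c["gap_eV"] > 0   # still insulating after the G0W0 refinement
]

print([c["formula"] for c in qsh_candidates])  # -> ['Pt2HgSe3']
```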
So, jacutingaite is actually a mineral, and it's very interesting: it was discovered only about 10 years ago by an expedition of geologists who went to Brazil to look for ore minerals. They went to a Brazilian mine and they found jacutingaite. They characterized it: it's a layered mineral, a compound of mercury, platinum and selenium, and it's cleavable. We did van der Waals nonlocal density functional simulations, and we showed that you can actually, potentially, exfoliate this material into monolayers. And once you study the monolayer, you discover something very interesting, which is what we call a Kane-Mele quantum spin Hall insulator. I will discuss in a moment what the Kane-Mele model is, but for now let me just tell you that this means the material looks a lot like graphene, and this is what you see here with this very simple band structure. This is the band structure of the monolayer, density functional theory at the PBE level without spin-orbit coupling, and what you see is that right at the Fermi level here there is a Dirac cone, exactly as you see in graphene. And exactly as in graphene, if I now turn on spin-orbit coupling, and that's what I do here with the green lines, you see that there is a band gap opening. Now, the difference with respect to graphene is that in graphene the band gap is vanishingly small, of the order of microelectronvolts, and that's because the spin-orbit coupling there is extremely weak: in carbon, spin-orbit coupling is really, really small. So we all talk about graphene as a Dirac semimetal, not as an insulator, even if the gap is still in principle there. Here, instead, the effect is much, much stronger, and we tried to understand a little bit why this material looks like graphene. You start to get hints of this by looking at this model: we built maximally localized Wannier functions that describe the highest occupied and lowest unoccupied bands, so the lines that you see in the band structure are the Wannier interpolation, and they perfectly match the dots that represent the direct calculations. This model is essentially made of the s orbitals of mercury, with a little bit of contribution coming from the d orbitals of platinum. And these mercury atoms, which are the green dots here, essentially form a buckled honeycomb lattice. So you start to see that this looks a lot like the model you would build to describe the π bands of graphene, in terms of the pz orbitals that sit on the carbon atoms. But as I said, I mentioned this Kane-Mele model, and here I just want to give you a little bit of an introduction to what the model actually is. Back in 2005, Kane and Mele introduced a very simple tight-binding model for graphene: you have a honeycomb lattice and a nearest-neighbor tight-binding hopping. What they discovered is that if you add spin-orbit coupling, and in graphene this enters through a second-nearest-neighbor hopping that is complex, which is the second term you see here in the Hamiltonian, then the Dirac cone of graphene becomes gapped. And then, if you study a ribbon of graphene, you see the appearance of helical states localized at the edges. What's special about these edge states is that they are very robust against perturbations. For instance, let's say you break inversion symmetry: you turn on a Rashba coupling, or you turn on an on-site, inversion-breaking sublattice term.
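For reference, here is my hedged transcription of the standard Kane-Mele Hamiltonian being described, in the notation of the original 2005 papers:

\[
H \;=\; t\sum_{\langle ij\rangle} c_i^{\dagger}c_j
\;+\; i\lambda_{\rm SO}\sum_{\langle\langle ij\rangle\rangle} \nu_{ij}\, c_i^{\dagger} s_z\, c_j
\;+\; i\lambda_{R}\sum_{\langle ij\rangle} c_i^{\dagger}\big(\mathbf{s}\times\hat{\mathbf{d}}_{ij}\big)_z\, c_j
\;+\; \lambda_{v}\sum_i \xi_i\, c_i^{\dagger}c_i ,
\]

where the first term is the nearest-neighbor hopping, the second is the complex second-nearest-neighbor spin-orbit hopping (with \(\nu_{ij}=\pm 1\) depending on the sense of the turn), the third is the Rashba coupling, and the fourth is the staggered sublattice potential (\(\xi_i=\pm 1\)) that breaks inversion symmetry.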
And there is a wide region in parameter space where the system remains a quantum spin Hall insulator. Of course, if you go beyond that region, out of the stability region, you lose the edge states and you go to a trivial insulating phase. This was actually the way the very concept of the quantum spin Hall insulating phase was introduced: this topological phase was defined through this model and through graphene. Later on, people realized that if you want a strong quantum spin Hall effect there is a different mechanism you can exploit, called band inversion, introduced by Bernevig, Hughes and Zhang, and this led to the famous experimental discovery of the quantum spin Hall effect in mercury telluride quantum wells. But the Kane-Mele model still remains the foundation of the field; it just had not been exploited as an effective way to realize the effect. What we show with this material is that you can use the graphene mechanism, the Kane-Mele model, to have a robust quantum spin Hall insulating phase, and this is what I try to show here: we really have Kane-Mele physics at play in this material. This is the band structure with spin-orbit coupling, and I now show you a comparison with the band structure you get from a maximally localized Wannier-function Hamiltonian with just first-nearest-neighbor hopping: even with spin-orbit coupling you still have a Dirac cone. You actually have to go to second-nearest-neighbor hopping to open a band gap in this material. And if I then turn on all the higher-order hoppings, they just renormalize the band gap a little bit around K and a little bit more around the Gamma point. So this tells us that the spin-orbit gap really comes from a second-nearest-neighbor hopping. What is left to assess is whether this hopping is really of the Kane-Mele type, and this is what we do here: on the right-hand side you see the comparison of the second-nearest-neighbor maximally localized Wannier-function Hamiltonian with the Kane-Mele term extracted from it, and they match perfectly in the low-energy region. There is a little bit of deviation around Gamma, and that is entirely accounted for if you include the other two terms allowed in the tight-binding model at second-nearest-neighbor order: a trivial, spin-independent second-nearest-neighbor hopping and an in-plane spin-orbit coupling. So what these two pictures put together tell us is that in this system the band gap and the quantum spin Hall phase originate from a spin-orbit coupling that is a second-nearest-neighbor hopping of the Kane-Mele type, the same term that happens to be in graphene. Now, of course, if you want to be a little more quantitative about the band gap you cannot rely too much on PBE, so we went to G0W0 calculations, and what you get is an increase of the band gap up to roughly half an electronvolt, which is five to six orders of magnitude larger than what you find in graphene, and also larger than what you find in other honeycomb systems like arsenene, stanene or germanene, for instance. Here on the right-hand side you see the spectrum for a semi-infinite ribbon, with the hallmark of this topological phase: the presence of helical states crossing the bulk gap. But so far I have only discussed the topological aspects of the band structure.
Here I just want to make the point that this material also exhibits other interesting physics. For instance, you have the coexistence of robustness of the phase, meaning a large gap, with switchability. By switchability I mean the idea that you place your material between electrical gates, and through an electric field you switch the phase off and on, so you switch the edge transport on and off, a bit like what I showed you at the very beginning of my talk. In this particular material the mechanism happens in a peculiar way, because there is an interplay between spin-orbit coupling, crystal-symmetry breaking and the dielectric response. In fact, if you study this by just freezing the atomic configuration and applying an electric field, nothing really happens in this material. But if you do the right thing, that is, you let the atoms move in the simulation, you see that the centrosymmetric structure at zero field becomes a polar structure at finite field: a spontaneous polarization pops up, and this gives an additional contribution that helps reduce the critical field, that is, it helps drive the transition to the trivial phase. So this allows you to have a large-gap yet still switchable phase. With this I also want to give you a feeling that, although I have been discussing simulations, this material is real, and we are working on the experimental side to actually study it. This is an electron-microscopy picture of flakes of jacutingaite. It is still not the monolayer, that is work in progress; this is a multilayer flake, but I wanted to show it because I think it is quite beautiful: you see this honeycomb lattice made of mercury, and then there is a platinum atom in the middle of each honeycomb. Indeed, we have been initiating a number of collaborations with experimental groups, on single-crystal synthesis, liquid exfoliation, AFM, transport, STM, Raman and optics. Today I want to focus on just one of these collaborations, about ARPES: what you see here are angle-resolved photoemission spectroscopy data. We were really interested in the monolayer and in the Kane-Mele physics of the monolayer, but when we started to interact with experimental groups we ended up studying also the bulk structure, the three-dimensional crystal, and we found that there is another kind of topology in the three-dimensional crystal. What you see here are the (001) surface states that we computed: you see these very special Dirac cones, these Dirac crossings, and they were a sort of puzzle for us, because we started from something that is a quantum spin Hall insulator in the monolayer, and the bulk is actually just a simple stack of these monolayers, it is a layered mineral. The standard polytype that you find in nature is what is called a weak topological insulator: essentially you have surface states all around the sample, but not on the top and bottom surfaces. Five minutes, okay. So, what we found is that on these top and bottom surfaces, which is exactly where the experiments were performed, there are surface states that hinted at some kind of topology protecting them. So this turned out to be a material with surface states and Dirac crossings all over the place.
And in particular, we showed that there are two surface states on the top and bottom surfaces. This is what you see here: a comparison between the experimental ARPES data and the density-functional-theory simulated spectral density, and the agreement between experiment and theory is very good. We actually discovered that there is a mirror Chern number protecting this phase, making it what is called a topological crystalline insulator; this calculation was also performed independently by other groups, and I put the references here. So this explains why these surfaces are topologically protected, by crystalline topology, but it does not explain how this happens. What we really wanted to understand is how you go from a monolayer that looks like graphene and has this quantum spin Hall phase, to a three-dimensional system where the gap at K basically disappears, going back almost to a Dirac cone, where there is dispersion along the out-of-plane direction, and where this mirror Chern number appears together with the crystalline topological phase. We explain this with one simple extension of the Kane-Mele model. Again we went back to the maximally localized Wannier-function technique, applied now to the bulk three-dimensional crystal, and the strongest coupling beyond the in-plane nearest neighbors is one term that couples a Wannier function in one layer with the one sitting basically two layers above or two layers below. This term essentially couples the even layers among themselves and the odd layers among themselves, and that is what you see on the right-hand side of this slide. With this one single term, which is actually very strong, you can explain all this crystalline topology. What you see here is a scan going from lambda equal to zero, that is zero interlayer coupling, basically stacked graphene, up to lambda equal to one, which is bulk jacutingaite; the top panel shows the band structure. You start from something that has a Dirac cone, like graphene or graphite, and as you increase the coupling you start to see surface states, the red states, appearing, and a new Dirac cone appearing. This is mirrored by the appearance of a Zak phase, and indeed you can explain all of this by saying that the surface states are essentially the bound charges due to the polarization, and the Zak phase is what you compute from effective one-dimensional tight-binding models along the stacking direction. Of course you can make the model a little more complicated: you can add spin-orbit coupling, and then you have to abandon the Zak phase and use instead what is called a mirror Chern number. You can compute it, and you find a mirror Chern number of minus one. You may ask why it is minus one when two minutes ago I said it was minus two: essentially because, as I said, this model couples even and odd layers separately. Once you put back all the couplings, including the nearest-layer ones and the particle-hole-symmetry breaking, you get a mirror Chern number equal to minus two, and this pair of surface states, in very good agreement with what you see in experiments.
And with this I just want to sum up a bit. I tried to show you how this mineral, jacutingaite, is very interesting: its monolayers are large-gap quantum spin Hall insulators with Kane-Mele physics and a switchable topological phase, and if you study the three-dimensional parent crystal, it is actually one of the rare examples of a dual topological insulator, with both a Z2 and a crystalline topology, and this is confirmed by our experiments. Finally, I just want to mention that several theoretical and experimental efforts, also independent from us, have been going on after the initial discoveries I presented today. The latest is something I found a couple of days ago on the arXiv: experimentally, this material becomes a superconductor at high pressure while maintaining its topological surface states. So I think this material will keep on being fun to investigate for the foreseeable future. And finally I just want to acknowledge my collaborators; this is the simulation team, Marco Gibertini, Davide Campi, Nicolas Mounet and Nicola Marzari, and on the right-hand side you see some of the experimentalists we collaborated with. Thank you for your attention.
Thank you very much, Antimo; in the name of the 600 people who are listening, consider this a round of applause. We have had two questions typed in by Sonya Haddad. Sonya, can you say something? I think you should be able to.
Hello everybody. Yes, thank you. Hi everybody, I'm from Tunisia, so I'd like to thank all the organizers for this nice workshop. I have a question about your topic: I was wondering what is the origin of the complex next-nearest-neighbor hopping in this compound, and especially how to confirm it, since it acts like a sort of local magnetic flux in the system.
Regarding the first part of the question: the reason why the hopping is second-nearest-neighbor is essentially that, when you build this maximally localized Wannier-function model, you discover that the Wannier function is delocalized over two layers; you probably see it here on the right-hand side of the picture, it is spread over two layers. It is then a matter of geometrical interference: when you look at the overlaps between Wannier functions, the overlap between these two Wannier functions that I picture here is very favorable, because they are somehow aligned geometrically due to the angles between mercury and platinum. So it is due to the fact that these Wannier functions are quite delocalized. I did not fully understand the second part of the question; if you mean a magnetic flux, the system is not really magnetic here, it is a non-magnetic system, and what drives everything is basically the spin-orbit coupling. What you see is that, whether you are in the monolayer or in the bulk, due to different interference effects between the different possible hoppings, you get a cancellation or not of the Kane-Mele spin-orbit term; this is why in the three-dimensional bulk you do not really have a strong Kane-Mele term, essentially.
Okay, thank you very much. We have two more questions, but I have been warned by the organizers to stay exactly on time, so I will try to do my best.
I'm sure that Antimo is available for more questions as well. We are aware that this online format is difficult for communication, but we hope that later, for example in the poster sessions, you will have many more occasions to discuss with people, and you can also contact Antimo by email. Anyway, thank you very much again, Antimo. So we come to the second talk. Can you perhaps already share your screen, Miguel, so we can see? Yes, I'll proceed right now. The next speaker is Miguel Royo from Spain, and he will talk about first-principles calculations of the bulk flexoelectric tensor. Please go ahead, Miguel; in your case too I will warn you five minutes or so before the end.
Okay, thank you for the presentation, and hello everyone. The work that I am going to show you today has been developed in the group of Massimiliano Stengel at ICMAB in Barcelona, and it can be considered as a sort of culmination of continuous research on this topic over more than ten years, involving several people apart from Massimiliano Stengel and myself. In order to introduce flexoelectricity, let me compare it with a more familiar electromechanical coupling, piezoelectricity. Piezoelectricity is the macroscopic polarization response to a uniform strain, and it is a property displayed only by crystals with a non-centrosymmetric structure; therefore few materials have this property. Flexoelectricity, in turn, is the macroscopic polarization response to a strain-gradient deformation, such as a bending of the material, as illustrated in this figure. The interest lies in the fact that flexoelectricity, being described by a fourth-rank tensor, the flexoelectric tensor, is a universal property of all materials. Moreover, it is expected to become more and more relevant at the nanoscale, where large strain gradients can be sustained by materials. The first proposed applications of the flexoelectric effect were basically aimed at finding cheaper replacements for piezoelectric materials; however, more sophisticated applications have been reported over the years, which often exploit specific properties of the flexoelectric effect. Here I show a few examples, which include the possibility of mechanically manipulating the polarization of a ferroelectric by applying a strain gradient with the tip of an atomic force microscope; the possibility of modulating the transport properties of a two-dimensional electron gas accumulated at an interface by bending the heterostructure; and the observation of an increase in the photocurrent generated in a photovoltaic device when, again, a strain-gradient deformation is applied by means of the tip of an atomic force microscope. In spite of these sophisticated applications, our fundamental understanding of the flexoelectric effect is far from complete, and first-principles theories are a valuable tool to shed some light on this problem. In an experiment, what one typically does is to bend a sample and measure the transient current; the problem is that the result of this measurement can contain several contributions, from the bulk of the material, possibly from the surfaces of the sample, and also from domain walls or defects if they are present, and in an experimental measurement only the overall sum is accessible.
Density-functional theory can here be considered as a theoretical microscope that allows us to isolate these contributions individually. However, flexoelectricity is a very challenging problem when it comes to being studied from first principles. The main complication comes from the fact that a strain-gradient deformation breaks translational symmetry, which in principle prevents us from using the periodic boundary conditions that are typically assumed in most first-principles theories. A possible solution to this problem comes via the use of acoustic phonons near the Gamma point, that is, in the long-wavelength limit. As we can see in these two figures, the atomic displacements associated with a longitudinal acoustic phonon, or with a transverse acoustic phonon on the right, resemble the microscopic deformation associated with a longitudinal strain gradient or a shear strain gradient, respectively. This means that somehow we can extract the information needed to study flexoelectricity from acoustic phonons. This is basically the approach that Richard Martin took in the 1970s to study piezoelectricity, and that this group of authors used ten years ago to attack the problem of flexoelectricity. More formally, the method consists in describing a macroscopic deformation by means of an acoustic phonon, which is nothing but a deformation wave modulated by a phase at a given wave vector q. Then one expands the response to this deformation wave in a Taylor series in the wave vector around the Gamma point. This way, at order zero the polarization response vanishes due to the acoustic sum rule, whereas at first and second order in q we recover the information needed to study the piezoelectric and flexoelectric effects. Moreover, this approach has a crucial advantage: we can perform all the necessary calculations on a primitive cell, using the tools of density-functional perturbation theory. In practice, we need to deal with a few intermediate quantities, all of them enjoying a similar Taylor expansion in terms of a translation, a uniform strain and a strain-gradient deformation. The first one is the electronic polarization induced by the acoustic wave. Then we have the atomic forces, which at first and second order provide us with the piezoelectric and flexoelectric force-response tensors. And then we have the atomic displacements, which give access to the internal-strain tensors, both the piezoelectric and the flexoelectric one. Also, to simplify the notation in the next slides, I will assume this bracket convention for lattice-mediated tensors that depend on a sublattice index and a Cartesian index: for example, the Born effective charges for a given polarization direction alpha will be represented as a 3N-dimensional vector, and quantities such as the force-constant matrix will be represented as 3N-by-3N operators. With these ingredients, we can write very similar formulas for the piezoelectric and the flexoelectric tensor. They are shown here, and as you can see, in both cases the response can be split into an electronic and a lattice-mediated part. The latter is calculated from the product of the Born effective charges and an internal-strain tensor, which, as I said, describes the atomic displacements induced by the deformation.
These internal-strain tensors can in turn be computed by multiplying the force-response tensors by the pseudo-inverse of the force-constant matrix. For the case of piezoelectricity the decomposition stops here, because both the electronic piezoelectric tensor and the force-response tensor can be calculated directly with density-functional perturbation theory. The analogy with flexoelectricity stops here because, in the flexoelectric case, both the electronic flexoelectric tensor and the force-response tensor are not elementary quantities: they can be further decomposed into a clamped-ion contribution, which is shown here, plus an additional term written in terms of the uniform internal-strain tensor. To understand the physical meaning of these additional terms, consider a crystal in which there exists an optical phonon mode that couples linearly to strain; here this optical mode is represented by the red double arrow. In a centrosymmetric material this will typically be an even mode, related to Raman activity. A strain gradient in this material will in turn induce a gradient in the amplitude of this Raman mode, and this gradient will produce, on the one hand, an electronic polarization, which in our formalism is quantified by this first-order polarization tensor, and, on the other hand, a set of non-zero forces on the polar modes, quantified by this first-order force-response tensor. So this can be interpreted as an indirect contribution to flexoelectricity, and it will be active, for example, in the presence of a Raman mode such as the Raman mode of silicon, or of the oxygen octahedral tilts that occur in perovskite oxides, as I will show you in a few slides. At this point, let me review which response functions we can calculate with density-functional perturbation theory in a first-principles code such as ABINIT. These are written as second-order derivatives of the total energy with respect to an electric field, a phonon or a strain perturbation, and the resulting uniform response functions are well known: the dielectric tensor, the force-constant matrix or the elastic tensor when the two derivatives are taken with respect to the same perturbation, and the Born effective charges or the clamped-ion and force-response piezoelectric tensors when two different perturbations are mixed in the second derivative. Recently, we have extended the capabilities of density-functional perturbation theory in order to calculate a new set of linear-response properties. These new properties can be considered as the spatial dispersion of the uniform ones, and they are written as third-order energy derivatives: with respect to two of the previous perturbations, and a third derivative with respect to the momentum, the wave vector q. These are the new tensors that can be computed following this approach, and the calculation of the red ones has already been implemented in ABINIT. Indeed, these four tensors are the new quantities that we need in order to build the bulk flexoelectric tensor. There are also two additional spatial-dispersion tensors, related to the natural optical and acoustical activities, whose implementation is under way. To illustrate the performance of our approach and implementation, I have applied it to the case of the cubic strontium titanate perovskite.
In this system there are only three linearly independent flexoelectric coefficients, shown here in the table: they describe the polarization response to a longitudinal strain gradient, to a transverse one and, in the third row, to a shear strain gradient. In the rightmost column I show the total bulk flexoelectric coefficients for these three cases, whereas the four other columns show the decomposition into individual contributions. In the first and third columns I show the clamped-ion contributions to the electronic and the lattice-mediated flexoelectric tensors, and in the second and fourth columns the indirect contributions that become active in the presence of Raman-active modes. As you can see, for this high-symmetry material the indirect contributions vanish due to the high symmetry of the crystal. But what happens if we look at the lower-temperature polymorph of strontium titanate, which is a tetragonal crystal in which the oxygen octahedra undergo an antiferrodistortive tilt in the plane perpendicular to the tetragonal axis, as illustrated in the left figure? In this case we see how there is now a finite contribution from these indirect terms, which is a consequence of what is called rotostriction, by which the tilts of the octahedra are directly coupled to the uniform strain. Miguel, it's five minutes or so. Okay, thank you. At this point one would be tempted to analyze these numbers further; however, I must say that such an analysis would be worthless, and the reason is that the longitudinal and transverse coefficients are ill-defined: these numbers do not have a clear meaning on their own. Only the shear ones, shown in these three bottom rows, are free from this problem, as I will explain in the next slide. The problem is that the bulk flexoelectric coefficients are only defined modulo a constant that can be written in terms of the dielectric susceptibility and the linear variation with strain of a reference potential or energy level. This originates from the necessity of removing the macroscopic electric fields associated with phonons in the long-wavelength limit when calculating the flexoelectric tensor, which in practice boils down to imposing short-circuit electrostatic boundary conditions in the calculation. This is really necessary, because otherwise we cannot even write the flexoelectric response as a tensor. The problem then appears because, when imposing these short-circuit boundary conditions, we need to assume a reference potential, and the final result depends on this choice. The reason is that, in the presence of a strain-gradient deformation, the different energy levels of the system experience different tilts as a consequence of the deformation, which is something I have illustrated here for a slab of material in the presence of a strain-gradient deformation increasing linearly in this direction. Here I have drawn the conduction-band minimum, the valence-band maximum and the average electrostatic potential. In our implementation we take the electrostatic potential as the reference, but this is an arbitrary choice, made for practical reasons, and one could obtain equally valid results adopting one of the other two energy levels.
So, in the end, the meaning of this additional undetermined term is what we need to add to our calculated flexoelectric tensor precisely in order to shift the reference potential energy. This is directly related to the theory of absolute deformation potentials, and it shows that we can also access this other physical property by means of our implementation in ABINIT. The reason why the shear flexoelectric coefficients are free from this ambiguity is that, in our approach, they are extracted from the atomic displacements associated with a transverse phonon mode, and a transverse phonon does not generate macroscopic electric fields, so one does not need to worry about them. To conclude this presentation, let me say a few words about the surface contributions to the flexoelectric effect. To this end, consider a finite sample of a centrosymmetric material with surfaces. Even in a centrosymmetric material the surface will always be piezoelectric, because there the centrosymmetry is locally broken. When one applies a uniform strain to this slab, the induced dipoles at the two opposite surfaces compensate each other, because they point in opposite directions, and the result is zero macroscopic effect in total. But what happens in the case of flexoelectricity, for example if we bend this slab as shown here? Now the local strain at the two opposite surfaces is antisymmetric, and because of this the induced dipoles at the surfaces point in the same direction and contribute to the total flexoelectric response of the slab. Actually, today we know that the surface contributions can be as important as the bulk ones at any thickness of the sample. For example, in this paper it was shown that the surface and bulk contributions have opposite signs and cancel each other, to the point that the configuration of the surface can even invert the sign of the total response to the deformation. Moreover, the surface terms have an interesting property: they can be obtained as the linear variation with a uniform strain of the potential offset generated at the surface, and this quantity suffers from the same reference-potential ambiguity that I showed you previously for the bulk flexoelectric tensor. However, it enters with the opposite sign, which in the end leaves us with a total flexoelectric tensor, the sum of the bulk and surface contributions, that is well defined, as it should be. In case you are interested in learning more about how to incorporate the surface contributions to flexoelectricity following our approach, take a look at poster 130 by Matteo Springolo. To conclude: we can finally enjoy an efficient implementation of the calculation of the bulk flexoelectric tensor; one can go to the ABINIT web page, download the code and run the simulation. However, the take-home message is that the numbers one obtains in this way are not meaningful by themselves: one always needs to add something to them in order to obtain a measurable physical observable. And this something can be the reference-potential term, which provides direct access to the absolute deformation potentials of the material, or a surface contribution, which gives access to the total flexoelectric voltage response to a deformation gradient. That's all, thank you. Thank you very much, Miguel, and apologies for the one-man clapping.
So, it is very nice that Miguel has already introduced poster 130, where people can obviously ask more questions in a more informal setting. We have now a question by Francis. Can you say something, Francis?
Yeah, sure. Thank you, Miguel, very much. I have one question: are there any materials having a Van Hove singularity as flexoelectrics, and what effects would then be observable? Thank you very much.
Can you explain to me what a Van Hove singularity is, please? Yes: a Van Hove singularity means that once the Fermi surface hits the boundary of the Brillouin zone, you get a large density of states, like when you have superconductivity in graphene. Okay, you are talking about metals; all the theory I am showing here is valid for insulators. Okay, so it is for insulators, with no metallic structure there. Okay, thank you very much.
Okay, so I think we can have a second question. Can you say something? Do you hear us? Can you hear me? Yes, we can hear you. Thank you very much for your nice talk. I am trying to understand the behavior of the flexoelectric tensor as the number of layers increases: is there any special behavior when increasing the number of layers in two-dimensional materials?
You mean the layer thickness? Yes, exactly. Well, as I said, the bulk and surface contributions tend to compensate each other because they have opposite signs, so both of them will increase with the thickness of the sample. What we have observed, not directly with this implementation but in previous calculations, is that this compensation converges to a final value, which is the total flexoelectric response to that kind of deformation.
Okay, and in some materials, like MoS2, piezoelectricity is observed when going from bulk to two-dimensional; is the same true for flexoelectricity? Thank you very much for your answer. Flexoelectricity is always present, in any crystal, no matter the underlying crystal symmetry or the dimensionality of the sample. What we do observe when going from 3D to 2D is a reduction in the magnitude of the flexoelectric tensor. However, this does not mean that flexoelectricity is not important in 2D materials; it is just the opposite, because in 2D materials the flexoelectric tensor might be small, but the strain-gradient deformation that the material can sustain before fracturing is much larger than what can occur in 3D. That is the reason why one normally says that flexoelectricity becomes more relevant as the size scale is reduced. Thank you.
Okay, so thank you very much, and I am sorry for the other people who have indicated that they have questions: you can discuss with Miguel later at his poster. Thank you very much again. And now we come to the last two talks of the session, which are both about a topic that has gained more and more importance in recent years: machine learning and neural networks. So, if we can perhaps already put up the slides; can you share your screen perhaps? Hi everyone, can you hear me? Yes. Do you see my screen? Yes, very well.
So, the next talk is by, oh, I already have to apologize for certainly pronouncing your name wrong, Nongnuch. Is this right, more or less? That's close; actually just Nong is enough. It is by Nongnuch Artrith from Columbia University, and she will tell us about training neural network potentials including atomic forces via Taylor expansion. So please go ahead, and as with the other speakers I will warn you five minutes or so before the end. Please.
Yeah. Hello everyone. Thanks for the kind introduction, thanks to the organizers for organizing this great Total Energy Workshop, thank you for having me as part of it, and thanks everyone for joining us today. My name is Nongnuch Artrith; I am a research scientist in the Department of Chemical Engineering at Columbia University, part of the Columbia Center for Computational Electrochemistry. In this talk I will discuss our recent development on the efficient training of neural network potentials by including atomic forces via Taylor expansion, and its application to water and a lithium transition-metal oxide. This project is in collaboration with Dr. Cooper and Dr. Kästner from Stuttgart University, Germany, and Dr. Urban from Columbia University. Accurate simulations of large-scale systems are important: DFT methods provide accurate energies and forces, but to access real-world complex systems, efficient and accurate interatomic potentials are needed. So in this talk I will focus on machine-learning models for large-scale MD simulations. The general idea of machine-learning potentials, or MLPs, is to use machine-learning methods to represent the potential energy surface, or PES, interpolated from an accurate reference method, for example a quantum-mechanics database, so that the resulting MLP can then be used in MD or MC simulations. Our approach uses artificial neural networks for the interpolation, because they can be very accurate and can be trained on large data sets with hundreds of thousands of data points. The models only need atomic positions as input and are reactive and high-dimensional, but it takes a bit of effort to construct and validate them before they can be used in the intended applications. We follow the general idea of the high-dimensional neural network potential method by Behler and Parrinello. In this method, neural networks are trained to predict atomic energies, and the total energy is given by the sum of all atomic energies. The input for each individual neural network is a description of the local environment of a single atom, a structural fingerprint, which usually describes the local environment within a radial cutoff of six to eight angstroms, depending on the material. The neural network predicts the energy contribution of one atom, and the total energy is the sum of all atomic energies. Shown here is an example of a simple atomic-energy feed-forward neural network. On the left-hand side, this simple feed-forward neural network has three layers, an input layer, a hidden layer and an output layer, with artificial neurons, or nodes, represented like this. In our application, the input layer describes the local atomic structure and the output is the atomic energy; the number of nodes in one or more hidden layers determines the model complexity. The equations here on the right-hand side correspond to the same three-layer neural network.
The atomic forces can then be calculated from the analytical derivative, as in this equation. Each node, or artificial neuron, of the neural network applies a nonlinear activation function; this is an example with the hyperbolic tangent. Descriptors for the local atomic environments are important. The fingerprint, or descriptor, used by our neural network potentials is based on an expansion of the local radial distribution function (RDF) and angular distribution function (ADF) in a basis of Chebyshev polynomials. Here, phi_alpha, which you can see here, is the Chebyshev polynomial of order alpha, and c_alpha is the corresponding expansion coefficient. The local atomic structure, that is, the coordinates of all atoms within the cutoff radius R_c, is thereby described by two sets of coefficients, one for the radial RDF and one for the angular ADF. A second set of weighted coefficients is used to capture the local atomic composition, the atom types T, or chemical species: here the weights depend on the atomic species, so this part of the descriptor is element-specific. Sometimes it is easier to look at the equations than at the cartoons I showed in the previous slide: here are the equations of the Chebyshev descriptors for the local chemical species, or atom type T. The descriptor components are the expansion coefficients of the RDF and ADF, and the weights w_{t_j} are specific to the atomic species; the descriptor of the local structure is equivalent, but with all species weights equal to one. So far we discussed machine-learning potentials trained on energies only, but for MD simulations atomic forces are also important, right? So in this paper we proposed an efficient approach for training neural network potentials including atomic forces via a Taylor expansion. Neural network potentials can be trained efficiently on total energies, but small reference data sets, even accurate ones, often lead to potentials with significantly large force errors. Increasing the data set improves both the absolute values, on the left here, and the directions, in the right figure, of the predicted forces: on the left you can see that the tail of the error distribution disappears when we increase the number of data points, and the predicted directions of the force vectors also improve as the data set grows, especially for the small force vectors. In principle, neural network potentials can be trained on total energies and atomic forces together. For energy training, the loss function minimized during training contains the difference between predicted and reference energies; this is the additional loss-function term for the force errors. Note that the force acting on atom j is the negative gradient of the energy with respect to the position of atom j. Both can be combined, training on energies and forces at the same time; here a_E and a_F set the relative weights of the energy and force errors. Conventional neural network training methods make use of the gradients of the loss function: training on total energies requires only the first derivatives of the neural network energy, whereas gradient-based force training requires second derivatives of the neural network potential.
Those second derivatives make force training computationally much more demanding; it is very expensive and also requires much more memory, which can be a real issue for large neural network architectures. Additionally, the relevant cutoff for the force evaluation is twice as large as for the energy evaluation: the atomic energy depends on all atoms within the local atomic environment, that is, within the cutoff distance R_c of the central atom i, but the force acting on atom i depends on all atoms within twice this distance, 2 R_c. For further discussion of these methods you can see our previous publication in PRB 2012, where we also provide the derivation. This scaling makes force training computationally very expensive, especially for condensed phases, and this is why we developed an efficient approximation to the force-training approach: we use Taylor expansions to estimate the energies of structures with small displacements. On the left here we have energy-only training: we would like to reproduce the energy of the gray curve, having the reference data sampled as the blue points, and after using the neural network to predict the energies, you can see that the training points are fitted very well, but the predicted bond length and energies are wrong. But when we include force information in the training set, by introducing more data points using the Taylor expansion, the neural network prediction reproduces the target energy curve accurately. In our approach, additional atomic structures are generated via random displacements of the atoms of one of the original structures. The optimal magnitude of the displacement is system-dependent; for example, in this case, a water cluster, displacements of around 0.01 angstrom work quite well, as you can see in the figure. In this plot, the labels give the number of additional structures introduced per original structure, the x axis is the maximum random atomic displacement, and the y axis is the mean absolute error of the atomic forces in percent. The more additional structures are introduced by the Taylor expansion, the more accurate the potential, but the result also depends on the maximum random displacement; in this case, for water, 0.01 angstrom shows the best performance. Note that, when it works, direct force training can be more accurate than the Taylor expansion: for the small six-molecule water cluster, direct force training, the black curve here, with the green and blue being the Taylor expansion, requires about eight times more CPU time than training with our Taylor-expansion method. But as I mentioned already, for condensed or periodic structures, direct force training on all force components is typically very challenging or not feasible. We also tested this approach on a more challenging system: a lithium transition-metal oxide with five chemical species. For such a complex composition it is challenging to compile a really large reference set for neural network potential fitting, so approximate force training can reduce the number of reference structures required to construct such a complicated potential for these five chemical species.
As you can see in the left figures, the optimal atomic displacement for the Taylor expansion is around two to three times larger than for the water case I showed in the previous slide, which makes sense, because the interatomic distances here are around three times larger than the O-H or O-O distances in water. It's five minutes or so. Thank you, yeah. The approximate force training removes the high-error tails from the error distributions, which is expected to improve the stability of the neural network potentials in MD simulations. To test this improvement, we performed roughly one-nanosecond-long MD simulations, using a small reference data set of 800 structures. These 800 reference structures are not sufficient to obtain a good neural network potential using energy-only training, and the MD simulations are not stable, as you can see in the red and yellow MD trajectories; but training with the Taylor-expansion method on the same data set, the green trajectories, gives stable MD simulations. These preliminary potentials would still need further refinement for real applications; this is just shown as an example. The Taylor-expansion method has been implemented in our open-source aenet package and will be made available, with documentation, in the next aenet release. Just to let you know, aenet is free to download from this website and also provides a library that you can use to interface with standard software for MD and MC simulations, for example ASE, Tinker or LAMMPS. In summary, I hope I could demonstrate that, using this force-training approach, the size of the reference data set can be smaller, since more pieces of information per structure are used in the training, which means fewer expensive DFT calculations are needed. It also becomes easier to construct preliminary neural network potentials for MD simulations: training on small data sets, as in the cases I showed, already produces potentials that are stable in MD simulations. Finally, I would like to acknowledge my collaborators and the computing facilities. Many thanks to the current team members who are now working on machine-learning models for energy materials, and thanks also to our collaborators on other machine-learning projects: April Cooper from Stuttgart, Dr. Urban from Columbia University and Dr. Hybertsen from Brookhaven National Lab for the discussions. Colleagues in Professor Tom Markland's group at Stanford also helped us to interface aenet with LAMMPS for MD simulations, and thanks to Professor Kästner also for the force training by Taylor expansion. So thanks again to the organizers, and thank you all for your attention. If I still have time, I am happy to take questions and comments.
Okay, thank you very much; that was a very nice talk. Yes, we have some time for some questions. I think we will start with Amir Talibi; you should be able to speak now. Can you say something, please? Amir, yes, please say something. We cannot hear you right now. Okay, apparently you have problems with the microphone; then perhaps I can just read the question he has written: which functional did you use to generate your DFT data? So for water, we used BLYP with a long-range dispersion correction.
April Cooper produced the reference data for the water. And for the five-species lithium transition-metal oxide, the cathode material, we used the SCAN functional; that is the newer method we used there. Okay, thank you very much. Then we have another participant with some questions. Let me see; can you say something, please?
Hello, can you hear me? Yes, we can hear you; please ask your question. Yeah, I actually have two questions. The first question is about the descriptors you are using: what I see is that they work well for three-dimensional bulk structures, so I wonder, will they generalize to two-dimensional materials, or have they just never been tested on two-dimensional materials? Actually, in general there is nothing to prevent them from working with 2D materials. We have so far mainly worked on molecules, like water or gas-phase systems, which are of even lower dimensionality, and on condensed 3D matter like the crystalline lithium transition-metal oxide. For 2D you may only have to think about, for example, the reduced number of directions entering the angular part of the descriptor. So in my opinion our descriptors should be general for any kind of system, including 2D materials; I have no experience with 2D materials myself, so I am not sure, but you could try. Okay, thank you so much.
If I can ask another question: my second question is about what the basic guidelines are that you follow when you assemble the data points for creating your training set. This is a very good question, and we always get it. The selection really depends on your intended application: you have to make sure that the selected reference data represent a sampling that covers the range of energies and forces of the potential energy surface that you will actually explore in your MD or MC simulations. So you can have a small reference data set or a larger one; it differs depending on the complexity of your material and on the application. Okay, thank you very much. Thank you.
And perhaps we can close this with one very quick question from myself: when you learn forces and energies, is it guaranteed that one is always the exact derivative of the other? Because otherwise the problem might be that you get drifts in the energy when you do molecular dynamics. Okay, you are asking whether, if I have accurate energies, the forces might not be accurate? No, I mean, you learn both, not perfectly, but to a very good approximation, as you have shown; but is one really always the exact derivative of the other? In this approach, yes: as I showed, our method only uses the reference gradients to estimate, via the Taylor expansion, the energies of slightly displaced structures, so the training itself is still on energies, and the forces of the trained potential are the analytic derivatives of the energy model. Okay, I think we would have to discuss this a bit more at length. So thank you very much again for this very nice talk and this contribution.
And I'm sure that you are also available to all participants if they want to ask more questions, because I see we have many, many more. Any emails, I am happy to discuss and respond to all questions. Thanks. Very good, thank you very much. Okay, so we come to the last talk of this session; it is again about machine learning. The speaker is Guglielmo Mazzola from IBM Research in Zurich, and it is about machine-learning the physics of dense hydrogen. Okay, please go ahead, and I will interrupt you five minutes or so before the end. Thank you very much.
Thank you very much for having me here, because this is very special for me: the Total Energy workshop was the first conference I ever attended, when I was a first-year PhD student at SISSA ten years ago, so I am very happy to have the possibility to present this work today. My daily work at IBM Zurich is quantum computing, but here I want to show you some progress on a physical system that I have studied since my PhD, dense hydrogen, using a new technique: machine learning. So let's go to the outline; first of all, I hope that you can see my screen, right? Yes, yes, we can see. Okay. I want to keep this talk quite simple and high-level. I will give you a brief overview of why dense hydrogen is still studied today, then show you machine-learning-powered simulations, conceptually the same as those we just saw in the previous talk, and then show how these enabled some new computational discoveries on this long-studied system. The motivations to study dense hydrogen today are essentially three. First, dense hydrogen is still of fundamental interest in condensed matter: for example, Ginzburg placed understanding and synthesizing metallic hydrogen among the top three problems to be solved in this century, above dark matter, for example. But this system is very complicated, because it lies at the interface of lattice models and materials modeling: you still have strong correlation, but also the complications arising from a real material, for example nuclear quantum effects. As a result you have a lot of phase transitions, structural but also electronic, and this makes the problem very hard also from the theory side. Second, technological applications of hydrogen are also important: we know that last year a room-temperature superconductor was finally discovered, and it is indeed a dense hydrogen-rich system at megabar pressures, exactly the same pressures that I am going to discuss in this study. Third, these same megabar pressures occur inside giant planets like Jupiter and Saturn, so the physics of hydrogen is deeply connected with planetary science. Let me walk you through the most recent phase diagram; the situation is rapidly evolving, because, as I said, there is still an ongoing experimental and theoretical effort on this system. At relatively low pressure, hydrogen is described as a system of molecules: there is a molecular fluid and, at low temperature, the molecular solid, which is the blue region. Then, upon compression, it was predicted almost a century ago that hydrogen should metallize, so we should have metallic hydrogen. A lot of experimental effort has been devoted to seeing metallic hydrogen, but the molecular solid remains insulating up to at least 300 GPa, which is this blue-to-white boundary on the right.
What has instead been synthesized, or at least observed, is metallic hydrogen in the liquid phase, at this black point here, although the position and the mechanism of the metallization are still debated. Another important feature of the phase diagram in this regime is the re-entrant melting line: if you place yourself at around 800 Kelvin and these kinds of pressures and compress the solid, you obtain the liquid again. As a matter of fact, the region where the molecular fluid, the atomic fluid and the solid intersect is still unknown: experiments cannot reach these pressures at these temperatures, and theoretical methods are also very challenging there, so predictions are not so simple. Indeed, the path to metallization within the molecular solid is still unknown, and this is why the problem is interesting. What I show you here as the blue region is a collection of many phase boundaries in the solid; we know these phase boundaries experimentally, but the crystalline structures are difficult to determine experimentally because of the weak x-ray scattering of hydrogen, and there are competing experiments claiming to observe hydrogen metallization at different positions in the phase diagram. As a result, there are still many open questions about how and where hydrogen metallizes, realizing the dream of Wigner and Huntington. Finally, the connection with planetary science is clear: the adiabats of Jupiter and Saturn cross the proposed metallization transition in the fluid, namely there exists a metal-insulator transition inside these planets. The character of this metallization is really important, because planetary modelers desperately need a first-order transition somewhere to justify a posteriori their three-layer models for these planets, and the metallization of hydrogen has always been postulated as the boundary between the outer and the inner liquid hydrogen layers. Moreover, the metallization should trigger several phenomena, for example helium rain, a mechanism postulated to explain the anomalous luminosity of Saturn: helium demixes in metallic hydrogen, creating droplets that fall towards the center of the planet, heating it and explaining the excess energy that we see, which makes Saturn look two billion years younger than it really is. So the physics of hydrogen is deeply connected with planetary science in several ways, and this hopefully justifies the fact that in this talk we are only looking at the liquid-liquid phase transition. Indeed, this transition is still debated, and there are experimental observations by different groups that are scattered over a 200-gigapascal range, that is, a two-megabar pressure range. These experiments differ: they use different proxies to signal a first-order transition, for example reflectivity or heating curves, and they also use physically different methods to compress hydrogen. But if we assume that there is one first-order transition, this should not matter, because then only one set of experiments can be correct and the others must have some systematic error due to the experimental process. Therefore, a lot of effort has been put forward on the theory side to try to assist the experiments using first-principles molecular dynamics.
Unfortunately, this system is very challenging because, as you see, if I change the electronic solver even within DFT, say between PBE and a van der Waals functional, my prediction for the metallization shifts by on the order of 100 GPa. But at least all the simulation methods agree on the character of this transition, which is first order, at least below 2000 Kelvin. So, while the position is still debated, theory at least agrees on the existence of a first-order liquid-liquid transition.

In this talk I will instead provide evidence for the opposite, without even claiming that the problem is in the electronic solver. What we saw is that the problem lies in the first-principles molecular dynamics setup itself. Indeed, what do you do when you want to probe this kind of phase transition? You perform first-principles molecular dynamics, with PBE for example. But it is also true that there is an ambiguous distinction between a liquid and a defective solid when the system size is small, and that picosecond-long simulations, which are standard for DFT molecular dynamics, are too short to overcome nucleation barriers. And indeed, what we see at various temperatures and pressures is that, if you look carefully at the simulations and average the hydrogen positions over a window of 0.2 picoseconds, the atoms only fluctuate around a crystal structure. Most importantly, this spurious solidification happens in what is supposed to be a simulation of the liquid, and it happens very close to the putative first-order liquid-liquid transition predicted by standard molecular dynamics methods. So this casts doubt on the very existence of this first-order liquid-liquid transition, also because these points and this line lie very close to the melting line, which, as I mentioned, is anomalous: it is re-entrant. So it is time to perform a systematic study and look at the system with different methods and a different perspective.

What more can we do? We know that, unfortunately, there is a separation of size and time scales between first-principles simulations and empirical models. Quantum Monte Carlo and DFT can tackle on the order of 100 electrons, and if you equip them with a molecular dynamics integrator you can reach something like picosecond time scales. Empirical models can easily handle tens of thousands of particles for nanoseconds, but you lose all the first-principles accuracy; you lose the chemistry. Now, since you remember the title of my talk, I will tell you that machine learning can be a tool to bridge these two worlds. I will make use of the nice introduction in the previous talk, skip the details, and just give you an overview of the method. The setting is supervised machine learning. Just as in a standard supervised learning task you learn a mapping between an image and a number, a label for example, here, using the framework of Behler and Parrinello introduced several years ago, we ask the machine to learn the mapping between a structure and its energy, essentially bypassing the need to solve the Schrödinger equation at each ionic step.
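To make the Behler-Parrinello mapping concrete, here is a minimal, self-contained sketch in Python (NumPy plus PyTorch) of an energy-only neural network potential: radial symmetry functions as per-atom descriptors, one shared atomic network, and the total energy as the sum of atomic contributions. All names and numbers here are illustrative assumptions; the Morse-like toy labels stand in for the DFT or QMC energies a real workflow, such as the n2p2 setup described next, would use.

    import numpy as np
    import torch
    import torch.nn as nn

    # Toy data: small random configurations with Morse-like pair energies as
    # stand-in labels (a real workflow would use DFT or QMC energies instead).
    rng = np.random.default_rng(0)
    N_ATOMS, N_CONFIGS = 8, 200
    MUS = np.linspace(0.8, 3.0, 12)   # centres of the radial symmetry functions
    ETA, R_CUT = 4.0, 3.5             # Gaussian width and cutoff radius

    def cutoff(r):
        # Smooth Behler-style cosine cutoff, zero beyond R_CUT
        return np.where(r < R_CUT,
                        0.5 * (np.cos(np.pi * np.minimum(r, R_CUT) / R_CUT) + 1.0),
                        0.0)

    def descriptors(pos):
        # G_k(i) = sum_j exp(-eta (r_ij - mu_k)^2) fc(r_ij): one vector per atom
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(d, 10 * R_CUT)            # exclude self-interaction
        g = np.exp(-ETA * (d[..., None] - MUS) ** 2) * cutoff(d)[..., None]
        return g.sum(axis=1)

    def toy_energy(pos):
        # Illustrative label: sum of Morse pair terms over all atom pairs
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        r = d[np.triu_indices(len(pos), k=1)]
        return float(np.sum((1.0 - np.exp(-(r - 1.2))) ** 2 - 1.0))

    configs = [rng.uniform(0.0, 3.0, size=(N_ATOMS, 3)) for _ in range(N_CONFIGS)]
    X = torch.tensor(np.array([descriptors(p) for p in configs]), dtype=torch.float32)
    y = torch.tensor([toy_energy(p) for p in configs], dtype=torch.float32)

    # One shared per-atom network; the total energy is the sum of atomic
    # energies, which is what makes the model size-extensive.
    atomic_net = nn.Sequential(nn.Linear(len(MUS), 16), nn.Tanh(),
                               nn.Linear(16, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(atomic_net.parameters(), lr=1e-3)
    for epoch in range(500):
        opt.zero_grad()
        e_pred = atomic_net(X).sum(dim=(1, 2))    # sum atomic contributions
        loss = nn.functional.mse_loss(e_pred, y)
        loss.backward()
        opt.step()
    print(f"final training MSE on total energies: {loss.item():.4f}")

A production potential would also fit forces (gradients of the network output with respect to atomic positions) and include angular symmetry functions, but the structure-to-energy mapping described in the talk is exactly this.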
And if you feed it a DFT dataset computed with PBE, the neural network should learn exactly the PBE description of the system; if you feed it a different solver, the neural network should learn that higher level of theory. We used the n2p2 code, which is a tool to perform this kind of learning. We constructed several generations of the machine learning potential, and everything is open source, in the repository of Bingqing Cheng, the first author of the paper; she was a PhD student in Michele Ceriotti's group and is now in Cambridge. So everything is available for your own testing as well. We performed this machine learning potential training, and the potential can then be interfaced with LAMMPS to run these large-scale simulations. We also performed benchmarks, of course, but in the interest of time we will skip this part, although it is very important. Five minutes, more or less. Okay, thank you.

Then let's jump to the results. First, let me say it again: we now have this machine learning potential that allows us to run simulations of thousands of atoms, 400 picoseconds long, so we can scan the phase diagram with a much finer mesh than was possible before. Here we plot the average order parameter, which is the molecular fraction: blue areas are molecular, fluid or solid, while red areas are atomic. First, we rediscover the re-entrant melting line of hydrogen. We also rediscover the candidate crystal structures for the system; again, remember, without any human intervention, just by cooling the liquid. Then we focus on the liquid-liquid transition, and we see that there is no sharp boundary between the blue and the red areas. This is already an indication that maybe there is no clear first-order transition, and we substantiate this claim by running, for the first time, an ab initio thermodynamic model of the system.

And since the simulations are now quite inexpensive, we can even run metadynamics to artificially enhance the molecular dissociation. So we can compare our simulations with a regular solution model, where Delta G is the chemical potential difference between the atomic and the molecular fluid, the molecular fraction x is the order parameter, and Omega is the non-ideal term of the mixing free energy. We run metadynamics simulations at different x, P and T, and obtain Delta G and Omega by fitting. Now this is getting a little technical, but just follow me: we then fit these trends once more to extract two curves. One is the coexistence line, where Delta G equals zero, which is the temperature at which the two fluids are equally stable; the other is the phase separation line, the temperature below which the two fluids phase-separate.
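For reference, a textbook regular-solution free energy of mixing consistent with the quantities named here (x the molecular fraction, Delta G the molecular-atomic chemical potential difference, Omega the non-ideality parameter) can be written as follows; this is an illustrative form, and the exact parametrization used in the published work may differ in detail:

\[
G_{\mathrm{mix}}(x; P, T) = x\,\Delta G(P,T) + \Omega(P,T)\,x(1-x)
+ k_{\mathrm{B}}T\left[\,x\ln x + (1-x)\ln(1-x)\,\right]
\]

The coexistence line is where \(\Delta G(P,T) = 0\), and the phase-separation (spinodal) boundary follows from \(\partial^2 G_{\mathrm{mix}} / \partial x^2 = 0\):

\[
-2\,\Omega + k_{\mathrm{B}}T\left(\frac{1}{x} + \frac{1}{1-x}\right) = 0
\quad\Longrightarrow\quad
k_{\mathrm{B}}T_c = \frac{\Omega}{2} \ \ \text{at } x = \tfrac{1}{2}.
\]

So, under this assumed form, the two fitted functions Delta G(P,T) and Omega(P,T) directly give the two curves of the construction: equal stability of the two fluids where Delta G vanishes, and demixing only below a temperature set by Omega.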
Thanks to this machinery, we can draw these lines in the phase diagram, and we see that their intersection, this orange star, lies at quite low temperature and high pressure, certainly not around 2000 Kelvin, as was predicted with standard ab initio molecular dynamics. If we now zoom in a little more, there are three lines again. One, very important, is the melting line, the purple one, which has been rediscovered by the machine learning potential. Then there are the other two lines, which meet at the critical point, and the critical point lies really close to the melting line. This also allows us to provide an explanation and to reconcile the experiments: if you assume that the critical point sits down there, with these Widom lines emanating from it, then the fact that the various experimental setups trace different quantities becomes understandable; different quantities show their apparent discontinuities at different places in the phase diagram, and we see that they all extrapolate nicely towards this critical point.

One more methodological aspect: if we had wanted to perform the same kind of computational experiment with normal DFT molecular dynamics, we would have needed several hundred million CPU years; it would be impossible. So this shows you that this machine-learning-powered strategy can really enable this kind of very fine and, let's say, accurate computational study. Just to mention that related work was published some weeks later: one study again on hydrogen with machine learning, and another by Roberto Car and Debenedetti on a conceptually similar system, the study of a liquid-liquid transition, but this time in water, using the same concepts.

Now the outlook. There are a lot of things that can be done starting from here. We can start looking at the solid, and really scan the phase diagram more extensively and more accurately; you can add more elements, for example helium, so you could directly probe the helium rain, maybe, or also put in other elements such as carbon; or you can improve the accuracy. I believe this kind of study is also a great opportunity for electronic structure solvers, because these machine learning methods make such studies really inexpensive. For example, you can use better, more accurate exchange-correlation functionals, or use quantum Monte Carlo. And we know that for hydrogen, while we believe our results are qualitatively correct, because we also ran different exchange-correlation functionals, if you really want a quantitative understanding of the phase diagram you have to train your neural network with the best electronic solver on the market.

And with this, thank you for your attention. I really want to thank all my collaborators, who were essential in this study: from the planetary science side, Ravit Helled and Ronald Redmer; from the machine learning side, which is the core of this talk, Bingqing Cheng, Chris Pickard and Michele Ceriotti; and I want to thank as well Sandro Sorella, who was my PhD supervisor and introduced me to the physics of hydrogen some years ago. So thank you very much, and I hope I have been on time, more or less.

Well, it was a very nice talk, Guglielmo; it is nice to see what one can do now that these machine learning methods are out of their infancy. We have a lot of questions and little time, as we are already far beyond the end of everything, but I think we can have one quick question, perhaps the one by Nathan Delman. Wait a minute. So Nathan, can you talk, and can you hear me now? Yes, we can hear you, yes. Okay, thank you. Thank you very much, Guglielmo, for the great talk and for showing how powerful these neural network applications can be. I have two small questions. Firstly, considering that you are sampling under such extreme conditions, it is always possible that the neural network moves outside of the region where it was originally trained. Could you elaborate a little on the tests that you did to ensure that there is no extrapolation going on and that the model is robust? Yeah, indeed.
At the beginning we trained only on liquid structures. But it was nice that, even training on liquid structures, we were able to recover the re-entrant melting line very well; namely, already in the liquid you can learn the interatomic environments of hydrogen. Then, going more into the details of your question: yes, we built several generations of this machine learning potential, including more configurations where it was needed, for example along the believed liquid-liquid transition line. And then we performed systematic tests against, let's say, normal DFT simulations at sizes that are possible to explore. So yes, what you say is certainly true, and we took our precautions.

Okay, so you mean that you recalculated some of the predicted configurations in DFT. Yeah; for example, we were really able to start from already published configurations and reproduce the whole molecular dynamics run with the machine learning potential. You can really see that the purple line falls exactly on the green line, which is a trajectory from a previously published work. So this is quite an amazing test, I would say. That's nice, that seems very robust. Then my second question was... Sorry Nathan, but I think we really have to interrupt you, because we are really very far beyond any time allocated to this. I am sure you can get in contact with Guglielmo and ask your question, and the same is true for all the other questions still open here. So thank you again very much for this talk, and for all the talks; it was a very nice session this afternoon.

And yeah, perhaps Nicola, I don't know how you want to... Well, I think we deserve a 10-minute break, and then we can go to the Zoom rooms for the poster session, so I suggest we start in 10 minutes. That will be more interactive. You have all the links in the emails you received, so see you there, and see you tomorrow at the same time here for the second day of this workshop. Thank you very much and see you around. Goodbye everyone. Bye bye.
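The validation loop described in the answer above, recomputing configurations with the reference DFT solver and feeding problematic ones into the next generation of the potential, can be sketched in a few lines of Python; the filenames, units, and tolerance below are hypothetical placeholders, not the actual workflow of the work presented:

    import numpy as np

    # Hypothetical inputs: per-configuration energies from the ML potential
    # and from single-point DFT recalculations of the same configurations.
    e_nn = np.loadtxt("energies_nn.dat")     # eV/atom, ML-potential predictions
    e_dft = np.loadtxt("energies_dft.dat")   # eV/atom, DFT reference values

    resid = e_nn - e_dft
    rmse = np.sqrt(np.mean(resid ** 2))
    print(f"RMSE vs DFT: {1000 * rmse:.2f} meV/atom")

    # Flag configurations whose error exceeds a tolerance; these become
    # candidates for inclusion in the next training generation, mirroring
    # the "several generations" of the potential mentioned in the answer.
    TOL = 0.005                               # 5 meV/atom, illustrative threshold
    flagged = np.flatnonzero(np.abs(resid) > TOL)
    print(f"{flagged.size} configurations flagged for retraining")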