fathers of our electronic structure community as we know it today, not just for the tools (he is one of the founders of the Quantum ESPRESSO distribution, a code that many of us use in our everyday research) but also in terms of theoretical frameworks. He has developed a number of them, starting from linear-response theory for phonons within density functional theory, then moving closer to excitations with methods like the turbo-Lanczos approach in time-dependent density functional theory, later applied to GW and the Bethe-Salpeter equation, where you basically avoid any spectral representation of the Hamiltonian and sums over empty states, and lately also a reformulation of thermal transport in ionic systems. So overall it is a real pleasure to have Stefano here, and he is just the right person to open this school. Stefano, the stage is yours. That's fine, I'll use it; actually I don't need it, because I usually speak too loud without any microphone and people keep complaining. I was very happy to accept the invitation, of course I'm honored, but one usually asks the oldest guy in the audience to open the school, and that's why I was asked. Actually, asking the oldest person in the audience has some risks, because old men sometimes need physiotherapy, and that's why I missed the first lecture. Now I'm in good shape, I have been properly manipulated, and I'm here for the whole 40 minutes with you to talk about, well, not materials science, too broad; not even computer simulations in materials science, too broad as well; just a very, very small angle of some technical questions and some general ideas as an introduction.
And I will finish, hopefully in the last 10 minutes or so of my talk, by introducing techniques within density functional theory that allow you to compute spectral properties of materials that are due to lattice fluctuations, to lattice vibrations. These are not quite what you will be learning this week, but in a sense they are preliminary to it. Infrared and Raman spectroscopy in materials are mostly describable in the adiabatic approximation, where you pretend that the electrons stay in their ground state while the nuclei vibrate, so that you can deal with all of those phenomena within strictly ground-state density functional theory. I will introduce a few general concepts of perturbation theory that may or may not turn out to be useful for you in understanding what will be the main core of this course, that is, excitations due not to nuclear and lattice vibrations but to charge fluctuations that break the adiabatic approximation; you will see what that is about in the rest of the week. Every experience of ours is located in space and time, right? And when you ask yourself how to study a phenomenon, how to simulate a process, you have to ask yourself first of all on which time and length scales those phenomena occur. For instance, the tools that are appropriate to describe phenomena on cosmological time and length scales most likely won't be appropriate to study molecular vibrations, and vice versa. So I think it is a good idea to frame our minds onto the length and time scales that characterize the processes occurring in materials.
By and large, you can think of phenomena that occur at the macroscopic scale, which is the realm of classical mechanics, where you can disregard all quantization phenomena and treat materials as continuous media. Those are phenomena that are directly accessible to our senses, things that you can touch, watch, and listen to. This is by and large the realm of 18th and 19th century science and of much of modern engineering (not all of it, but much). The length scales there go from millimeters to kilometers, something you can see and count by yourself, and the time scales are fractions of a second up to seconds, hours, or longer. On the opposite end of this length and time scale you have phenomena occurring at the molecular scale, which occur on times that are a fraction or a small multiple of a picosecond (a picosecond, a millionth of a millionth of a second, is the typical vibrational time scale of molecules and solids) and on length scales that are a fraction or a small multiple of a nanometer, a billionth of a meter. Those are phenomena that are affected dramatically by quantum effects. In between there is some kind of terra incognita: we know by now rather well how to deal with quantum phenomena at the nanoscale, we have known for many decades, or a few centuries, how to deal with macroscopic phenomena better and better, and in between, the regime where the quantum world reaches and matches the classical one is something that we still do not know exactly how to deal with. It is the realm of decoherence, where the coherence that is typical of quantum phenomena is lost through the interaction with the classical world; the mysterious world of quantum measurement and the like.
Of course, different length and time scales, not surprisingly, require different theoretical tools. We go from the macroscopic scale, the scale of classical electromagnetism, classical thermodynamics, and finite elements, down to methods that are characterized by a high degree of stochasticity, needed to account for phenomena that occur on time scales much longer than that of the elementary molecular events at the nanoscale but much shorter than macroscopic scales: this is the realm of kinetic methods, such as kinetic Monte Carlo. Going down in time and length scales you have classical molecular dynamics, where you disregard the quantum nature of the interaction between atoms and molecules and you account for the motion of the nuclei by simply solving Newton's equations of motion. Going further down we arrive at the realm of electronic structure methods, which are the topic of today's lecture and this week's course. Of course size, length, and time matter, and they go with different methods; but different methods have different numerical resolutions and different predictive power, according to the approximations that have to be implemented in them and to the numerical complexity that you have to face in order to solve their equations.
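Solving Newton's equations of motion numerically, as classical molecular dynamics does, is typically done with a time-stepping integrator such as velocity Verlet. Here is a minimal sketch for a single particle; the harmonic force and all parameter values are illustrative, not from the lecture:

```python
import math

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equation m*a = F(x) with the velocity-Verlet scheme."""
    f = force(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt        # velocity update with averaged force
        f = f_new
    return x, v

# Illustrative example: harmonic oscillator, F = -k*x, period T = 2*pi*sqrt(m/k)
k, m = 1.0, 1.0
x, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, mass=m,
                       dt=0.01, n_steps=int(2 * math.pi / 0.01))
# After one full period the particle returns close to its starting point (x ~ 1, v ~ 0)
```

The same update rule, applied to thousands of interacting atoms with forces from an interatomic potential, is the core loop of any classical molecular dynamics code.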
Usually, when you increase the accuracy, and so the resolution, you are forced to decrease the system size. What I mean is that for very large systems you have to adopt methods that are necessarily not as accurate as those you would use for small systems; high accuracy goes with small systems, and large systems go with low accuracy. Going in order of increasing accuracy and decreasing system size, you go from classical empirical methods such as pair potentials, force fields, and shell models, where you totally disregard the quantum nature of the chemical bond; then you can introduce some degree of quantum behavior using semi-empirical methods, which are however based on model quantum Hamiltonians such as tight binding or embedded-atom methods; and then, in order of increasing accuracy, to quantum self-consistent methods such as Hartree-Fock, which you just learnt about from Andrea, or density functional theory, which I will address in the next half hour or so. Hartree-Fock is an approximate method; density functional theory in principle is not, in principle it is a rigorous theory, but its implementation requires severe approximations which necessarily limit its accuracy and scope of applicability. So in order to increase the accuracy further you have to resort to methods where the wave function nature of quantum states is taken explicitly into account, either directly, using wave functions as in quantum chemistry methods, or using a representation in terms of Green's functions, which will be the main topic of this week. Self-consistent methods are implemented in Quantum ESPRESSO, and these Green's function based quantum methods are implemented in Yambo, which we will learn how to use this week. So we are all here to learn how to do ab initio calculations; one hears time and again, over and over, that this or that computation has been performed from first principles, or ab initio.
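As a concrete instance of the cheapest rung of this ladder, the classical pair potentials mentioned above, here is the standard Lennard-Jones form; the parameter values (epsilon and sigma in reduced units) are placeholders, not from the lecture:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy: U(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum of the potential sits at r = 2**(1/6) * sigma, with depth -epsilon,
# and the energy crosses zero exactly at r = sigma.
r_min = 2.0 ** (1.0 / 6.0)
```

Everything about the material's bonding is encoded in two fitted numbers, which is exactly the bias ab initio methods try to avoid.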
What does this expression technically mean? Chemists and physicists give slightly different meanings to it. What we physicists, and more and more chemists now, mean by ab initio methods are methods that allow one to simulate the properties of materials using the basic laws of nature, that is, the Schrödinger equation and Maxwell's equations (so quantum mechanics and classical electrodynamics), and the chemical composition as the sole input ingredients. This means that in principle you do not use any information from experiment, or as little information as possible. Why is this a good idea? Not because of narcissism. Many of us scientists have a narcissistic attitude: we want to show how clever we are, how good we are at solving difficult problems, so we do things ab initio, we close our eyes, we don't ask any help from anybody in the laboratory, and still we are able to predict. This is true to some extent, but it is not the real motivation. The real motivation is that predictivity goes with lack of bias. When you have some empirical method, the model that you build is biased by the ingredients that you put into it, and you do not know a priori how a model built using the knowledge of some specific system, in some specific physical and chemical conditions, would apply to systems with a different chemical composition, or even with the same chemical composition at very different physical and chemical conditions, for instance in a different bonding environment or at a very different pressure or temperature. Ab initio methods may be wrong; they are not necessarily good, and you have to learn how to tell whether they are good or not. But even when they are wrong, they are wrong in an unbiased way. So once you get acquainted with how to estimate the accuracy of those methods, you can be, not totally sure, but increasingly confident that a calculation
that you do for a system that nobody has ever studied will have an accuracy that you can appraise, more or less, from the experience gained on systems different from the one you are considering. Of course, those methods are expensive: DFT is expensive, and many-body perturbation theory is much more expensive than DFT. So we have this narcissistic attitude, but narcissism has to be justified (and it has to be hidden, because it is not very good to show that you are narcissistic). The justification is that you don't want to tackle a problem just because your supervisor wants you to; you have to be sure that the problem you are given is worth your time, and the problem is worth your time if no other simpler and cheaper approximation would apply, and if the approximations that you necessarily have to make have a good chance of giving a result you can trust. I often hear at conferences people of your age (and I envy them, I would like to be your age) who, when asked "you are applying a method that is well known not to be good for the system you are dealing with", give the answer "well, there is no better method". This is not a good answer. If the method that you have is not good enough to solve your problem, either you improve the method or you change the problem. It is a shame to waste your time, the time of your supervisor, and the money it takes to pay for computers, to apply a method that does not solve any problem, because you don't trust the numbers that you get out of your calculations. Our tool is of course the computer: we are computational physicists or chemists. But computers are not enough; computers need clever algorithms, and clever algorithms, in order to be devised, require you to understand the physics and the math of the problem
and you have to become as familiar as possible with the math and know what the codes you are using really do. Codes have to be efficient, and the results that you get must not clash with common sense and scientific rigor: you always have to know what the error is, how to estimate the error bar, how likely it is that the number you get allows you to really answer the clever, important question that you asked in the first place. Ab initio simulations mean different things to different communities. In our community, the holy grail, the thing everybody would like to be able to solve, is the time-dependent Schrödinger equation that regulates the time evolution of a wave function depending on a zillion nuclear coordinates, which I indicate here with capital letters, and several zillions of lowercase electronic coordinates. You have the kinetic energy of the nuclei, the kinetic energy of the electrons, and the potential energy that accounts for the interaction of the nuclei with the electrons, of the electrons among themselves (that damned interaction which gives rise to correlation effects), and of the nuclei among themselves. The very first important approximation, which is almost invariably made, is the following; very important phenomena are due to the breakdown of this approximation, but we will not address that breakdown here, nor, I think, elsewhere in this course (maybe some electron-phonon interactions, I don't know if you will cover them). Basically, because the nuclei are much heavier than the electrons, you can in a first approximation neglect the kinetic energy of the nuclei with respect to the kinetic energy of the electrons, and if you do so you break the time-dependent Schrödinger equation into a set of two coupled equations. The first one is a time-independent Schrödinger equation for the electrons that depends parametrically on the nuclei
Parametrically means that the electronic Hamiltonian does not contain any differential term in the nuclear coordinates, so for every set of nuclear coordinates you have a well-defined electronic Hamiltonian and an electronic wave function that depends parametrically on the nuclear positions. Because the Hamiltonian depends parametrically on the nuclear positions, the eigenvalues of the Hamiltonian depend on the nuclear positions as well, and it is exactly this dependence that gives rise to the interatomic forces. Because the nuclei are heavy, to a good approximation you can many times deal with them classically and solve classical Newton's equations of motion for the nuclei, which say that the acceleration of each nucleus is proportional to the force acting on it, and the force is nothing but minus the derivative of the energy, which depends parametrically on the nuclear coordinates, with respect to those coordinates. So in principle you have to solve this time-independent Schrödinger equation for the electrons alone, which is much simpler than the previous equation because you have disposed of the nuclear degrees of freedom, but is still very complex. The reason for this complexity is the existence of the electron-electron interaction, which is such that the eigenfunctions of this Schrödinger equation cannot be written as a product, or even an antisymmetrized product for that matter, of one-particle orbitals. So you cannot solve this equation one electron at a time, which is what you do in Hartree-Fock. In Hartree-Fock you can attach a molecular orbital, a one-electron state, to each electron, and the problem is much easier to solve.
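The two coupled equations just described can be written compactly (the notation is mine, matching the description above: capital R for nuclear coordinates, lowercase r for electronic ones):

```latex
% Time-independent electronic problem, parametric in the nuclear positions R
\hat{H}_{\mathrm{el}}(\mathbf{R})\,\Psi(\mathbf{r};\mathbf{R})
  = E(\mathbf{R})\,\Psi(\mathbf{r};\mathbf{R})

% Classical Newton dynamics for the nuclei on the resulting energy surface
M_I\,\ddot{\mathbf{R}}_I = \mathbf{F}_I
  = -\,\frac{\partial E(\mathbf{R})}{\partial \mathbf{R}_I}
```

The eigenvalue E(R) of the first equation is the potential energy surface whose gradient supplies the interatomic forces in the second.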
In density functional theory, technically you do the same thing (conceptually it is very different from Hartree-Fock): you replace the effect of the electron-electron interaction with an effective one-electron potential, which however, as in Hartree-Fock, depends on the solution itself. How does it depend on the solution? Through the orbitals. Because the system is now non-interacting (you no longer have an electron-electron interaction, only an external potential), you can solve the Schrödinger equation one electron at a time, so that you can attach a molecular orbital to each individual electron, and the sum of the square moduli of those molecular orbitals is the so-called electron charge density distribution, upon which the effective potential depends. So you have to solve this equation self-consistently: the wave functions depend on the potential, the potential depends on the density, and the density depends on the wave functions. I will skip the technical part. Basically, in density functional theory this potential that depends on the density is the external potential given by the system (the potential exerted on the electrons by the nuclei, or by some external field), plus the Hartree potential, which is the electrostatic potential generated by the ground-state charge density distribution rho, plus the so-called exchange-correlation potential, which depends on the density and on the position in space. When Andrea was speaking earlier about Feynman and correlations, he forgot (whether on purpose or for some other reason, I don't know) to give Feynman's definition of the correlation energy.
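The self-consistency loop just described (orbitals give a density, the density gives a potential, the potential gives new orbitals) can be sketched as a fixed-point iteration on a toy matrix Hamiltonian. Everything here, the 3x3 matrix, the density-dependent diagonal potential, and the mixing factor, is an invented illustration of the iteration scheme, not a real Kohn-Sham implementation:

```python
import numpy as np

def scf_loop(h0, coupling=0.1, tol=1e-10, max_iter=200):
    """Toy self-consistent field: H[rho] = H0 + coupling*diag(rho),
    iterated with simple linear mixing until the density stops changing."""
    n = h0.shape[0]
    rho = np.full(n, 1.0 / n)             # initial guess for the density
    for _ in range(max_iter):
        h = h0 + coupling * np.diag(rho)  # the potential depends on the density
        eps, c = np.linalg.eigh(h)        # diagonalize the current Hamiltonian
        rho_new = np.abs(c[:, 0]) ** 2    # density from the lowest orbital
        if np.max(np.abs(rho_new - rho)) < tol:
            return eps[0], rho_new        # self-consistency reached
        rho = 0.5 * rho + 0.5 * rho_new   # linear mixing for stability
    raise RuntimeError("SCF did not converge")

# Illustrative 3x3 "Hamiltonian"
h0 = np.array([[ 0.0, -1.0,  0.0],
               [-1.0,  0.0, -1.0],
               [ 0.0, -1.0,  0.0]])
energy, density = scf_loop(h0)
# density stays normalized and the iteration converges to a fixed point
```

A real plane-wave DFT code does exactly this dance, only with matrices of dimension in the tens of thousands and a much more elaborate density-to-potential map.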
Andrea, you know Feynman's definition: it is at page 33 or so of his textbook on statistical mechanics. He comes to the electron gas, and there he speaks of the correlation energy, and there is a note where he calls the correlation energy not E_c but E_s. Why s? It is the stupidity energy: the name that we give to what we cannot compute. Feynman said it, not me. OK, so DFT gives you a recipe to compute the stupidity energy, the stupidity potential, once you know the density, and this gives you a practical way of iterating those one-particle Schrödinger equations, as I tried to explain a while ago. There are many different recipes for computing the stupidity energy as a functional of the density. Density functional theory owes its name to the fact that it is based on a functional. What is a functional? It is a function whose argument is itself a function: a recipe to compute a number out of a function. rho(r) is a function of the point in space, a function of three arguments x, y, z, and the exchange-correlation energy is a functional of the density in the sense that, given a functional form of the density, there are recipes to compute the stupidity energy. Very many different recipes are available. The only recipe that was viable for the first 20 years of existence of density functional theory was the local density approximation, which chemists regarded with disdain; they were against density functional theory until better approximations than the local density approximation were devised, to the extent that the inventor of density functional theory, Walter Kohn, who was no chemist himself, got the Nobel Prize in chemistry in 1998, at the end of the past century. That was possible because at some point the development of very sophisticated functionals, much more accurate than the local density approximation, convinced the chemistry community that DFT is indeed a viable method to
compute molecular properties. I will not go into any of those approximations. Basically, DFT amounts to solving the one-body Schrödinger equation, and in order to do that what you have to do is expand your molecular orbitals in a basis set; the coefficients of this expansion then become the fundamental unknowns of your problem. By computing the extremum of the DFT energy with respect to those molecular orbitals, imposing the orthonormality condition (the fact that the molecular orbitals have to be orthonormal to each other), you end up with a constrained optimization problem. You could solve it directly with general optimization algorithms; what most people do instead is to solve a self-consistent set of Schrödinger equations which, because of the expansion in the basis set, has been cast into an algebraic eigenvalue problem, where the Hamiltonian to be diagonalized is a matrix of finite dimension that depends non-linearly upon its own solutions. The solution is an array of complex numbers c; the density depends on that array, and because the potential depends on the density, the Hamiltonian depends on the density and hence on the eigenfunctions, so that the Hamiltonian to be diagonalized depends on its own solutions. We call this kind of Schrödinger equation a self-consistent Schrödinger equation; mathematicians would call the same thing a non-linear Schrödinger equation. It is non-linear because the Hamiltonian depends on c: if you multiply c by a number, the left-hand side of the equation is not simply multiplied by that number but becomes something very different. So when you look into the mathematics literature and they say non-linear Schrödinger equation, what they mean is what we
would mean by a self-consistent equation. Now, you want your basis set to be complete: the Hilbert space of one-particle orbitals has infinite dimension, and you want a finite reduction of that infinite-dimensional space to be as accurate as possible. You want the matrix elements of the Hamiltonian to be easy to compute, or, equivalently, the product of the Hamiltonian times an orbital to be easily calculated on the fly; the Hartree and exchange-correlation potentials have to be easy to represent and calculate; and if you manage to have an orthonormal basis set, that simplifies your life. Plane waves are such a basis set, with which you can easily deal with periodic systems. Actually, plane waves were devised in the first place to deal with periodic crystals, but you can deal with non-periodic systems, even a molecule, by mimicking them with a periodic array of non-periodic units. If you have a molecule (here is a sketch of a water molecule, in color), you can pretend that the properties of the molecule are the same as the properties of a molecular crystal. If the lattice spacing of the molecular crystal is large enough, and this is indeed the case when the distance between neighboring molecules is much larger than the dimension of the molecule, then periodicity does not harm your results, and periodicity helps the computation. This so-called supercell method, the periodic repetition of non-periodic units, is a very important tool in our business. Plane waves have many desirable properties, and I will not comment upon all of them. They have the good property that completeness is very easy to check, because it depends on just one parameter, the maximum kinetic energy that is representable with those plane waves: you have just one parameter upon which the whole numerical accuracy of your problem depends. Then matrix elements are easy to
compute, densities are easy to compute, they are orthonormal, and so on: lots of very good properties. The one great drawback is that they have uniform spatial resolution. The spatial resolution of a plane wave is determined by its wavelength, and the resolution is the same all over space. This is not good, because the atoms of which molecules and materials are made have a nasty property: the electrons that form the chemical bond are prevented from collapsing onto the nucleus by the core electrons, which are chemically inert but have a spatial structure that is much, much finer than that of the valence electrons. In fact, a 1s core state has a spatial dimension that scales as the inverse of the atomic number Z: the Bohr radius of the 1s state of uranium, say, is about one hundredth of the Bohr radius of hydrogen, and the maximum kinetic energy of a 1s core state of uranium scales as Z squared, so it is about ten thousand times the kinetic energy of the electron in a hydrogen atom. You would therefore need a huge number of plane waves just for the sake of describing electrons you don't care about, because they don't form any chemical bonds; they are there just to prevent the valence electrons of heavy elements from collapsing into the nucleus. The solution to this is the pseudopotential method, on which both Quantum ESPRESSO and Yambo are based. In a nutshell, a pseudopotential is a potential that does not have any core states, only valence states, but the valence states of the pseudo-atom are as close as possible to the real valence states; in particular, they have to coincide with the real valence states in the region where the valence electrons form the chemical bond, because I want the chemical bond of
pseudo-atoms to be the same as the chemical bond of real atoms. But because pseudo-atoms do not have core states, the valence states, which in reality have nodes (valence states have to be orthogonal to the core states), become nodeless pseudo-valence states: they have nothing to be orthogonal to. So the recipe to construct the pseudopotential is conceptually very simple. You start from an atomic orbital (here is the 3s atomic orbital of silicon) and then you invent any wave function, any orbital, that coincides with the real wave function in the valence region (you see here that the dotted line and the continuous line are indistinguishable in the valence region), while in the core region you just make it nodeless. Then you impose that the norm of the pseudo-wave-function is one, the same as the norm of the real wave function, and you simply invert the Schrödinger equation. That orbital satisfies the radial Schrödinger equation, which says that minus the second derivative of the orbital (the kinetic energy) plus the potential times the orbital equals the eigenvalue times the orbital, so that minus chi double prime over chi equals epsilon minus v, and hence v equals epsilon plus chi double prime over chi. Very simple: once you have fabricated a wave function with the desired properties (as close as possible to the real wave function, with the right norm, and nodeless), you have the potential; you have just inverted the Schrödinger equation, and the ratio between the second derivative of the wave function and the wave function gives you the potential modulo a constant. That is the way pseudopotentials are constructed, and you do the same for different orbitals: you did so for the 3s, you do it for the 3p, for the 3d, and you have different
pseudopotentials for different angular momenta. So you have all the ingredients: you have a functional that some highbrow theorist has produced for you, you have pseudopotentials that some nerdy scientist has fabricated for you, and you have codes that some other nerdy people have worked on for decades. You have the chemical composition of your system; you just feed your computer code with the geometry, the chemical composition, the positions of the atoms, and the functional of your choice (and you have to be good at choosing the right functional for the right accuracy and the right class of materials, which is a very complex and non-trivial endeavor), and you solve the Schrödinger equation. The Schrödinger equation gives you eigenvalues and orbitals with which you can compute ground-state properties right away; you don't need any many-body perturbation theory to do that. And you can compute phonons and the response functions that I was planning to talk about, but I won't have the time to do so; I will stop in one or two minutes. The takeaway message is that with this framework you can rigorously compute all the properties that can be expressed in terms of local operators, that is, operators that are diagonal in the coordinate representation, so that the expectation value of a local one-body operator can be expressed as an integral of the density, which is given by DFT, times the local representation of the operator: the expectation value of an operator O is the integral of rho(r) times O(r) dr. So all the expectation values of local operators can be rigorously computed within DFT. It is also common practice, not rigorously founded, but common practice that everybody follows, to compute expectation values of nonlocal or semi-local operators, such as momentum distributions; for instance the momentum
distribution of the electrons. These are not strictly expectation values of local operators; you cannot express them in that form, since you would need derivatives, and differential operators cannot be exactly represented in density functional theory. But everybody does it, closes their eyes, and hopes for the best. You can also compute all static response functions, such as dielectric constants, piezoelectric constants, elastic constants: all of those can be computed, including the response functions that are responsible for lattice vibrations. It may seem unexpected that in order to compute vibrational frequencies you need response functions, but actually it is natural, because what you need in order to compute vibrational frequencies is the second derivative, with respect to the positions of the nuclei, of the ground-state energy of the system that is provided by DFT. And this is nothing but a derivative of the force: the second derivative is the first derivative of the first derivative, right? Minus d over dR of the energy is the force acting on atom R, and d over dR prime of that is the derivative of the force acting on atom R when you move atom R prime off equilibrium. And what is this? This is a response function: how much the force on one atom changes when you perturb a little bit the position of another. So if you have a general framework to compute static response functions, and density functional perturbation theory provides you with such a framework, you can compute interatomic force constants, and from interatomic force constants you can compute phonon dispersions and electron-phonon couplings and superconducting properties and thermal properties and lots of properties I won't have the time to talk about today. So I prepared this last picture assuming that I would have had the time to explain in some detail how to compute those vibrational
frequencies that are the manifestation of lattice fluctuations. I didn't really do that, but those of you who are interested in knowing more are most welcome to come to me for a private or group chat. This week you will learn how to go beyond this and compute excitation energies that are due not to lattice vibrations, which can be dealt with in the adiabatic approximation, but to electron charge fluctuations that break the adiabatic approximation. All of what I would have talked about concerning lattice vibrations is implemented in the Quantum ESPRESSO suite of computer codes, which you will likely use as a prerequisite for your many-body perturbation theory calculations this week as well. All of this is supported by the Quantum ESPRESSO Foundation, a not-for-profit company that fosters the development of Quantum ESPRESSO, and by MaX, the European Centre of Excellence for supercomputing applications; MaX is an acronym for Materials at the Exascale. That's it, thank you very much. Those of you who are