So they go for lunch and then when they are back, they start. OK, I started the streaming, so whenever you want. Yes, I think we can gradually start. I don't know, Yuri, what do you think? Yes, we can start. So welcome back, everybody. I hope you had a good weekend. Wait a minute, I have to stop the streaming in my head. So today we're going to have a lecture, and then later in the afternoon we're also going to have a keynote lecture; we're going to have three keynote lectures this week. This morning we start with Dr. Iurii Timrov, a long-term friend of the Quantum ESPRESSO package, talking about time-dependent density functional perturbation theory. We'll try to collect questions from the streaming, so everybody who is following the streaming, please feel free to ask questions in the chat. Here on Zoom, as usual, we wait until the end of the lecture, and then with the reactions participants can interact with Yuri at the end of the talk. So Yuri, the stage is yours. Good luck.

Thank you, Ivan, for the introduction. Hello to everybody. Okay, just give me a second. Okay, so today we're going to discuss time-dependent density functional perturbation theory. I hope you enjoyed the first week of the Quantum ESPRESSO school, and I hope you will also enjoy the second week and that it will be as successful as the first one. Okay, to get started, let me remind you why we are here: we are here to learn the Quantum ESPRESSO package. I would like to share with you this data, which is the number of citations of the two main Quantum ESPRESSO papers shown below, published in 2009 and 2017. You can see that the number of citations of Quantum ESPRESSO is really increasing every year, and in particular in 2020 we have essentially ten papers citing Quantum ESPRESSO every day. So, for example, today ten papers will cite Quantum ESPRESSO. This gives us an idea of how widely used this electronic-structure software really is. As you learned last week, Quantum ESPRESSO is a suite of open-source codes for electronic-structure calculations and materials modelling at the nanoscale. It is based on density functional theory, plane waves and pseudopotentials, and, as I already mentioned, it is the most widely used open-source software for electronic-structure calculations in materials.

The figure on the left is a scheme that describes the structure of the Quantum ESPRESSO package. There are different layers. The highest layer is the so-called property calculators. There are different codes: for example, the PW code, which you learned about on the first day of last week, solves the Kohn-Sham equations and gives you the ground state of your system. There is also another code called CP for Car-Parrinello molecular dynamics; you will learn about it this week on Wednesday. Last week we learned about the nudged elastic band with Tonya, Pietro and others. There is another code, PWCOND, for transport. There is a bunch of post-processing codes collected in the repository called PP, and many others. More advanced codes in Quantum ESPRESSO are PHonon, which we learned about on Friday, to compute phonon dispersions, and TDDFPT, which we will learn about today, for spectroscopies like optical absorption, electron energy loss and magnetic excitations. There is also the GWW code, which implements, as you can guess, the GW method, and the EPW code to compute electron-phonon interactions.
Also last week, we briefly discussed the HP code for computing Hubbard parameters, and many, many more. The lower layers are more technical, and we are not going to discuss them during this school; this is how the package is organized in terms of libraries and modules.

Okay, so as I said, today we will focus on TDDFPT. What is it, and why do we want to discuss it? Maybe you have heard the term computational spectroscopy. The idea is to go from the ground state to excited states. Last week we learned how to solve the Kohn-Sham equations to obtain the ground state of the system, how to compute band structures, shown here for some material, projected densities of states and many other fundamental properties of materials. But now we want to go further and also study materials in their excited states, because although we can do many things with just ground-state DFT, we can do many more if we go further and describe time-dependent phenomena. So this is the band structure: on the y-axis we have the energy in electron volts, and on the x-axis the high-symmetry directions in the Brillouin zone, where L, Gamma, X, U, K and Gamma are the names of high-symmetry points. These spaghetti are the electronic bands. What we can see here is the band gap, the energy separating the top of the valence bands from the bottom of the conduction bands. So, let's say we have our material in the ground state and then all of a sudden we irradiate it, for example, with light: we send photons onto our material. What happens is that valence electrons absorb photons and make transitions from the valence bands to the conduction bands, shown with these red dashed arrows. These transitions occur when the energy separation between the valence and conduction bands equals the energy of the photon, i.e. when there is a resonance. So there are different transitions. To describe this, of course, we could use just standard DFT: we know the energies of the valence bands, we know the energies of the conduction bands, we just take the energy differences and that's it. Well, this would be the simplest approximation, and as we will see in the rest of my talk, it is not satisfactory when you want to make a comparison with experimental spectra. So we need to do more advanced simulations, and to do that we need to go to the time domain. So far everything was static, there was no time, everything was fixed in time, but now we need to introduce a time variable and extend DFT to the time domain. This will be the topic of today.

This is the outline of the morning lecture. First, we will discuss the basics of time-dependent density functional theory, in particular the two Runge-Gross theorems. Then we will discuss linear-response TDDFT, also called TDDFPT, where the letter P stands for perturbation; I will explain later why we have this acronym. We will discuss four methods, namely the Dyson, Sternheimer, Liouville-Lanczos and Casida-Davidson methods. And in the second part of my talk, we will discuss applications of TDDFT to describe optical absorption spectroscopy, electron energy loss spectroscopy, and inelastic neutron scattering, which is for magnetic systems. All right, so let's start with the time-dependent Schrödinger equation; I hope I explained why we need to go to the time domain.
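For reference, the time-dependent Schrödinger equation and the time-dependent charge density discussed next can be written as follows (a transcription in the lecture's notation; the prefactors follow the usual conventions):

```latex
% Time-dependent Schroedinger equation for N interacting electrons
i\hbar \frac{\partial}{\partial t}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N;t)
  = \hat{H}(\mathbf{r}_1,\dots,\mathbf{r}_N;t)\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N;t),
\qquad
\hat{H}(t) = -\sum_{i=1}^{N}\frac{\hbar^2}{2m}\nabla_i^2
           + \frac{1}{2}\sum_{i\neq j}\frac{e^2}{|\mathbf{r}_i-\mathbf{r}_j|}
           + \sum_{i=1}^{N} v_{\mathrm{ext}}(\mathbf{r}_i,t)

% Time-dependent charge density: |Psi|^2 integrated over all coordinates but one
n(\mathbf{r},t) = N \int \left| \Psi(\mathbf{r},\mathbf{r}_2,\dots,\mathbf{r}_N;t) \right|^2
                  d\mathbf{r}_2 \cdots d\mathbf{r}_N
```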
So, as written on the slide, like in the static case we use the Born-Oppenheimer approximation, which tells us that the electrons move in the field of clamped nuclei: the mass of the nuclei is much larger than the mass of the electrons, so we can assume that the nuclei are fixed and the electrons move around them. The evolution of the non-relativistic interacting many-electron system is governed by the so-called time-dependent Schrödinger equation written on this slide. Last week we discussed the time-independent Schrödinger equation; now we go to its time-dependent extension. In the first equation we see the wave function, capital Psi, which is the many-body electronic wave function. It depends not only on the positions of all the electrons, labelled by r_i, but also on time. There is also the operator H with a hat, which is the quantum-mechanical Hamiltonian; it also depends on the electronic positions and on time. The expression for this Hamiltonian is written on the second line: we have the kinetic term as usual, the second term is the electron-electron interaction term, and the third term is the external potential. Notice that the external potential depends on time, because when we irradiate the material with light, for example, this is not a static external perturbation, it is time-dependent. That is why we have time in the external potential. And the solution of the time-dependent Schrödinger equation for the many-electron system is even more complex than the solution of the static, time-independent Schrödinger equation.

As in the static theory, we want to go from the wave function, which is a very complicated object, to the charge density, which is much more convenient. By analogy to the static case, we go from the wave function, which depends on 3N+1 variables (3N spatial coordinates and one time coordinate), to the charge density, which is a function of only four variables: three spatial coordinates, labelled by the vector r, which is x, y and z, and one time coordinate. The time-dependent charge density is given by this expression, where we have capital N, the number of particles (electrons), times the integral of the square modulus of our many-body wave function integrated over all spatial coordinates except one, r. So this is the density at a specific point in space and a specific moment in time. This is really the basic object for us.

In 1984 there was a breakthrough paper in Physical Review Letters by Runge and Gross entitled "Density-Functional Theory for Time-Dependent Systems". This is when TDDFT was introduced. Again, just to make an analogy with DFT: as you remember from last week, in DFT we discussed the one-to-one mapping between the static charge density and the static external potential, using the minimization principle of the total energy. Now we are talking about TDDFT, we want to do something similar, and we will see how. We want to make a straightforward extension of the same idea to the time-dependent domain. But this is actually not that easy; there are issues, because the total energy is no longer a conserved quantity: in time, the total energy changes. And this brings us to the first Runge-Gross theorem; its exact statement is written here, let me read it.
So the first theorem tells us that for any system of interacting particles in an external time-dependent potential v_ext(r,t), which can be expanded in a Taylor series with respect to time, and given an initial state Psi_0, there is a one-to-one correspondence between the time-dependent external potential v_ext(r,t) and the time-dependent density n(r,t), apart from a trivial, purely time-dependent function. So this theorem tells us that, like in the static case where there is a one-to-one correspondence between the potential and the density, both static, also in the time-dependent case we have the same one-to-one mapping between the external potential and the density, but now they depend on time. This is really important. Therefore, all observables can be regarded as functionals of the time-dependent charge density, while in the static case all observables were regarded as functionals of the static charge density. But in contrast to DFT, in TDDFT we of course need to set an initial condition, because the system follows an evolution in time and we need to know the starting point; in the theorem this corresponds to the state Psi_0 at the initial moment t_0.

As I already mentioned briefly, in TDDFT the variational principle cannot be formulated in terms of the energy, because the total energy changes in time. Instead, there exists a quantity analogous to the energy: the so-called quantum-mechanical action functional. And this brings us to theorem two. Theorem two concerns the quantum-mechanical action functional, which we call A and which depends on n; notice the square brackets, which denote a functional, not a function (for a function we would use round brackets). The action functional A is defined as the integral in time, from some starting moment t_0 to some time t_1, of the expectation value of the operator i h-bar times the partial derivative with respect to time minus the time-dependent Hamiltonian, evaluated on the time-dependent wave function. The theorem states that A becomes stationary at the exact time-dependent density n_0(r,t), which corresponds to the external potential v_ext(r,t), given the initial state Psi_0 at time t_0. Stationary means that the functional derivative of the action functional with respect to the time-dependent density, evaluated at n_0, equals zero.

A few comments about this theorem. Theorem two means that it is possible to solve the time-dependent problem by searching for the stationary point of the action A. In contrast to the energy in the static case, the stationary point is not necessarily a minimum; it can also be a maximum. What matters is that it is an extremum, i.e. that the functional derivative is zero, but it can be a minimum or a maximum. And the value of the action itself does not provide any additional relevant information, because the action evaluated at the density n_0 is zero; so it is not really something meaningful for us. Okay, now let's discuss a bit more this quantum-mechanical action functional. In TDDFT the action functional A can be decomposed into components, like in the static case: if you remember, last week we wrote the energy functional as a sum of several terms, and here it is a very similar thing. We have the kinetic term, the Hartree term, and the exchange-correlation term.
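In formulas, the action functional of the second Runge-Gross theorem, its stationarity condition, the decomposition just mentioned, and the Hartree-like term discussed next read (schematically; sign conventions for the individual terms vary between references):

```latex
% Quantum-mechanical action functional and its stationarity at the exact density
A[n] = \int_{t_0}^{t_1} dt\,
       \Big\langle \Psi(t)\,\Big|\, i\hbar\frac{\partial}{\partial t} - \hat{H}(t)\,\Big|\,\Psi(t)\Big\rangle ,
\qquad
\left. \frac{\delta A[n]}{\delta n(\mathbf{r},t)} \right|_{n_0(\mathbf{r},t)} = 0

% Decomposition into kinetic, Hartree, exchange-correlation and external parts
A[n] = A_{\mathrm{kin}}[n] + A_{\mathrm{H}}[n] + A_{\mathrm{xc}}[n] + A_{\mathrm{ext}}[n],
\qquad
A_{\mathrm{H}}[n] = \frac{e^2}{2}\int_{t_0}^{t_1} dt \int\!\!\int
   \frac{n(\mathbf{r},t)\,n(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}'
```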
And if you remember — I don't remember whether it was mentioned last week or not — in standard DFT the exchange-correlation term was called the "stupidity energy" by Feynman, because we put into that term everything we don't know; that's why it was called the stupidity energy. We have the same sort of term in this case. And the last term depends on the external potential; notice that the potential depends on time and the density depends on time, as was discussed before. In particular, the Hartree part of the action has this expression. You may notice that it looks very similar to the static case, because we have this double integral over spatial coordinates of n times n divided by |r - r'|, but now the densities depend on time, and moreover there is another integral over time, from t_0 to t_1.

In order to approximate the unknown action functional, Gross and Kohn introduced an auxiliary, fictitious system of non-interacting particles that satisfies the time-dependent Kohn-Sham equations. Again, in analogy with the static case of last week, where Kohn and Sham introduced an auxiliary system of non-interacting electrons moving in an effective potential, with the density of this auxiliary system equal to the density of the true interacting system, here in the time-dependent case it is exactly the same idea, just generalized: we introduce an auxiliary system, now time-dependent but still non-interacting, and the time-dependent density of this auxiliary system must be equal to the density of the true interacting system. This was described in the next, second fundamental paper. And this brings us to the time-dependent Kohn-Sham equation, written here. On the left-hand side we have the time derivative of our one-electron Kohn-Sham wave function, which depends on the spatial coordinate and on time, and this equals an operator acting on the same one-electron wave function. The operator is nothing but the kinetic term plus the effective Kohn-Sham potential, but now this effective potential depends on time; the non-interacting electrons move in this effective time-dependent Kohn-Sham potential. This time-dependent Kohn-Sham potential can again be decomposed into three parts: the Hartree term, the exchange-correlation term, and the external potential. Each of them is written below: the Hartree term looks like in the static case, but now it depends on time; the exchange-correlation term is defined as the functional derivative of the exchange-correlation part of the action with respect to the time-dependent density; plus the external potential that we apply to the system. And the density is defined in this way: the sum over i, where i labels our particles, the electrons, of which there are N, of the square moduli of the one-electron time-dependent Kohn-Sham wave functions. So actually there is nothing difficult here: if you learned the static case well, the standard time-independent DFT, it is quite easy to understand each term in the time-dependent extension.

Okay, and now we need to somehow deal with the problem of the time-dependent exchange-correlation potential. In the static case we had to use some approximations, like LDA, the local density approximation, or GGA, the generalized gradient approximation. We need to do something similar here in the time-dependent extension.
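Written explicitly, the time-dependent Kohn-Sham equations just described are (in the notation of the slides):

```latex
% Time-dependent Kohn-Sham equation for the non-interacting auxiliary system
i\hbar\frac{\partial}{\partial t}\,\varphi_i(\mathbf{r},t)
  = \left[ -\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{KS}}(\mathbf{r},t) \right] \varphi_i(\mathbf{r},t)

% Effective Kohn-Sham potential and time-dependent density
v_{\mathrm{KS}}(\mathbf{r},t) = v_{\mathrm{H}}(\mathbf{r},t) + v_{\mathrm{xc}}(\mathbf{r},t) + v_{\mathrm{ext}}(\mathbf{r},t),
\qquad
v_{\mathrm{H}}(\mathbf{r},t) = e^2 \int \frac{n(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}',
\qquad
v_{\mathrm{xc}}(\mathbf{r},t) = \frac{\delta A_{\mathrm{xc}}[n]}{\delta n(\mathbf{r},t)},
\qquad
n(\mathbf{r},t) = \sum_{i=1}^{N} \left| \varphi_i(\mathbf{r},t) \right|^2
```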
And the easiest would be to use the so-called adiabatic local density approximation, ALDA, which is obtained by evaluating the standard LDA potential at the time-dependent density n(r,t). It is written mathematically here; again, I recall that square brackets denote a functional. So v_xc is a functional of the density and at the same time a function of position and time — it is quite a complicated object: we put into it the density, which is itself a function, and depending on the value of that function this expression takes some specific value. Now we want to approximate this complex object, and we say that the exchange-correlation potential is actually just a function of the density at r and t. This is a drastic approximation, because strictly speaking this potential should depend on the density at all previous times, not only at this specific moment: it needs to know the history, how the density behaved in the past. But we make a really quite strong approximation and say that this potential depends on the density only at this very moment. Of course there are limitations of this approximation: for example, we cannot describe double excitations, charge-transfer excitations, and other things.

Okay, so those were the basics: we discussed the time-dependent Schrödinger equation and the two Runge-Gross theorems. Now we need to somehow solve this time-dependent Kohn-Sham equation, and there are different methods; in the following we will discuss four of them. In TDDFT we can define two regimes. One is the so-called linear-response regime, and the second is the non-linear-response regime. In the linear-response regime we apply an external perturbation which is weak. This allows us to use perturbation theory, expand all quantities in Taylor series and cut the expansion, for example, at first order, which gives us the linear response. These equations can be solved either in the time domain or in the frequency domain, but the majority of cases are done in the frequency domain, as I will discuss in a moment. Second, there is the non-linear regime, when the external perturbation is strong; this is useful, for example, to describe ultrafast physics, and the TDDFT equations are then solved in the time domain. We will not discuss the non-linear regime today; we will focus on the linear-response case.

Okay, so this is called in the literature linear-response TDDFT, or there is the acronym TDDFPT; I will discuss in a moment why this second acronym. Let us assume that the time-dependent external potential is weak and can be written in this form: our external potential is the sum of a static potential, which does not depend on time and is fixed, plus a second term, which we label v', which depends on time. This is our starting point. Given this, the density can be expanded in a Taylor series: we write our density as the zeroth-order term, which is time-independent, plus the first-order term n', which depends on time, plus n'', et cetera. We could continue the expansion up to infinity, but of course we don't want to do that; we just keep the simplest case, the ground-state term plus the linear, first-order term, and from the second order on we neglect all higher-order terms. So this is an approximation.
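In formulas, the ALDA just described and the linear-response setup used in the rest of the lecture read:

```latex
% Adiabatic LDA: no memory, the xc potential depends on the density at the same instant only
v_{\mathrm{xc}}^{\mathrm{ALDA}}(\mathbf{r},t) = \left. v_{\mathrm{xc}}^{\mathrm{LDA}}(n) \right|_{\,n \,=\, n(\mathbf{r},t)}

% Weak time-dependent perturbation on top of the static ground state,
% and the corresponding expansion of the density, truncated at first order
v_{\mathrm{ext}}(\mathbf{r},t) = v_{\mathrm{ext}}^{0}(\mathbf{r}) + v_{\mathrm{ext}}'(\mathbf{r},t),
\qquad
n(\mathbf{r},t) = n^{0}(\mathbf{r}) + n'(\mathbf{r},t) + n''(\mathbf{r},t) + \dots
               \;\approx\; n^{0}(\mathbf{r}) + n'(\mathbf{r},t)
```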
This being said, we can write this object n', which depends on time, as a function of the external perturbing potential v'_ext. The connection between these two objects, the external perturbing potential and the induced time-dependent density, is via the function chi, which is the so-called susceptibility of the material. This function chi depends on the spatial coordinates r and r', so it is non-local, and it depends on the times t and t', actually only through their difference; and we have an integral over r' and over time. Time-dependent density functional perturbation theory is TDDFT in conjunction with perturbation theory: we add the letter P because we use perturbation theory, as you can see — we expand in a Taylor series and cut at first order. Since it is first order, it is called linear response; it is linear, there are no non-linear effects. Also, last Friday we discussed DFPT, the static density functional perturbation theory, and now we add the prefix TD for the time-dependent extension. In the following you will see that, to solve these equations, we use all those advanced techniques that you learned last week when we discussed phonons; that's why we use the acronym TDDFPT, but commonly in the literature people also use the name linear-response TDDFT.

Okay, as I said before, we have at least four ways to solve these linearized time-dependent equations, and let's briefly discuss each of them so that you have an overview of what is used in the community. By the way, TDDFT is used in the physics community, but also in the quantum chemistry community and in materials science — it is a very popular method — and each community has its preferences for this or that method. Okay, let's start with the Dyson method. We define the charge-density susceptibility chi as delta n at (r,t) with respect to delta v_ext at another position r' and another moment in time t', and this functional derivative is evaluated at the value of the external potential equal to v0_ext(r'). This is the definition of the charge-density susceptibility. Now let us use the chain rule for functional derivatives; I hope you are familiar with the chain rule when evaluating derivatives. Essentially, we say that this object delta n over delta v_ext can be written as delta n over delta v_KS times delta v_KS over delta v_ext. So we have introduced a derivative with respect to the Kohn-Sham potential, but the overall object is exactly the same as written on the first line, and of course we have an integral over space and time. We will use the chain rule again: now let's take this object highlighted in yellow, and we can do some further mathematical operations with this expression. Delta v_KS over delta v_ext can be written this way because, as we know, the Kohn-Sham potential is the sum of the Hartree, exchange-correlation, and external potentials. So basically this object is nothing but three derivatives: the derivative of the Hartree potential with respect to the external one, of the exchange-correlation potential with respect to the external one, and of the external potential with respect to itself, which simply gives us a delta function. Next, the derivative of the Hartree part can be further rewritten using the chain rule: we write delta v_H with respect to the density times delta n with respect to the external potential.
What we obtain now is very interesting — well, not yet, sorry, just wait a moment. One more comment: for delta v_H with respect to the density, we know the expression of the Hartree potential in terms of the density, so we can evaluate this derivative, and it is simply e squared times the Coulomb kernel 1/|r - r'| and a delta function in time. Okay, we can do a similar thing with the exchange-correlation part: we again use the chain rule, which gives us the derivative of the exchange-correlation potential with respect to the density times the derivative of the density with respect to the external potential. And we call this object, delta v_xc with respect to the density, f_xc, which is the so-called exchange-correlation kernel. And the interesting part that I wanted to mention a minute ago is this one, highlighted in brown: notice that the object we want to evaluate, the susceptibility chi, depends on itself, because this is also chi. So we want to find something unknown, and this unknown depends on itself. How do we solve this? This is what we have: the so-called Dyson-like equation, because the susceptibility chi equals chi_0 plus some complicated object which depends on chi, the quantity we want to compute. So this is an equation where the unknown depends on itself, and in this integral expression we have the Coulomb part plus the exchange-correlation part.

The object which appears here, chi_0, is the so-called independent-particle polarizability, which depends on r, r' and the frequency. By the way, here we did a Fourier transform, as written on the slides: we went from the time domain to the frequency domain, because it is more convenient to solve this equation in the frequency domain. This object chi_0 — it can be shown mathematically how to obtain it — essentially has this expression: we have a summation over indices i and j, which label valence and conduction bands; it depends on the occupation numbers f_i and f_j; it depends on the one-electron Kohn-Sham wave functions, the static ones; divided by h-bar omega, which is the energy of the incoming photon, minus the energy difference between the conduction and valence bands, plus i eta, where eta is an infinitesimal number just to regularize the problem. So this function has peaks when the denominator is zero, i.e. when the energy difference between the valence and conduction bands equals exactly the energy of the incoming photon: we have a zero in the denominator, and something divided by zero gives infinity. Of course, numerically we cannot deal with infinities and divisions by zero, so to regularize the problem we add this infinitesimal smearing, typically a Lorentzian. This object chi_0 can describe only independent-particle excitations — this is what I discussed at the very beginning of today's lecture: when we just solve the DFT equations, we obtain valence and conduction bands and we simply say there are transitions from valence to conduction. But we need to go further and solve this more complicated Dyson-like equation, which also takes into account electronic interactions via the Coulomb interaction and the exchange-correlation quantum-mechanical effects. Also note that it is convenient to go from r-space, the real space, where chi depends on r and r', to G-space, the reciprocal space: we go from r, r' in real space to G and G' in reciprocal space.
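Written out, the equations just described take the following form (schematically, with the usual conventions for prefactors and broadening):

```latex
% Dyson-like equation for the susceptibility in the frequency domain
\chi(\mathbf{r},\mathbf{r}',\omega) = \chi_0(\mathbf{r},\mathbf{r}',\omega)
  + \int\!\!\int \chi_0(\mathbf{r},\mathbf{r}_1,\omega)
    \left[ \frac{e^2}{|\mathbf{r}_1-\mathbf{r}_2|} + f_{\mathrm{xc}}(\mathbf{r}_1,\mathbf{r}_2,\omega) \right]
    \chi(\mathbf{r}_2,\mathbf{r}',\omega)\, d\mathbf{r}_1\, d\mathbf{r}_2

% Independent-particle polarizability, built from ground-state Kohn-Sham
% orbitals, energies and occupations f_i
\chi_0(\mathbf{r},\mathbf{r}',\omega) = \sum_{i,j} (f_i - f_j)\,
  \frac{\varphi_i^{0*}(\mathbf{r})\,\varphi_j^{0}(\mathbf{r})\,
        \varphi_j^{0*}(\mathbf{r}')\,\varphi_i^{0}(\mathbf{r}')}
       {\hbar\omega - (\varepsilon_j - \varepsilon_i) + i\eta}

% Fourier representation used to go to reciprocal space (Omega is the volume)
\chi(\mathbf{r},\mathbf{r}',\omega) = \frac{1}{\Omega}\sum_{\mathbf{q}}\sum_{\mathbf{G},\mathbf{G}'}
  e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\,
  \chi_{\mathbf{G}\mathbf{G}'}(\mathbf{q},\omega)\,
  e^{-i(\mathbf{q}+\mathbf{G}')\cdot\mathbf{r}'}
```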
The connection between the two objects is via this Fourier transform, where Omega is the volume, we have a summation over q points in the Brillouin zone, G and G' are reciprocal lattice vectors, and we have these phase factors. Using this Fourier transformation we can rewrite the Dyson equation in reciprocal space, and this is how it looks. Notice that this is no longer an integral equation but a matrix equation: a matrix with respect to G and G', which are vectors. This equation is solved in practice at fixed q and fixed frequency omega; for each fixed q and omega we need to solve a matrix problem of size G times G'. The expression for the Coulomb potential v_G(q) is written here, and the exchange-correlation part f_xc,GG' is the Fourier transform of the exchange-correlation kernel in real space. Note that in the literature you will find that TDDFT equations are sometimes solved in the RPA approximation, where RPA stands for random phase approximation, which means that we set the exchange-correlation kernel to zero: we remove exchange-correlation and keep only the Coulomb electronic interaction; this is called RPA. Okay, I highlighted the unknown function, and chi_0 can be computed quite simply on top of the DFT ground-state calculation: from DFT you know the ground-state wave functions, phi_0, and the Kohn-Sham energies, epsilon, so you know everything needed to compute chi_0 once you have solved the DFT equations. And then solving the Dyson-like equation means solving the TDDFT equation in the frequency domain.

A few comments. First, there are summations over numerous empty states in the calculation of chi_0: in the expression of chi_0 we have a sum over k points, but also a summation over bands n and n', where n labels valence (occupied) bands and n' labels empty (unoccupied) bands. In principle we have an infinite number of empty states; of course, in practice we cannot have infinite summations, we need to truncate at some point, and in practice we need to make convergence tests with respect to the number of empty states. This is computationally expensive, and it is a drawback of this method. The second drawback is that there are multiplications and inversions of large matrices: as I said, we have matrices of size G times G', and in practice the size of this matrix can easily be several hundreds and probably even beyond a thousand, so numerically this is quite expensive. And finally, the matrices chi_0 and chi in G,G' must be computed for every value of the frequency: as I said, q and omega are fixed, so you need to solve this equation for every value of q and omega. In optical absorption q is zero, so you have only the frequency, but you still need to solve this equation multiple times, for each value of the frequency, and this again is not practical, it is computationally expensive. This is a very traditional and conventional method in the community, but due to these drawbacks we don't really use it; it is not available in Quantum ESPRESSO, because there are more advanced methods that I will discuss in the following. But if you are interested in this method, it is implemented, for example, in the YAMBO code, which can be used on top of Quantum ESPRESSO, and in many other codes, like the DP code and others. Okay.
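To make the bookkeeping concrete, here is a minimal sketch in plain NumPy — not the actual implementation of YAMBO or the DP code — of how the reciprocal-space Dyson equation would be solved for one fixed q and omega. The arrays chi0, v_g and f_xc, as well as the function name solve_dyson, are hypothetical and assumed to have been prepared beforehand.

```python
import numpy as np

def solve_dyson(chi0, v_g, f_xc):
    """Solve chi = chi0 + chi0 (v + f_xc) chi for one fixed (q, omega).

    chi0 : (nG, nG) complex array, independent-particle polarizability chi0_{G,G'}(q, omega)
    v_g  : (nG,) array, Coulomb potential v(q+G), diagonal in G, G'
    f_xc : (nG, nG) array, exchange-correlation kernel (pass zeros for the RPA)
    """
    n_g = chi0.shape[0]
    kernel = np.diag(v_g) + f_xc          # K_{G,G'} = v(q+G) delta_{G,G'} + f_xc,GG'
    lhs = np.eye(n_g) - chi0 @ kernel     # (1 - chi0 K)
    return np.linalg.solve(lhs, chi0)     # chi = (1 - chi0 K)^{-1} chi0

# Hypothetical usage for a single (q, omega) pair, with placeholder data:
# nG = 4
# chi0 = 1e-2 * (np.random.rand(nG, nG) + 1j * np.random.rand(nG, nG))
# chi_rpa = solve_dyson(chi0, v_g=np.ones(nG), f_xc=np.zeros((nG, nG)))
```

Note that this linear solve has to be repeated for every q and every frequency on the grid, which is exactly the computational drawback discussed above.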
Now let us discuss the second method, which is called the Sternheimer method. The Sternheimer method is the following. We write the time-dependent Kohn-Sham equation, i h-bar d phi over dt equals the Kohn-Sham Hamiltonian acting on phi. We saw this before; I just use a compact notation instead of the individual terms and introduce the Kohn-Sham Hamiltonian. This Kohn-Sham Hamiltonian has the kinetic term, the external potential, and the Hartree and exchange-correlation potentials; the potential terms depend on time. Okay, so we have this equation; how can we solve it? As before, we are in the linear-response regime, so we write our external potential as the sum of the ground-state potential plus the linear-order perturbation, v'_ext, which depends on time. This being said, we can also write the time-dependent density as the sum of the ground-state density n_0 plus the response term n', which depends on time. For the same reason, we can also write the exchange-correlation potential as the sum of the ground-state potential plus the first-order response term, which depends on time. Again we use a compact notation: we write that the Kohn-Sham Hamiltonian is the sum of H_0, which is the ground-state Hamiltonian, plus v'(r,t), which is the time-dependent response potential. This is the same as written on the previous slide, just with the notations H_0 and v' introduced.

Next, we say that the time-dependent Kohn-Sham wave function can be written the following way: our time-dependent one-electron Kohn-Sham orbital is defined as the sum of the ground-state Kohn-Sham orbital plus the time-dependent response orbital, phi'(t), all multiplied by a phase factor containing the imaginary unit, epsilon_v, which is the Kohn-Sham energy of the valence (occupied) band, times t and divided by h-bar. This is how we write our total wave function. With this at hand, by doing very simple mathematical operations, we plug this definition of the wave function into the time-dependent Kohn-Sham equation on the previous slide — so we put phi in here and in here — and we also insert the expression for the Hamiltonian. It can be shown quite easily that we obtain these equations. There are two of them: the first one is the resonant equation, and the second one is the anti-resonant one; you can see that in the second equation we have the complex-conjugate star here and here, and since we have the imaginary unit, i also changes to minus i.

All right, so now we can solve these equations. How? We do a Fourier transform from the time domain to the frequency domain and obtain these two equations. This looks quite complicated; you need to spend some time thinking about each term and the meaning of these equations. But to make a few comments with respect to what you saw last week: you might recognize that these equations look quite similar to the equations shown by Professor Baroni for static DFPT, density functional perturbation theory for phonons. If we set the frequency to zero, so that h-bar omega becomes zero here and here, and the potentials v'_Hartree and exchange-correlation no longer depend on frequency here and here, then we can collect these terms in the middle, P_c v' phi_0, move them to the right and write them as one term — and this will be exactly the same equation that you saw in DFPT for computing phonons.
Just in the case of phonons the perturbation was a displacement of the atoms, while here the atoms are fixed, and our external perturbation is, for example, photons, i.e. light: we apply an electric field to our system. Or it can be some other perturbation: we irradiate our sample with a beam of electrons, or with neutrons. We just change the perturbation, and since in our case the perturbation is time-dependent, or frequency-dependent, the equations also have to be generalized to the frequency domain, which is why we have omega everywhere. But apart from that, these are exactly the same equations that you saw last week.

Let's discuss a few terms. The primed Hartree and exchange-correlation potentials are the so-called linear-response potentials, which have this expression: we have the Coulomb potential and the exchange-correlation kernel multiplied by the response density, where n' is the response density. Next, it can be shown that in the linear-response regime the response density is defined this way: we have a factor of two due to spin degeneracy, because we consider a non-spin-polarized material, and it is given by the product of phi', the response orbital, times the ground-state orbital phi_0, plus phi_0 times phi' complex-conjugated and evaluated at minus omega. So notice that the response density n' depends on phi' and on phi' complex-conjugated, but these objects, highlighted in yellow, are actually the unknowns of the equations that we want to solve. So again we obtain a closed loop: we want to solve these equations to find phi' and its complex conjugate, but it turns out that in this expression we have potentials that depend on the density, which depends on the unknowns we want to determine. It's a closed loop, and in practice we need to solve this problem self-consistently, by iterations. Another comment: in these equations we see the operators P_c with a hat. These are the projectors onto the empty states; they are defined as a summation over the index c, which labels conduction (empty) bands, of the projectors onto the conduction-band manifold, |phi_c><phi_c|. And this object can also be written as one minus the projector onto the valence (occupied) bands. Again, you should remember that last Friday you saw exactly the same projector with Professor Baroni; we have the same methodology applied to the time-dependent case.

A few comments about this. Here we do not need empty states, thanks to the projector onto the empty states, P_c. This is a very important statement and a take-home message: the powerful part of this method is that we never need empty states to solve this problem. In the case of the Dyson method, which we discussed before, we do need empty states, as I showed, but here we greatly simplify the problem — no empty states — so this method is computationally more convenient. The second comment, the drawback of this method, is that the Sternheimer equation still has to be solved for every value of the frequency omega: you see that we have omega, we put in a certain value of omega, we solve it, then we put in another value of omega, we solve it a second time, another value, a third time, et cetera. We need to solve it multiple times, and this is still the computational bottleneck of this method. And the name of this method, as you see, is the Sternheimer method, because this equation is the time-dependent Sternheimer equation.
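Schematically, the frequency-domain Sternheimer equations, the response density and the projector just discussed have the following structure (sign conventions vary between references; v'_Hxc collects the response Hartree and exchange-correlation potentials):

```latex
% Resonant and anti-resonant time-dependent Sternheimer equations (schematic)
\left( \hat{H}^{0} - \varepsilon_v - \hbar\omega \right) \hat{P}_c\, \varphi_v'(\mathbf{r},\omega)
  = -\,\hat{P}_c \left[ v_{\mathrm{Hxc}}'(\mathbf{r},\omega) + v_{\mathrm{ext}}'(\mathbf{r},\omega) \right] \varphi_v^{0}(\mathbf{r})

\left( \hat{H}^{0} - \varepsilon_v + \hbar\omega \right) \hat{P}_c\, \varphi_v'^{\,*}(\mathbf{r},-\omega)
  = -\,\hat{P}_c \left[ v_{\mathrm{Hxc}}'(\mathbf{r},\omega) + v_{\mathrm{ext}}'(\mathbf{r},\omega) \right] \varphi_v^{0}(\mathbf{r})

% Response density (factor 2 from spin degeneracy) and projector onto empty states
n'(\mathbf{r},\omega) = 2 \sum_{v} \left[ \varphi_v^{0*}(\mathbf{r})\,\varphi_v'(\mathbf{r},\omega)
                        + \varphi_v^{0}(\mathbf{r})\,\varphi_v'^{\,*}(\mathbf{r},-\omega) \right],
\qquad
\hat{P}_c = \sum_c |\varphi_c^{0}\rangle\langle\varphi_c^{0}|
          = 1 - \sum_v |\varphi_v^{0}\rangle\langle\varphi_v^{0}|
```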
This method is implemented in Quantum ESPRESSO and will be the last exercise of the hands-on today.

Now let's discuss the third method, the so-called Liouville-Lanczos method. This is really the central method of today's lecture, and it is one of the unique features of Quantum ESPRESSO: it is not common, and it was developed in 2006 by Ralph Gebauer, Stefano Baroni and co-workers. The method was first introduced in this paper, and the idea is the following. We start with the so-called quantum Liouville equation, which describes the evolution of the charge-density matrix operator. I would like you to take a moment to appreciate that this is a charge-density matrix operator: notice the word matrix and the word operator. It is not simply the charge density, which is a function; first, it is not a function but a matrix, and second, it is an operator. It is not a function of r and t — it is an operator; this is why we write rho with a hat, to remember that it is an operator, and it depends on time. For this object, rho-hat of t, we write the so-called quantum Liouville equation, where on the left we have the time derivative of this matrix operator rho, and on the right we have a commutator — the square bracket enclosing two objects is the notation for the commutator — between the Kohn-Sham Hamiltonian, which is also an operator depending on time, and again rho-hat of t, the matrix operator.

Now we want to solve this equation. Notice that we can write the charge-density matrix operator rho-hat of t in the coordinate representation: we now write rho of r, r' and t. Notice that we no longer have the hat, because the hat is for the operator; now we have the operator in the coordinate representation, so instead of writing a hat we write r, r', and we still have time. This object, which is the charge-density matrix, has this definition: a factor of two due to spin, a sum over valence (occupied) bands, and the product between phi and phi complex-conjugated, taken at different points in space — one at r, the other at r' — but at the same moment in time t. So notice that this object depends only on the occupied (valence) bands; we do not need empty bands here. Next, again using linear-response theory, i.e. a first-order Taylor expansion, we can rewrite the quantum Liouville equation to first order this way. I don't show the derivation in the presentation, but it is quite easy to do. It can be shown that the quantum Liouville equation, to first order, takes this form, where on the left we have the time derivative of the response charge-density matrix operator, and on the right we have three terms, all of them commutators. The first one is the commutator between H_0, the Hamiltonian of the unperturbed system in the ground state, and the response density operator rho'. The second term depends on the response potential — the response Hartree and exchange-correlation potential — and the ground-state charge-density matrix operator. And the last term is the commutator between the external perturbing potential and the ground-state charge-density matrix operator. In analogy with the second equation on this slide, the linear-response counterpart rho'(r, r', t) takes this form: it depends on the response orbitals, phi' and phi' complex-conjugated, and on the ground-state orbitals, phi_0 and phi_0 star. So now we want to solve this linearized equation. Again, this is just to recap.
We write the linearized quantum Liouville equation in compact form, in the time domain for the moment. We introduce this operator L with a hat, which is actually a superoperator. Why is it called a superoperator? Because this operator acts itself on an operator, since rho-hat' is the response charge-density matrix operator; L is a superoperator because it acts on an operator. And it is the sum of the two terms that you saw on the previous slide. As was done for the Dyson equation and the Sternheimer equation, we use the same trick: we go from the time domain to the frequency domain using a Fourier transformation, and now our linearized quantum Liouville equation takes this form — instead of the time t we now have the frequency omega. I skip the demonstration of how to obtain this equation, but it should be quite straightforward. In this linearized equation in the frequency domain we have h-bar omega, the energy of the photon, minus the Liouvillian superoperator acting on the unknown, and on the right we have the commutator between the external perturbing potential and the ground-state charge-density matrix operator. Formally, we can write the solution of this equation simply by bringing the operator in square brackets from the left to the right with a power of minus one; this is just a formal solution. And finally, having this expression for rho', we can write the object of interest that we want to evaluate at the end of the day, which is the susceptibility chi of omega: by definition, the susceptibility is the trace of the operator A with the response density-matrix operator rho-hat'. Here A is a Hermitian one-electron quantum-mechanical operator and can be any operator; for example, we can take the dipole. If we replace A with the dipole operator, what we will have on the left, our susceptibility, would actually be the polarizability; if instead of A we put some other quantum-mechanical operator, chi would be another observable. We will discuss at the end of the lecture other cases, for electron energy loss and for magnetic systems.

Okay. Now we have these beautiful equations and we want to solve them in practice somehow, and this brings us to the Lanczos method. I don't know whether you have heard about the Lanczos method or not, but it is a very powerful method to solve this problem. We define the standard batch representation: we introduce new variables called q_v and p_v. q is defined as half the sum of the old variables — because phi'(omega) and phi' complex-conjugated at minus omega are the unknowns; instead of these two unknowns, we introduce two other unknowns, q and p — and p is defined as half the difference of the old unknowns. Then we define batches: we take the set of all q_v and label it with a bold Q, because it is a set, or batch, and P is the set of the p variables. With this definition of the Q and P batches, which is called the standard batch representation, we can rewrite the linearized quantum Liouville equation in the frequency domain in matrix form using these standard batches. On the left we see a matrix involving the frequency and the superoperators D and K, and on the right we have this rather complicated object: the projector onto the empty states, the external perturbing potential, and the ground-state orbitals.
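Collecting the pieces, the frequency-domain equations of the Liouville-Lanczos approach described above can be summarized as:

```latex
% Linearized quantum Liouville equation in the frequency domain and its formal solution
(\hbar\omega - \hat{\mathcal{L}})\,\hat{\rho}'(\omega) = \left[ \hat{v}_{\mathrm{ext}}'(\omega),\, \hat{\rho}^{0} \right],
\qquad
\hat{\rho}'(\omega) = (\hbar\omega - \hat{\mathcal{L}})^{-1} \left[ \hat{v}_{\mathrm{ext}}'(\omega),\, \hat{\rho}^{0} \right]

% Susceptibility as a trace with a Hermitian one-electron operator A (e.g. the dipole)
\chi_{A}(\omega) = \mathrm{Tr}\left[ \hat{A}\, \hat{\rho}'(\omega) \right]
```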
So what are these operators D and K? The action of the D superoperator on Q is nothing but the ground-state Hamiltonian minus the eigenvalues of the occupied bands. The action of the K superoperator is a more complicated object, because it contains all the interactions inside: the Coulomb and exchange-correlation electronic interactions. It is quite involved, and it takes time to get familiar with these methodologies, but this is the basic structure. Finally, to solve this matrix equation in practice, as I said, we use the Lanczos recursion algorithm. We introduce two-component Lanczos vectors: capital V, built from Q and P, and capital U, built from Q-tilde and P-tilde. We write down the Lanczos recursion chain, where the vectors V and U at iteration i+1 are obtained from the values of these vectors at iterations i and i-1. Here beta and gamma are the so-called Lanczos coefficients, and L and L-transposed is the Liouvillian superoperator. L is simply the expression over here on the left without the frequencies: if we remove h-bar omega, we have zeros in the diagonal blocks and, off-diagonal, minus D and minus D minus 2K; this is called L, the Liouvillian matrix. In practice we solve this Lanczos problem recursively, by iterations, and what we actually do when we solve it is generate a so-called tridiagonal matrix. It is a very sparse matrix, because many of its elements are zero; only these elements next to the diagonal are non-zero, the beta coefficients and the gamma coefficients — that is why it is called tridiagonal — and moreover on the diagonal itself we have zeros, which is even better.

Once we have this matrix of size n by n, where n is the number of iterations — let's say 1,000 iterations, so 1,000 by 1,000 — we can finally compute the susceptibility chi in a post-processing step, which is not expensive at all. We have h-bar omega times the identity matrix, minus our matrix T, all of this to the power minus one, multiplied by e_1 of n, where e_1 is just an n-dimensional unit vector — say of dimension 1,000 — whose elements are all zero except the first one, which is one. And there is zeta of n, which has this more complicated definition; these objects zeta of n are computed on the fly: while you run the Lanczos iterations, you simultaneously compute this object, inexpensively.

Notice that we do not need empty states in this method, as I mentioned before, because we have the projector onto the empty states — again, this is very powerful: no empty states. Second, this tridiagonal matrix T_n must be computed only once; it does not depend on the frequency. This is the second very powerful advantage of the Lanczos method: even though you see the frequency here in this equation on the top left, in practice it is just a parameter. We do not solve this equation multiple times for each value of the frequency; the frequency is actually never used while we run the Lanczos recursion, and we obtain the matrix T, which does not depend on frequency. The frequency enters only at the very end, in the post-processing step — you see h-bar omega there. So once we have T, which does not depend on frequency, we compute chi for the different values of the frequency, but the Lanczos recursion itself does not depend on the frequency.
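To illustrate why this post-processing is so cheap, here is a minimal sketch in plain NumPy — a hypothetical helper, not the actual turbo_spectrum.x implementation — of the evaluation described above: the tridiagonal matrix T_n is built once from the Lanczos coefficients and then reused for every frequency. The arrays beta, gamma and zeta are assumed to come from a previously run Lanczos chain.

```python
import numpy as np

def susceptibility_from_lanczos(beta, gamma, zeta, omegas, eta=0.02):
    """Cheap post-processing of a Lanczos chain (schematic).

    beta, gamma : Lanczos coefficients from the recursion
    zeta        : auxiliary quantities computed on the fly during the recursion
    omegas      : photon energies at which chi is wanted (same units as beta/gamma)
    eta         : small Lorentzian broadening replacing the infinitesimal
    """
    n = len(beta)
    T = np.zeros((n, n))
    for i in range(n - 1):              # zero diagonal; beta/gamma sit off-diagonal
        T[i + 1, i] = beta[i]
        T[i, i + 1] = gamma[i]
    e1 = np.zeros(n)
    e1[0] = 1.0
    chi = np.empty(len(omegas), dtype=complex)
    for k, w in enumerate(omegas):
        # (hbar*omega * 1 - T)^{-1} e_1, then contracted with zeta
        resolvent_e1 = np.linalg.solve((w + 1j * eta) * np.eye(n) - T, e1)
        chi[k] = zeta @ resolvent_e1    # up to prefactors and the exact contraction used in the code
    return chi
```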
And yes, what I said in the third point is exactly that this last post-processing step is absolutely inexpensive. The Lanczos chain is really the main computational bottleneck of this method — let's say it can take 30 minutes of computation — while the post-processing step can take just a few seconds; it is really inexpensive.

Okay, and let me mention very briefly the last method, the Casida-Davidson method. This last method is used mainly to study molecular systems, so it is popular in quantum chemistry, while Dyson and Sternheimer are used mainly in solid-state physics, and Lanczos is effective both for molecular and for extended systems. So, briefly, about Casida-Davidson: I write down the equations that you already saw before, the resonant and anti-resonant linear-response equations. We say the following: the right-hand side, which is the perturbation, because it contains the external perturbing potential, is set to zero, and we can write the remainder of this equation in matrix form. These are called the Casida equations; they come from the quantum chemistry community, maybe you are familiar with them. We have this matrix equation and we want to diagonalize the matrix on the left, i.e. obtain its eigenvalues and eigenvectors, and we diagonalize it with the Davidson method — that's why this method is called the Casida-Davidson method. The lowest-energy eigenvalues give the first excitation peaks. This is an especially practical method for molecules, because in the absorption spectrum of molecular systems you can easily compute the first eigenvalues; as you go higher and higher in energy, the density of eigenvalues becomes higher and higher and the method becomes computationally more expensive. If you apply this method to extended, solid-state systems, in my view it is not really efficient, in particular for optical absorption, because in solids the density of eigenvalues is very high; so there this method is not really practical.

All right, so this was the overview of the four methods, and now let's discuss applications of these methods, in particular optical absorption, electron energy loss, and inelastic neutron scattering. Let's start with optical absorption. So far the external perturbing potential was not explicitly defined; it was considered as some generic external perturbation. Now we are talking about optical absorption, and in this case the external potential is defined this way: we have minus e, where e is the electronic charge, times capital E in bold, the electric-field vector, which depends on the frequency, in a scalar product with r, the position coordinate. In the cartoon you see a molecule being irradiated with light: we have photons with energy h-bar omega. With this definition of the external potential, the object that we want to compute is the dipole, bold d, the dipole vector. By definition, the dipole is the trace of the position operator r times the response charge-density matrix operator, rho-hat', the one we discussed before. And in the Liouville-Lanczos method, when we discussed the Liouville equation, you remember that the expression for rho' is the resolvent — I didn't call it that before, but it is the resolvent — h-bar omega minus L, to the power minus one, times the commutator of the external potential with the ground-state charge-density matrix operator. So now, in this definition, we plug in the expression for the external potential v'.
So we put this in here, and then we can define the dynamical polarizability tensor alpha_ij: the connection between the dipole d and the electric field E is via this dynamical polarizability tensor. The expression for alpha is obtained by using the first two equations. You see that alpha_ij, where i and j are just Cartesian components, x, y and z, so it is a three-by-three matrix, is defined this way: minus e, the position operator along direction i, the resolvent, and the commutator with the position operator along direction j. Okay, so we have this definition, and we can evaluate this object, for example, using either the Liouville-Lanczos method, by calling turbo_lanczos.x, or the Casida-Davidson method, by calling turbo_davidson.x. During the hands-on, the first two exercises will be exactly this: computing the dynamical polarizability alpha using these two codes; Oscar will guide you through the hands-on.

Just to give you an example, with the Liouville-Lanczos method we can compute the absorption spectrum of the caffeine molecule shown here. On the y-axis we have the polarizability that we compute, and on the x-axis the energy, if you want, the photon energy. In this figure we see the convergence of the absorption spectrum: if we do 500 Liouville-Lanczos steps, we see very big spikes — this is of course not a converged spectrum. You can increase the number of iterations to 1,000, so we go from red to green, and we see that the spikes vanish little by little; if we increase to 2,000, they get even smaller, and at 10,000 steps we obtain an almost smooth spectrum. This black, converged spectrum can be compared to the experimental spectrum of the caffeine molecule. On the right I show another example, using the Casida-Davidson method. This method was developed about five years ago and applied to these molecules — pelargonidin, cyanidin, peonidin, et cetera — and we show here the polarizability as a function of the wavelength in nanometers for each of these molecules. Using this absorption spectrum it is also possible to compute the corresponding color, shown here on the right.

Another interesting application that can be done with Quantum ESPRESSO is to compute absorption spectra of molecules in solution; molecules of course exist in solution, in water for example. To simulate this, what we can do with Quantum ESPRESSO is take our molecule — a cyanin molecule in this case, about 60 atoms — and put water molecules around it. This system is quite large, because in total we can have more than 100 molecules, maybe 200, I don't remember exactly; the system overall is quite large, and if you want to perform a straightforward simulation, it will be very expensive. So we can simplify the problem: namely, we can replace the water molecules around our molecule with a homogeneous medium, to which we assign the dielectric constant of the real, atomistically resolved medium. So we take out all the water molecules and put in a homogeneous dielectric. But now the question is: how do you interface your real molecule with this solvent, the so-called implicit solvent? This is quite tricky, and in this work we used the method called the self-consistent continuum solvation (SCCS) model, developed by Oliviero Andreussi, Ismaila Dabo and Nicola Marzari about 10 years ago. And there are actually two parameters defining the interface.
The transition is actually smooth: we need to define two parameters that control the smooth transition from the molecule to the medium. Inside this first isosurface — you can see it in some hue of green — the dielectric constant is one, so the molecule is essentially in vacuum, and then the dielectric constant goes smoothly from the vacuum value to that of the implicit solvent. Using this approach, we can compute the absorption spectrum of this molecule not only in vacuum but also in the presence of the solvent surrounding it. There are screening effects on the electronic transitions of the molecule: when you excite the electrons of the molecule, of course, you have some effect from the environment. Here we see two calculations, one done at the PBE level, which is GGA, and one using a hybrid functional in the TDDFT calculation. The comparison was done with the Gaussian code, which is of course very popular in the quantum chemistry community, computed in vacuum and also in water. So we can really see the effect of the environment on the absorption: we see changes in the intensity of the peaks — going from red, which is vacuum, to green, which is with water — and also in the shape and even the position of the peaks. And when you switch from the semi-local PBE functional to a hybrid functional, you also see significant changes: you see a blue shift of the peaks. So you see that it is also important to use a good approximation to the exchange-correlation functional. In the quantum chemistry community people don't use PBE, because it doesn't give satisfactory results; people use hybrid functionals like B3LYP, and this can be done with Quantum ESPRESSO, as you can see. If you are interested in more on this implicit solvent model, you can go to this website, quantumenviron.org; there are many, many more interesting things you can do in this field of electrochemistry.

Speaking of solid-state systems and optical absorption in these systems: this is the result for bulk silicon. You see the intensity of the absorption as a function of the frequency of the photon; the red dots are the experimental absorption. If we use TDDFT at the RPA level of approximation — if you remember, I mentioned RPA, the random phase approximation, during my talk, where we completely neglect the exchange-correlation effects and set f_xc to zero — we see that the spectrum we obtain is absolutely not satisfactory; it is in bad agreement with experiment. Then we can try ALDA, the adiabatic local density approximation that I also discussed; we get this orange spectrum, still a very unsatisfactory absorption. If we try a more advanced method, going beyond TDDFT, we can try GW, which we didn't discuss during the school; GW still doesn't give satisfactory results. Only when you use many-body techniques like the Bethe-Salpeter equation do you obtain rather good agreement with the experimental spectrum. If you are interested in using GW and the Bethe-Salpeter equation, you can use, for example, the YAMBO code or BerkeleyGW. But this shows you that TDDFT, using local or semi-local functionals, does not give satisfactory results for the optical absorption of solids. You can still use TDDFT with more advanced exchange-correlation kernels, which are non-local or maybe frequency-dependent; that gives a much better optical absorption for solids, but the computational cost of TDDFT then increases very rapidly — you are going essentially towards a many-body technique like Bethe-Salpeter.
And again, if you want to experiment with these advanced exchange-correlation kernels, you can try, for example, the Yambo code. There are also studies where people use hybrid functionals with TDDFT, but basic hybrids like HSE06 or PBE0 do not give satisfactory results. The dielectric-dependent hybrid functionals that I mentioned on Thursday do a much better job; there are recent works by different groups, in particular by Pasquarello and Kresse, using dielectric-dependent hybrids with TDDFT, and this seems to give a much better optical absorption for solids. Okay, it's 10 o'clock, maybe 10 more minutes so I can just finish. Yeah, sure. Very quickly. Okay, electron energy loss spectroscopy, very quickly. Now we irradiate our sample not with photons but with electrons, with an electron beam. The external perturbation is now a plane wave, because a free electron can be described as a plane wave that propagates in space and time. The susceptibility is now the charge-density susceptibility, which describes the density-density response: in the expression we saw before, instead of position operators we have density operators. By computing this susceptibility we can then evaluate the inverse dielectric function, epsilon minus one, at a specific wave vector q, because when an electron is scattered in the sample it transfers energy and momentum to the electrons of the sample; the transferred momentum is denoted by the q vector. Knowing the inverse dielectric function, we take its imaginary part, and this corresponds to the double differential cross section, which is what is measured experimentally, so what we compute has a direct link with experiment. The double differential cross section measures the angle into which electrons are scattered and their energy within some interval. This can be computed with the turbo_eels code, and we will discuss an example for silicon during the hands-on: if you plot this object, the loss function, as a function of energy, you obtain the loss spectrum of silicon, and the intense peak is the plasmon. The plasmon is the collective excitation of the electrons; it is a quasi-particle. We will discuss it more during the hands-on. There is also an example for bulk diamond, which I can probably skip. We can also include relativistic effects like spin-orbit coupling when computing the electron energy loss spectrum, for example in bulk bismuth, with good agreement with experiment, but I skip this. Last example, for magnetic systems: we can take a magnetic sample, for example bulk iron or bulk nickel, and irradiate it with neutrons, which are inelastically scattered. In this case the perturbing potential has this expression, where sigma are the Pauli matrices, B is the magnetic field, and this is the Bohr magneton. The magnetic susceptibility is defined this way, where M is the magnetization operator. We compute the structure factor S from the susceptibility chi and again compare with the double differential cross section. The code to compute this spectroscopy will be available in the next official release of Quantum ESPRESSO. Just as an example of application of this code, we computed the magnon dispersion in bulk iron and bulk nickel. Magnons are also collective excitations, but now of the spins: magnons are collective excitations of spin, and they are also quasi-particles.
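(For reference, the quantities referred to on the slides can be written schematically as follows; this is a hedged sketch in standard notation, with prefactors and sign conventions omitted or approximate. For EELS, the density-density susceptibility and the measured quantity are

\[
\chi_{nn}(\mathbf q,\omega)=\langle\langle\,\hat n_{\mathbf q}\,;\,\hat n_{-\mathbf q}\,\rangle\rangle_\omega ,
\qquad
\frac{d^2\sigma}{d\Omega\,d\omega}\ \propto\ -\,\mathrm{Im}\,\epsilon^{-1}(\mathbf q,\omega),
\]

while for inelastic neutron scattering from a magnetic sample the perturbation and the spin susceptibility are, roughly,

\[
\hat V_{\rm ext}(\mathbf r,t)\ \sim\ \mu_B\,\hat{\boldsymbol\sigma}\cdot\mathbf B(\mathbf r,t),
\qquad
\chi_{ij}(\mathbf q,\omega)=\langle\langle\,\hat m_i\,;\,\hat m_j\,\rangle\rangle_\omega ,
\qquad
\frac{d^2\sigma}{d\Omega\,d\omega}\ \propto\ S(\mathbf q,\omega)\ \propto\ -\,\mathrm{Im}\,\chi_{+-}(\mathbf q,\omega),
\]

with mu_B the Bohr magneton, m the magnetization operator and S the dynamical structure factor.)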
So we compute magnon spectra at different transferred momenta q, and we plot the position of the magnon peak as a function of the transferred momentum q. We obtain this magnon dispersion, and here we compare this work, the orange dots, with experiments, which are the rhombi. We see good agreement for iron, but for nickel we see that TDDFT, in this work and in previous works, overestimates the energy of the magnons. This is because of the approximation to the exchange-correlation functional. A very advanced application: we applied Quantum ESPRESSO to compute the magnon spectra of monolayer chromium triiodide. On the left we show the magnon dispersion of the CrI3 monolayer, the magnon energy along the high-symmetry directions; this is the intensity map. Very interestingly, a gap was observed experimentally at the Dirac point in the magnon dispersion, which you see here: the intensity vanishes. Using the other advanced tools of Quantum ESPRESSO, namely the DFPT that you learned with Professor Baroni last Friday, we computed the phonons of this material. And we found that where there is a gap in the magnon spectrum, in exactly the same energy interval there are phonon bands, which you also know how to compute. This seems to indicate that there is an interaction between magnons and phonons. By doing a detailed investigation, by developing a model Hamiltonian, we managed to show that there is indeed a coupling between magnons and phonons. If you superimpose the phonon dispersion in blue and the magnon dispersion, parametrized in this case, and apply the model Hamiltonian including the magnon-phonon coupling, you see that there are splittings in the magnon dispersion. And if you visualize it in three dimensions, you see this red surface, which is the magnon dispersion, cut by the phonons. So you can do really very advanced studies using all these tools available in Quantum ESPRESSO, such as the calculation of phonons and the different spectroscopies for computing magnons, plasmons and more. This is a brief summary; you can see it on the slides. I think in the interest of time we can skip it and answer a few questions. Yeah, hi, thanks Yuri, thanks for this very clear and great talk. There are two questions from the stream; I don't know if you can read them or if you want me to read them out here. Yeah, you can please read. Oh, maybe I can see. So: TDDFT cannot describe double excitations, or does this apply only to TDDFT within LDA? Yes, what I said was meant for TDDFT within ALDA. Double excitations are when your system absorbs two photons, one after the other. I'm not an expert in the field of double excitations, but I think you probably need to use more advanced functionals; again, I'm not an expert in that field. And then what else? There was just another one from the streaming, I think: what is optical electronegativity, delta chi star? I'm not sure I understand what you refer to by optical electronegativity; I never mentioned electronegativity during my talk, so maybe we can clarify that during the hands-on. Is it also possible to include the time dependence of phonons in Quantum ESPRESSO? This is a very good question. Currently, no, it is not possible; phonons are computed at the static level. The extension to the time domain is, I think, complicated and not currently done.
I think there are some efforts ongoing in the community, but the best would of course be to have phonons and magnons both described at the same, time-dependent level of theory. That would naturally allow us to describe the magnon-phonon interaction fully ab initio, without any model Hamiltonians. But there is still some work to do in that direction; it is quite complicated. And can we study extended systems with TDDFT in Quantum ESPRESSO? It depends on the spectroscopy. If you want to study electron energy loss spectroscopy, yes: TDDFT with local or semi-local functionals describes plasmons pretty well. But if you are interested in the optical absorption of solids, unfortunately TDDFT with local or semi-local functionals does not give satisfactory results: either you need advanced exchange-correlation kernels, or you need GW and Bethe-Salpeter with the Yambo code. So in that case I would rather go and use Yambo with GW and Bethe-Salpeter. Is the magnon-phonon code in the mainstream QE 6.7? It is not available in 6.7; it will be available in the next official release. Okay, there are two questions from here, so I will let Niasce speak, if you may. Yes. Thank you for this very nice talk. Usually many-body methods like the Bethe-Salpeter equation scale very badly with k-points and require many k-points to be converged. How does that apply to these methods? What is the k-point sampling required for these methods to be converged? Okay, which spectroscopy do you refer to? Magnons. Magnons, okay. Yes, there is also a summation over k-points, so it would be great to use symmetry to reduce their number. Unfortunately, at present the code does not include symmetries, so it is really expensive. Another limitation is that the code works only with norm-conserving pseudopotentials; ultrasoft and PAW are not supported. So it really has only basic features, it is computationally very expensive, and there are still lots of improvements to be made. If you are interested in playing with this code, it will still be very expensive. Okay, and sorry, if I can just ask one more question: is there any interpolation method for the dispersion, as we saw for phonons? Ah, okay. No, we don't have anything at the moment. It is actually an interesting point, but at present nothing is implemented. Okay, okay. But this is a good point, thank you. Thank you very much. Thank you. Okay, there is another question from Daniel Torres, if you can unmute. Hi, thank you for the presentation, and I have this question: chiral molecules and crystals have the property of rotating the polarization of light. Can this effect be calculated in Quantum ESPRESSO with one of the methods we saw today? Okay, as far as I understand, no, this cannot be done at present. But it is indeed a very interesting point; I think at present this cannot be studied. Okay, thank you. Thank you. We have one more, Yuri, from Ryan Dudi. You can unmute now. Yes, thank you for the talk. I was just wondering: you introduced the external perturbation in the length gauge. Is there a way to implement it in different gauges, such as the velocity gauge, and is there any advantage in doing so? Okay, I'm not very familiar with this, so concerning the advantage, unfortunately I don't know the answer. Regarding the point of whether we can implement it -- that was the question, can we implement it?
So, well, yes, it is not available now, but we can try to do it. If I have a better answer, I'll try to post it on Slack. Apologies. I don't know if Daniel Torres wants to ask another question or if his hand is still raised from earlier. Maybe you can speak. No, sorry. Okay, no other questions. So, please, I have one question from YouTube. Yes: is real-time TDDFT implemented within QE? In the official distribution of Quantum ESPRESSO it is not available. Historically there were different efforts at real-time TDDFT, real-time propagation: I think there was some work by Ralph Gebauer, and there is work by Davide Ceresoli. Those codes are probably somewhere around, so we can try to investigate, but in the official distribution we do not have real-time propagation. If you are interested in real-time TDDFT, you can try some other codes, for example by the group of Rubio; they use real-time TDDFT. Daniel, if you want to take the last one from the streaming, there is one last question. Yes, yes: we can also calculate EELS with epsilon.x. That's true, there is another code called epsilon.x, which is a very simple code, but it is based on the independent-particle approximation, so it does not take into account electronic interactions like the Coulomb and exchange-correlation terms. If you compute with epsilon.x you will obtain an EELS spectrum, but of lower accuracy: it is based just on DFT wave functions and eigenvalues, so it will be very approximate. With TDDFT and the turbo_eels code you have a higher level of theory to compute EELS spectra, so I would recommend using TDDFT. Okay, Yuri, there was one more -- I don't know if Nikita is still here; there was one person raising their hand. I think this is the last question, then we close for the break. It is from the audience here on Zoom: Nikita Shechevlanov, you can speak now. Okay, I'm just interested: if we apply a laser field, the laser field should act through the optical nonlinearities, and this means that you should complement the system with an equation for the description of the electric field. Is the electric field equation coupled to time-dependent density functional theory in the present implementation? Okay, so I think, no, currently not. If you irradiate your sample with an electric field -- if I'm not wrong, you are referring to an intense laser field? Yes, optical nonlinearities really matter for a laser. And you are referring to nonlinear effects, correct? Yes, it is. Yes, yes. So this cannot be done currently. At the beginning of my talk I discussed that we have the linear response regime and the nonlinear regime, and everything covered today was linear response. Indeed, if you irradiate your sample with an intense laser field you have nonlinear effects, and the way to go is real-time propagation TDDFT. Again, there are works by Angel Rubio and their code Octopus; if you are interested in modeling that phenomenon, I would use that code. In particular, you can interpret some ultrafast pump-probe experiments with it. But with Quantum ESPRESSO this cannot be done currently. Okay, thank you. Thank you. Okay, very good. Thanks a lot to you again. We can now break and be back with the lab, which I think is still yours, right? See you in a bit. See you, thanks. Thank you very much. Okay, I see Oscar is already here, so we can gradually start. Thanks, Oscar, for presenting this lab session. I will leave you the stage.
I don't know if you want to share the slides, if you want to do the test. Before we start the streaming. Okay, everything seems fine. If you want to put the presentation mode. No, because I have it on this machine. Okay, that's fine. Okay, good luck. Good morning to everybody. I'm Oscar from SISSA. Here you can see the tutors who will help us in this session -- thank you very much for your help. Okay, this is the hands-on of the linear response part, and I take care of part one: optical absorption in molecules with turbo_davidson and turbo_lanczos, which Yuri presented before. For this session we use the HPC, so you have to do a git pull and connect to the HPC so that everything is up to date. Okay, part one. As I already said, we will use the turbo_davidson and turbo_lanczos codes of Quantum ESPRESSO. In example one we will use turbo_davidson to compute the absorption spectrum of a molecule, and in example two we will use turbo_lanczos to compute the same thing. The turbo_davidson program is one of the codes that solve the linear response equations in Quantum ESPRESSO, which Yuri introduced this morning. First of all we will compute the spectrum within the independent-particle approximation, which is the simplest approximation we can use, and then we will switch on the interaction and compare the spectra that we obtain. Turbo_davidson uses the Casida-Davidson algorithm that Yuri presented this morning. This is the Casida equation shown this morning, and this term is the interaction term; first of all, we neglect this part when computing the spectrum. In the independent-particle approximation, the simplest approximation you can use, the energy of an excitation is the difference between the eigenvalues of an empty state and an occupied state, and the intensity of the excitation is the transition probability, presented here in the middle of the slide. First of all we have to start from the ground state, so we have to compute the ground state of our molecule; in this example we compute the spectrum of benzene. Here you can see the pw.x input, but I think that after last week you are now able to run pw.x, so let's move to the terminal. In example one you can see the pw.x input, which computes the ground state; the turbo_davidson input, to run turbo_davidson and solve the linear response problem; the turbo_spectrum input, which is the post-processing step that computes the spectrum; and also the plot script, to plot the spectrum with gnuplot. Now we can check the pw.x input. Here you can see the atomic positions for benzene; I think you know everything about this type of input by now. Here you can see the number of bands: turbo_davidson needs some virtual states, in contrast with turbo_lanczos, which normally does not need the virtual states. Now we run this pw.x calculation remotely on the HPC: remote, mpirun pw.x -inp pw... Okay. I think it is already done, because the pw.x output is here; I also open the output copied back from the HPC. Okay, it took only about three seconds.
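(Before reading the output, a quick reference: the independent-particle approximation used in this first run amounts, schematically, to

\[
\hbar\omega_{v\to c}\;=\;\varepsilon_c-\varepsilon_v,
\qquad
I_{v\to c}\;\propto\;\big|\langle\psi_c|\,\hat{\mathbf r}\,|\psi_v\rangle\big|^2 ,
\]

i.e. excitation energies are just Kohn-Sham eigenvalue differences between occupied (v) and empty (c) states, and intensities are the corresponding transition-dipole probabilities; the interacting calculation done later in this example corrects both.)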
It converged; here you can see the internal iterations and the total energy. Okay, the output has converged. Now we have the output of pw.x, so we have the ground state. In the output you can see the highest occupied and lowest unoccupied levels in electron volts: here you see the HOMO state and the LUMO state. In the independent-particle approximation, as I said before, the energy of an excitation is the difference of eigenvalues, so the energy gap is the difference between LUMO and HOMO, in this case 5.1 eV. Okay. Now let's run turbo_davidson with the independent-particle approximation. Check the input: this flag, if_dft_spectrum, is false by default; when you set it to true, it switches on the independent-particle approximation, so now we run with the independent-particle approximation. By the way, this is a somewhat obsolete variable for this case; in the next version you won't need to use it. Okay, now let's run turbo_davidson, so we do the same as before. With the independent-particle approximation the calculation is very fast, so I think it is already done -- if I'm not wrong. Sorry, I hadn't synced the turbo_davidson input, so I had already run the version with the interaction part. Maybe it is already finished... no, sorry, it takes about two minutes with the interaction. Maybe I can check remotely. It just finished. So, sorry, let me re-sync. Okay, maybe I got something wrong; I don't know how to change the input file here. Oscar, can you edit the input file directly on the cluster? Yes, I can edit it here. Okay, now I have updated it on the cluster, and you have to update the file as well. So let me take the output: this is the output of turbo_davidson, and you can see it is very fast with the independent-particle approximation. Here you can see the pairs of excitations, with the number of the occupied state and the number of the virtual state, and the energy difference of the eigenvalues; these are the states that turbo_davidson computes. Now let's go to the post-processing step: we have to use turbo_spectrum to build the spectrum of benzene from this file. Turbo_davidson also creates the eigenvalue file, which you can see on the HPC: turbo_davidson creates this output and the eigenvalues file, which contains the energies of the different states. Now let's look at the turbo_spectrum input file. Here you can see the prefix and outdir, which are the same as in the pw.x input for Quantum ESPRESSO. Here you can see the method you have to specify to compute the spectrum: it can be Davidson, Lanczos and so on. Here, start and end are the energy range of the spectrum that you want to plot, and the increment is the spacing between the points of the spectrum.
Epsilon is the Lorentzian broadening: turbo_davidson computes a stick spectrum, and you broaden it with a Lorentzian; epsilon is the width of this Lorentzian. And this keyword tells turbo_spectrum to read the eigenvalues from the Davidson run. Okay, let's run turbo_spectrum; you could also run it on your laptop, because it is very fast, but here we keep everything on the HPC and run it on the cluster. Okay. Now turbo_spectrum creates an output file -- it just reads the data and prints errors, if there are any -- and also this data file that contains the data you can use to plot the spectrum. This is the spectrum: you can see the energy and the intensity, and also the three Cartesian components of the spectrum. Now we copy this file from the cluster, and we can use the plot script to plot the spectrum with gnuplot. Gnuplot creates this file, and here you can see the spectrum. Let's check it: you can see the energy of the first excitation at 5.1 eV, the same difference that we saw in the pw.x output. If you want to look at the gnuplot script, you can see the x-range, the y-range, the labels of the x and y axes and the title -- this is just gnuplot stuff. So this was the independent-particle approximation. Now let's switch on the interaction, so we have to modify the turbo_davidson input and introduce all of this. First of all, set if_dft_spectrum to false, so the interaction is switched on. num_eign is the number of eigenvalues that we compute; as Yuri said this morning, the Casida-Davidson formulation is good for extracting a few roots, so if you increase this number the calculation becomes more and more expensive. This is the number of initial vectors; the default is twice the number of eigenvalues that you compute. This is the maximum size of the sub-basis, which is used to control the memory that turbo_davidson uses for the calculation. This is the convergence threshold. These are the minimum and maximum frequencies at which the spectrum is plotted, similar to turbo_spectrum -- even the names are the same: start, finish, step. This is the broadening, and reference is the energy where you expect the peaks, so that you extract the 15 eigenvalues closest to this value. Okay, now let's modify the turbo_davidson input; from the readme file you can copy all of this. It is not necessary to rerun pw.x: if you keep the temporary directory produced by pw.x, you can reuse it. So now I have updated the turbo_davidson input and I sync it to the HPC. Okay, remote and run. Now I have to wait two minutes; if there are some questions, maybe we can use this time. Oscar, there is a question on YouTube. Oh, okay. Do you want me to read it? Yeah, yeah. So the question is: any particular criteria or limits for the number of eigenvalues to be calculated in step one, apart from the computational cost?
Both with the independent-particle approximation and with the interaction, it depends on the system how many eigenvalues you need to compute with turbo_davidson. The larger your system is, the more eigenvalues you may need to extract: in benzene the first excitation is also the most intense one, but in other cases you may have to extract 30 weak excitations and maybe the 31st is the most intense one. So how many eigenvalues you have to extract depends on the system. Oscar, there is another question in the Zoom chat: I think the reference tag was missing in the modified input file. Does it matter? Will it find the peaks anyway in this case? In this case it doesn't matter: we extract the lowest excitations, so it is not necessary. Yes, and if you don't specify a reference, it is zero by default. Okay, it finished, so I take the output from the HPC. You can see the difference: it took two minutes now, versus two seconds with the independent-particle approximation, so you understand the difference. Here you can see all 15 excitations. I want to show you the convergence: you can see the Davidson iterations; the diagonalization finished in 29 steps, the current basis is 42, and the total basis used in this run is 361, for 15 excitations. Now we have to modify the turbo_spectrum input, because if you check, before, turbo_davidson created the eigenvalue file with a '-dft' suffix in the name, since we had set if_dft_spectrum to true; now it creates the one without that suffix. So we have to modify turbo_spectrum and tell it to read this file, not the old one. Okay, we copy that file from the HPC, and now I have the file to plot with gnuplot, but we also have to modify the gnuplot script: we have to change the last line of the script, and it is better to reduce the upper value of the y-range, because the intensity is lower, so that you can see both peaks in one plot. Okay, here you can see the red peak, which is the independent-particle approximation, and the blue one, which is computed with the interaction. You can see the blue shift and the decrease of the intensity of the peak when the interaction is included. Now let me move on a bit faster. Okay, we move to example two, where we use the turbo_lanczos program that Yuri presented this morning. Turbo_lanczos does not need the empty states to compute the spectrum, and it computes the whole spectrum in a single calculation. This is the equation that was presented this morning, so I skip these slides and go directly to the example. First of all, as before, we have to run pw.x. Now there is no nbnd flag in the input, because we don't need the virtual states.
So we compute only the occupied states. Okay, let's go to example two. After running pw.x, the second step is to run the turbo_lanczos program. This is its input: the lr_input part is the same as for the other code, with a few additional things. This restart_step value is important: it is set to 100, so every 100 iterations the code writes restart files. If the program crashes after 250 iterations, you still have 200 iterations saved, so you haven't lost everything. And this is the restart flag: when you want to restart, you set it to true, and the run restarts from the last iteration that was saved. Then there is ipol in the control namelist: in this case we compute the x direction, which is 1, while 2 is the y direction and 3 is the z direction. So, when pw.x has finished, we run the turbo_lanczos code. This step also takes a while, so let me show you the output: for every Lanczos iteration the code prints an alpha coefficient, which is the diagonal one and is essentially zero, and the beta and gamma coefficients, which are the off-diagonal elements of the tridiagonal matrix. So at every iteration turbo_lanczos prints alpha, beta and gamma, and these coefficients are what is then used to compute the spectrum. Now turbo_spectrum reads these coefficients and computes the spectrum. Maybe the job is done -- we computed 500 iterations for benzene. Now, the turbo_spectrum input: the prefix and outdir are the same as before. itermax0 is the number of iterations used to compute the spectrum: we computed 500 iterations, so here we put 500, but you can also put fewer -- never more than the number of iterations you actually computed. itermax is another number of iterations; in this case it is the same as itermax0 because we do not use the extrapolation, but when we introduce the extrapolation this value can be increased. epsil is the same as for turbo_davidson: the broadening of the Lorentzian. start and end define the energy range of the spectrum that you compute, the increment is the spacing between points, and ipol is the direction: we computed the x direction, so here you have to put 1; if you put 2 or 3, the code will stop with an error, because those directions were not computed. Okay, run turbo_spectrum. Also in this case turbo_spectrum creates an output file, which reports errors if there are any, and the data file. We copy this file from the HPC and plot it with gnuplot. We can look at the plot script: you can see the x-axis label, benzene, and the file that it reads. Okay, this is the spectrum that we computed with 500 iterations. With 500 iterations, you can see that we get the whole spectrum from one calculation. This is the advantage of turbo_lanczos: you compute the whole spectrum in one computation. We extracted only 500 iterations, but you can also compute more iterations.
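(For orientation, here is a minimal sketch of the two input files just described for the benzene example. It is written from memory as an illustration: the exact keyword spellings, prefixes, paths and numerical values should be checked against the files distributed with the hands-on and the TDDFPT documentation.

  ! turbo_lanczos.x input (sketch)
  &lr_input
    prefix       = 'benzene'     ! must match the pw.x run
    outdir       = './tmp/'      ! hypothetical path
    restart      = .false.       ! set .true. to continue from the last saved iteration
    restart_step = 100           ! write restart files every 100 iterations
  /
  &lr_control
    itermax = 500                ! number of Lanczos iterations
    ipol    = 1                  ! 1 = x, 2 = y, 3 = z polarization
  /

  ! turbo_spectrum.x input (sketch)
  &lr_input
    prefix        = 'benzene'
    outdir        = './tmp/'
    itermax0      = 500          ! Lanczos coefficients actually computed
    itermax       = 500          ! equal to itermax0 here; larger when extrapolation is used
    extrapolation = 'no'         ! 'osc' switches on the bi-constant extrapolation discussed below
    epsil         = 0.01         ! Lorentzian broadening (illustrative value)
    start = 0.0, end = 10.0, increment = 0.01
    ipol  = 1                    ! must match the direction computed by turbo_lanczos
  /

The extrapolation keyword anticipates the convergence trick described next.)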
So you can go from 500 to 1,000 to 1,500 iterations. Maybe you can try this on your own, because there is not enough time for me to do it now, but an important thing is to use restart = true: you have already computed 500 iterations, so if you set restart to true and then put 1,000 or 1,500 here, the calculation continues from iteration 501 up to 1,000 or 1,500, instead of starting from zero. Here you can see the difference between 500, 1,000 and 1,500 iterations. As Yuri showed this morning, this spectrum is not really converged: you can see the spikes in the peaks. So you can either compute more and more iterations until the spectrum converges, or you can use a trick in turbo_spectrum. Here you can see the Lanczos coefficients that I showed before in the turbo_lanczos output: at every iteration you have the alpha, beta and gamma coefficients of the tridiagonal matrix. This is a plot of the beta coefficients, and you can see there are two different families, the odd ones and the even ones. The coefficients oscillate around half of the kinetic-energy cutoff, around 50 Ry in this case, and you can see that after a certain point they change very little; it is almost a plateau. So we can use an extrapolation method to increase the number of coefficients without actually computing them: we compute a certain number of coefficients of the tridiagonal matrix -- here I computed 500 -- and we use the extrapolation to extend them; the extrapolated coefficient is the average of the beta and gamma coefficients that we computed. The more coefficients you compute, the better the extrapolated value, and with the larger number of coefficients the spectrum converges much faster. This trick is very good for obtaining a converged spectrum without computing thousands and thousands of coefficients. Now let's try the extrapolation. This is all done in the post-processing, so it is essentially free: it costs almost nothing. Here we extrapolate to 20,000 coefficients. We run turbo_spectrum again with the extrapolation, plot with gnuplot, and this is the spectrum with extrapolation: you can see it is totally different from before. You can see the real difference better in the PDF of the hands-on: there you see, in red, the 500 iterations with extrapolation that I computed, then 1,000 iterations with extrapolation and 1,500 iterations with extrapolation, and you can see the convergence of the spectrum. So the extrapolation is a powerful tool to reach a converged spectrum without extracting many, many coefficients from turbo_lanczos. Okay, I am a little bit late. Are there some questions on this part, before I leave the second part to Yuri? There are some questions on Slack and on YouTube, but most of them are answered; I think there are no open questions. If someone has a question, please raise your hand. Okay, if not, we can move on. Okay, thank you.
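(To put the extrapolation trick just described into formulas -- a schematic sketch of the idea, not the exact expressions used in the code: the Lanczos recursion builds a tridiagonal matrix

\[
T_N=\begin{pmatrix}
\alpha_1 & \gamma_2 & & \\
\beta_2 & \alpha_2 & \gamma_3 & \\
 & \beta_3 & \alpha_3 & \ddots \\
 & & \ddots & \ddots
\end{pmatrix},
\qquad
\beta_j,\ \gamma_j \;\to\; \bar\beta \equiv \langle\beta_j\rangle_{j\le N}
\quad\text{for } j>N ,
\]

i.e. beyond the N coefficients actually computed, the matrix is continued with the averaged values, with the odd and even coefficients averaged separately in the oscillatory ('osc') variant as I understand it, so that the spectrum can be evaluated at an effectively much larger itermax at negligible cost.)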
And if you want to ask me something, a breakout room can be created for me, so maybe I can also help the tutors there. Okay. Thank you very much, Oscar. Thank you, Oscar. Thank you very much. Hello again to everybody. Okay, so we have done half of the hands-on, and now we are going to do the second half. I show this slide from the lecture just to summarize the first half with Oscar: you just saw how to use turbo_lanczos and turbo_davidson to compute optical absorption spectra of molecules using the two methods at the bottom, the Liouville-Lanczos method and the Casida-Davidson method. In the second part we will study solids instead of molecules -- no molecules any longer -- and we will study a different spectroscopy, which is electron energy loss. We will do that using two methods. The first one is Liouville-Lanczos again; this method is very powerful and can be used for molecules, solids and different spectroscopies. The second one is the Sternheimer method, which was not discussed by Oscar so far. Okay, let me switch to the other slides. We have two remaining exercises: the first one uses the turbo_eels code with the Lanczos algorithm for silicon, and the second one the turbo_eels code with the Sternheimer algorithm, again for silicon. By the way, the Sternheimer algorithm was implemented by Oscar. Okay, this is the recap from the lecture; maybe we can skip it in the interest of time. Just note that in the red box we highlight the interaction terms, which are included: we do not make the independent-particle approximation that Oscar showed for molecules, we go directly to the complicated, fully interacting case. So the red term is included, and in green we highlight the perturbation, which in this case is a beam of electrons: we irradiate a chunk of silicon with an electron beam, whereas in what Oscar was discussing you take a molecule and you shine light on it, so it is a different thing. What we want to compute with the turbo_eels code is chi(q, omega), the susceptibility; once we have chi, we take its imaginary part, multiply it by a normalization factor, and obtain minus the imaginary part of epsilon minus one, the inverse dielectric function, which can be directly compared with what experimentalists measure. Okay, so we use the Lanczos algorithm. The pw.x input for silicon -- you are already experts with input files for pw.x -- uses a quite dense k-point grid, 10 by 10 by 10, shifted: the 1 1 1 means it is shifted. After running the pw.x calculation, we run the turbo_eels calculation. Let me describe the input for the turbo_eels code right away. It looks quite similar to the one Oscar just showed for turbo_lanczos: we have the prefix and outdir, which are the same as in pw.x, and the restart keyword, the same one you just learned how to use. In the lr_control namelist there are a few things which are new. We still have the itermax variable, the number of Lanczos iterations, as before, and we also do 500 iterations. There is a new keyword called calculator: it can have two values, 'lanczos', which means we use the Lanczos algorithm, and 'sternheimer', which means we use the Sternheimer algorithm. For now we use Lanczos. And another three variables are q1, q2, q3, which are the three components of the q vector.
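(For orientation, a hedged sketch of what such a turbo_eels input looks like -- keyword spellings from memory and values as used in this exercise; check the distributed input files and the TDDFPT documentation:

  ! turbo_eels.x input (sketch)
  &lr_input
    prefix  = 'silicon'          ! same prefix as the pw.x run
    outdir  = './tmp/'           ! hypothetical path
    restart = .false.
  /
  &lr_control
    calculator = 'lanczos'       ! or 'sternheimer'
    itermax    = 500             ! Lanczos iterations; not used by the Sternheimer solver
    q1 = 0.866, q2 = 0.0, q3 = 0.0   ! transferred momentum, Cartesian, units of 2*pi/a0
  /

The meaning and the numerical value of q1, q2, q3 are explained right below.)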
I recall that the q vector is the so-called transferred momentum: when an electron is scattered in the chunk of silicon, it loses energy and it loses momentum, and q is the amount of momentum lost in this inelastic scattering. We specify its three components in Cartesian coordinates, in units of 2 pi over a0. So let's discuss in more detail how to specify the q vector; it is on the next slide. The vector q has three components, q1, q2, q3, in units of 2 pi divided by a0, where a0 is the lattice parameter. In the case of silicon this is controlled by celldm(1), which is 10.26 Bohr. For example, let's say we find some paper in the literature where people computed the electron energy loss spectrum of silicon; they show the spectrum and they write that the modulus of q is 0.53 Bohr^-1 and that q is along the (1, 0, 0) direction in the Brillouin zone. We say, okay, I want to do the same calculation; how do I specify it in the turbo_eels code? We do it the following way: we remember the formula at the top, and we know that the modulus of q is 0.53. Since q is along the (1, 0, 0) direction, q2 and q3 are zero, and only q1 is nonzero. We know the modulus of q and we know a0, so we can compute q1, which is easy: q1 equals the modulus of q multiplied by a0 and divided by 2 pi. If we do the math, q1 = 0.53 x 10.26 / (2 pi) = 0.866. This is why in the input we have 0.866, zero and zero. Okay, and this is the third step -- let me describe it and then we do the calculation. The third step is the post-processing step using the turbo_spectrum.x program, which Oscar already discussed. We have the usual prefix and outdir. We need to set the keyword eels = .true. to tell the program that we are now post-processing a turbo_eels run, not turbo_lanczos, so that the code knows we are doing EELS spectroscopy and reads the temporary files properly. We have itermax0 and itermax, which Oscar described, and we can use the same extrapolation technique or not, as Oscar explained. For the time being, let's say we do not use extrapolation, so itermax0 and itermax are both 500. Why 500? Because in the turbo_eels run it was also 500, so we keep the same number. Then we have the broadening, epsilon, 0.035 Rydberg; the units of the energy axis can be different, Rydberg or electron volts, and we use the value that corresponds to electron volts for the plot of the spectrum. And we have the start, end and increment for the energy axis, also in electron volts. Okay, so now let's do these three steps. We go to example three, and we do the calculations on the cluster to make it faster. We type remote, mpirun pw.x, and read the input; we use 20 cores. Let's monitor the execution of the program. Okay, it's finished, it was fast, this is good. Now let's run the second step using the turbo_eels program: we type remote, mpirun, the name of the program, which is turbo_eels.x, and read the input, turbo_eels for silicon. Let's monitor the execution. Okay, it's running; it should take maybe a few minutes, so let's let it do its job. While waiting -- I hope I'm not going too fast -- let me comment on what we are going to obtain. After these three steps we will obtain a file containing information about the inverse dielectric function, epsilon to the power minus one, at a fixed q vector, or q point, because q is fixed.
And it will be a function of the frequency omega. Of course, this is a complex object; it is not a real function but a complex one, so we can evaluate its real and imaginary parts using these formulas. And from the knowledge of the real and imaginary parts of epsilon minus one we can also obtain the real and imaginary parts of epsilon: epsilon minus one here is the macroscopic inverse dielectric function, so to compute epsilon we just take one divided by epsilon minus one. These are macroscopic functions, and local field effects are included in the calculation: if we simply took one over the head of the microscopic dielectric matrix, that would mean neglecting local field effects, but here the full epsilon minus one has been computed and it is already macroscopic, so dividing one by epsilon minus one gives us epsilon with local field effects included. So the file that we obtain from turbo_eels has information about the real and imaginary parts of epsilon minus one and also the real and imaginary parts of epsilon, which we can plot. Without extrapolation we should obtain this figure: on the y-axis we have the intensity of the EELS spectrum, on the x-axis the energy, and we see that there are wiggles because the spectrum is not yet converged. So let's see how the calculation is running and whether we manage to obtain the same figure. It's running; it should take maybe five minutes, so let's wait. Meanwhile, we can answer some questions if there are any; you can also ask questions about the previous exercises explained by Oscar. Now we will do just one calculation with 500 iterations, which takes some time, and since we don't have time to do the convergence tests by increasing the number of iterations to 1,000, 1,500, et cetera, you can do that when we finish the hands-on, and if you have problems you can ask on Slack. As explained in the readme file for this example three, what you have to do is described there: in point five it is written that you need to compute the EELS spectrum without extrapolation, so extrapolation = 'no', for 500 up to 1,500 iterations with a step of 250, and compare them in one figure; and then do the same steps, but this time using the extrapolation technique, so extrapolation = 'osc', which is the bi-constant extrapolation. You can use the restart keyword, as Oscar explained, to speed up the convergence tests and not start from zero each time. A second note: when you do these convergence studies, you need to change the name of the output file. Here it is written to change the name of the input file, but actually you had better not do that, because we use the cluster and there are scripts; so, to avoid problems, don't change the name of the input, change the name of the output files. Then, once you have these convergence tests, you are asked which conclusion you can draw by comparing the results. So this is what you can do when you have some time. Okay, we can see that the calculation is finished, and now we need to do the third step, the post-processing step, using the turbo_spectrum program. So let's do that: type remote, mpirun, the name of the program, which is turbo_spectrum.x, and read the input, which is turbo_spectrum.silicon.in. This should take just a few seconds: if turbo_eels takes, let's say, five minutes, this takes five seconds. And indeed it's finished. Okay, now it's time to copy the files from the cluster to the local machine. On the cluster, if we connect to it, we see that the program produced a temporary directory there,
I mean, in this example we do it this way, and the directory contains many files which we don't want to copy. We have the output files of turbo_eels and turbo_spectrum, but we are actually interested in this one file that was generated, and we will copy just this file, because it contains the information about the real and imaginary parts of epsilon and of epsilon minus one. So let's remember its name and copy it to our local machine: we just type rsync from the HPC and the name of the file, which is silicon.plot_eps.dat. Okay, the file is here; let's inspect it and open it. We see a lot of information computed by the turbo_eels program. There are five columns: the first column is h-bar omega, the energy in electron volts; the second column is the real part of one over epsilon; the third column is minus the imaginary part of one over epsilon; and the last two columns are the real and imaginary parts of epsilon, as I just showed on the slide. We want to plot the EELS spectrum, so we just need the first column and the third column, and this is done by the gnuplot script, plot_spectrum: you can see that this script plots the third column as a function of the first column. Okay, so now it's time to execute it: gnuplot plot_spectrum. This produces the figure file with the silicon spectrum, and we can visualize it. Okay, so this is the spectrum: the electron energy loss spectrum of bulk silicon at 500 iterations, without extrapolation. If you have any problems obtaining this spectrum, please let us know. So this was done with 500 iterations, and you need to repeat it up to 1,500 iterations in steps of 250; I recall that you should use the restart option to save time. What you can see, interestingly, is that the spectrum converges as you increase the number of iterations: at 500 iterations there are unphysical oscillations, and you cannot compare this to experiment, but with more and more iterations it gets smoother and smoother, and at the end of the day, with 1,500 iterations, you have a very nice, smooth, clean EELS spectrum of bulk silicon, which you can compare with experiments. And remember that this was computed at a fixed k-point mesh, 10 by 10 by 10, shifted 1 1 1, so we also need to converge the spectrum with respect to the k-points; this is one of the next tasks. Yes, and as I said before, to speed up the convergence we can use the extrapolation technique, as we already know. In this case we can extrapolate, let's say, up to 20,000 iterations, but you can take any other number, 30,000, 50,000 -- of course the more the better, and it doesn't really influence the computational cost, since the calculation is very fast, just a matter of seconds. So you can use a large number, but of course there is no point in going to crazy numbers like a million or something; the order of tens of thousands should be sufficient. And as you can see, if we repeat the same steps from 500 to 1,500 iterations, but this time with extrapolation to 20,000, already at 500 iterations we are essentially converged: it is almost identical to the black curve. So you can really see that in your real-life studies and applications you should always use the extrapolation technique, because it allows you to compute the spectrum at a lower computational cost. And you can verify that this is true by doing this exercise when you have some more time.
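(Schematically, the quantities in the columns of silicon.plot_eps.dat are related as follows -- a sketch in standard notation, with the q and omega arguments suppressed:

\[
\epsilon_M^{-1}
=\mathrm{Re}\,\epsilon_M^{-1}+i\,\mathrm{Im}\,\epsilon_M^{-1},
\qquad
\epsilon_M=\frac{1}{\epsilon_M^{-1}},
\qquad
\text{EELS intensity}\ \propto\ -\,\mathrm{Im}\,\epsilon_M^{-1}
=\frac{\mathrm{Im}\,\epsilon_M}{(\mathrm{Re}\,\epsilon_M)^2+(\mathrm{Im}\,\epsilon_M)^2},
\]

so plotting column 3 versus column 1 gives the loss spectrum, while columns 4 and 5 give the macroscopic dielectric function itself.)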
Okay, this is just a comment that the turbo_eels code gives you not only the loss spectrum, which is the red curve, with the maximum corresponding to the plasmon. I remind you that the plasmon is a quasi-particle: plasmons are collective excitations of the electrons. When you excite your system with a beam of electrons, the electrons of silicon are excited collectively -- these are not individual excitations, the electrons really feel each other and are excited collectively -- and if you quantize this collective excitation, the quantum of the excitation is called a plasmon; it is a quasi-particle. On this figure we also plot other quantities, the real and imaginary parts of epsilon: the real part is the green curve, and you can see it can even take negative values, while the imaginary part is the purple curve. It is useful to plot the real part of epsilon too. Why? Because it helps us identify where the plasmon is: the definition of the plasmon is that the peak in the red spectrum occurs at the energy where the real part of epsilon crosses zero with a positive slope, going from negative to positive values. Here this happens around 16 eV, while the plasmon peak is around 21 eV, so they are not at exactly the same energy; this is because of broadening effects, but more or less we can identify that this intense peak is indeed the plasmon. And by the way, just as a comment, where the real part of epsilon is zero with a negative slope, going from positive to negative, you find peaks in the imaginary part of epsilon. Okay. And now we have the last example, and just a little time left. Example four is the most complex example of the whole school. This is because the implementation of the Sternheimer algorithm is very recent: as I said, Oscar implemented it, I think last year or a bit more than a year ago. So it is recent and of course not yet perfect; there are optimizations and many other improvements to be done. It is a first version, but it is already great that we have it working, because we can test the idea of the Sternheimer method that was discussed during the lecture. Even though it is slow, we can see that this method gives exactly the same spectrum as Lanczos: the two methods are very different, but they give exactly the same spectrum. This is very interesting, and we want to check it. A few warnings: as I said, it is quite expensive, so we need to use the HPC. Don't even attempt to run it on the virtual machine with two cores: it will not work, it will crash because of disk space limits, memory and everything. We can also try the parallelization over k-point pools, with or without. Okay, so this is the input. As usual, step one is the pw.x calculation and step two the turbo_eels calculation. Now we specify calculator = 'sternheimer' instead of 'lanczos'. We no longer have the itermax keyword -- there are no iterations, which were specific to Lanczos -- and we don't have a restart option yet for the Sternheimer solver. We keep q1, q2, q3, the coordinates of the q vector, and we also give here the keywords to plot the spectrum: the broadening epsilon, the units (eV), the start and end of the plotted range and the increment. We don't need to run the turbo_spectrum post-processing here; since it is a different algorithm, we just do two steps: first pw.x and second turbo_eels with Sternheimer. Okay. Before you start running the calculation, there is an important notice.
You need to do a preparatory step. We go to example four and let me find it -- it is actually in the readme file of day six, not inside example four, but in the general readme of day six. There is a note written there: you need to do git stash, git pull, git stash apply, then close the terminal, open a new terminal and start doing exercise four. You may have done something like this already, but please make sure you really did exactly these steps: you need to download the latest version of the material, and importantly, you need to close the terminal, because when you open a new one it will apply the important changes which are done underneath. So this is an important update; once it is done, you are ready to go. In example four we run the pw.x calculation, which takes a few seconds, so I run: remote, mpirun pw.x -inp pw. Okay, let's check. Okay, it's finished. And the final step is submitting the turbo_eels calculation with the Sternheimer algorithm: remote, mpirun, turbo_eels with the Sternheimer input. Okay, so it's running. Note that this calculation will take about two hours on 20 cores. As I told you, it is very expensive because the code is not yet optimal; of course it is in our plans to optimize it and make it much faster and more efficient, but for now, as a proof of concept, we can use this implementation to compare with the Lanczos algorithm. So I don't recommend using the Sternheimer algorithm for production purposes; you should rather use the Lanczos algorithm with turbo_eels. If you face any issues, please post your questions on Slack. And if it doesn't work, don't worry, because it is of course a very complicated example; but if you really want to complete it successfully, we are available on Slack to discuss. So it will take two hours, but let me show you the final steps and the final result. This is the EELS spectrum of bulk silicon computed using Lanczos, the blue line, which we just discussed in example three -- the converged one, using 500 steps with extrapolation -- and the red dots, the red circles, are the one we are computing now using Sternheimer. You can see that the two spectra are really on top of each other. Lanczos is the more computationally efficient algorithm because, as Oscar said, we need to do just one Lanczos calculation: it does not depend on the frequencies, all frequencies are obtained in one shot, so we can have as fine a frequency step as we want. With Sternheimer we need to do a separate calculation for each frequency; that is why we have a discrete number of points -- we computed just 31 points, because each point is a separate, very expensive calculation. If you want a denser spectrum, with a smaller distance between points, you have more calculations to do, so more computational cost. For Lanczos it doesn't matter: you can extend the plot to 100 eV or as far as you want at no extra cost. You just need to keep in mind that if you go high in energy you start touching excitations from the core levels, so if you want to describe excitations from the core levels you need to make sure that your pseudopotential has those levels included. Typically -- I mean, the idea of pseudopotentials is precisely not to have core levels -- I was referring more to semi-core levels, deeply lying electron states, maybe around minus 20 or minus 30 electron volts and deeper.
But if you are interested in core-level spectroscopy, core-level EELS, then this code is not appropriate, because it is for valence loss spectroscopy; we are not studying core-level spectroscopy here. Okay, and the last point is the convergence with respect to the k-point mesh. You can do it either with Lanczos or with Sternheimer, but of course it is better to go for Lanczos because it is way more efficient. You can see that with a 4x4x4 k-mesh the spectrum is not smooth and not converged, while with a 10x10x10 k-mesh it is already smooth and converged; that is why we used 10x10x10. And the almost-last point is the plasmon dispersion. Using the turbo_eels code you can compute EELS spectra corresponding to different wave vectors q. In this exercise we computed the spectrum for a fixed q, the 0.53-something value, but you can do the calculation for different values of the q vector, for example from 0.1 up to 1.3, plot them on the same figure, and see how the spectrum changes with the value of the transferred momentum q. For small transferred momentum you have one sharp peak, and as q increases in modulus the peak shifts to higher energies and broadens. It broadens because this peak is the plasmon, the collective excitation, and its energy is transferred: the plasmon enters the electron-hole continuum, that is to say, the energy of the plasmon is transferred to electron-hole pairs. This is shown here: if you determine the peak position of each spectrum and plot it as a function of q, you should obtain something parabolic-like, and this is called the plasmon dispersion. As I discussed before, this parabolic-like curve is the plasmon dispersion, and the electron-hole continuum is indicated by the dashed region: when the plasmon dispersion touches and then enters the continuum, shown with the dashed line, the plasmon energy is lost, transferred to the excitation of electron-hole pairs. If you are interested in more details, you can read about this, for example, in the book shown below. And I think it is time to finish this hands-on; if you have questions about any of the examples, please ask now. I see one question on Slack: how does the XSpectra package calculate XAS spectra with pseudopotentials? In Quantum ESPRESSO there is another code, called XSpectra, to study core-level spectroscopy. As I just mentioned, here we are discussing valence electron excitations, but what if we are interested in the spectroscopy where we excite core electrons? This is described by XAS, X-ray absorption spectroscopy, and there is a code in Quantum ESPRESSO to do that; we don't discuss it during this school. The idea is that in X-ray absorption spectroscopy, for core-electron excitations, we still use pseudopotentials. In that spectroscopy what happens is that we excite a core electron, for example from the 1s level, which is very deep, up to the lowest unoccupied bands or the Fermi level; when it is excited, the electron goes from 1s to the empty bands, but it leaves behind a core hole on the 1s level, and we need to take this into account in the calculation. There is a problem, though, because we are using pseudopotentials, and in pseudopotentials we don't have core levels. So what to do? There are different ways to compute XAS spectra; the one used in Quantum ESPRESSO is based on the idea that we need to generate special pseudopotentials that take
I think it's time to finish this hands-on. If you have any questions about any of the examples, please ask now. I see one question on Slack: how does the XSpectra package calculate XAS spectra with pseudopotentials? In Quantum ESPRESSO there is another code, called XSpectra, to study core-level spectroscopy. As I just mentioned, here we are discussing valence electron excitations, but what if we are interested in spectroscopies where we excite core electrons? This is described by XAS, X-ray absorption spectroscopy, and there is a code in Quantum ESPRESSO to do that; we don't discuss it during this school. The idea is that in X-ray absorption spectroscopy we still use pseudopotentials, but what happens in that spectroscopy is that we excite a core electron, for example from the 1s level, which is very deep, to the lowest unoccupied bands, up to the Fermi level. So when we excite it, the electron goes from 1s to the empty bands, but it leaves behind a core hole on the 1s level, and we need to take this into account in the calculation. There is a problem, though: we are using pseudopotentials, and in pseudopotentials we don't have core levels. So what to do? There are different ways to compute XAS spectra, and the one used in Quantum ESPRESSO is based on the following idea: we generate special pseudopotentials that take into account the presence of the core hole. For example, suppose you have bulk nickel and you want to compute the XAS spectrum of bulk nickel. Since there are several nickel atoms, we construct a supercell, and the core hole is localized on one of the nickel atoms. So for all nickel atoms except one we use the usual pseudopotential, but for the one nickel atom where we want the core hole to sit, we use a special pseudopotential generated with the core hole. With this we do the DFT calculation, and on top of it we use the XSpectra code to compute this spectroscopy.

Okay, next question: what is the difference between the chi.dat and eels.dat files resulting from the turboEELS code? Yes, I haven't discussed this; let me go back to example three. There, the turboEELS post-processing generated two files for silicon, a plot_chi file and a plot_eps file. As you can guess, the plot_eps file contains all the information about the dielectric function epsilon and its inverse, while the plot_chi file contains information about the chi function. If we open it, we see that the first column is the energy, the second column is the real part of chi, and the third column is the imaginary part of chi. So what is this chi? It is the susceptibility. As I said, turboEELS computes chi, and then in the post-processing step we just apply the formula epsilon^-1 = 1 + (4 pi e^2 / |q|^2) chi. So the plot_chi file contains chi, and the plot_eps file contains the real and imaginary parts of epsilon and of epsilon^-1 (four data columns). Typically we are not interested in chi by itself, because we cannot compare chi directly to what is measured in experiments; we need to rescale it a bit to obtain epsilon^-1. That's why I didn't even mention this file, but just in case, you should know that it is there.
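Just to make the relation between the two files concrete, here is a minimal post-processing sketch; it is not the actual turbo_spectrum code, and the file name, the three-column layout (energy, Re chi, Im chi) and the choice of Hartree atomic units (e^2 = 1, q in bohr^-1) are my assumptions.

    # Minimal sketch (assumed file name, columns, and atomic units):
    # build eps^-1(q, omega) = 1 + (4*pi*e^2/|q|^2) * chi(q, omega) from the chi file.
    import numpy as np

    q = 0.5                     # |q| in bohr^-1; must match the value used in turbo_eels
    omega, re_chi, im_chi = np.loadtxt("silicon.plot_chi.dat", unpack=True, usecols=(0, 1, 2))
    chi = re_chi + 1j * im_chi

    v_c = 4.0 * np.pi / q**2    # bare Coulomb potential, with e^2 = 1 in Hartree atomic units
    eps_inv = 1.0 + v_c * chi   # inverse dielectric function
    loss = -eps_inv.imag        # loss function -Im[1/eps], i.e. the EELS spectrum

    np.savetxt("loss_from_chi.dat", np.column_stack([omega, loss]))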
There is another question on YouTube: can the Lanczos calculation be used for calculating the plasmon spectrum of molecules, and how accurate is it? The answer is yes, you can try to use turboEELS for computing plasmons in molecules. I have never worked on this, but you can try, of course, or maybe even on nanoparticles, I think; for that it would probably also be useful to use turbo_lanczos, but I don't have experience with plasmons in molecules.

Any other questions, also on the part presented by Oscar? So, what does the peak mean in the EELS spectrum of silicon? Let's see. The red spectrum is the EELS spectrum, and the peak is the plasmon. I explained before why this is a plasmon: because the real part of epsilon is zero, not exactly at the same energy but nearby. This spectrum can be compared with experimental spectra, and if you are interested in the field of plasmonics, you can use this technology to determine plasmon spectra. Yes, there is also quite recent activity on acoustic plasmons, which are plasmons at lower energies, not around 20 eV but really around zero, and they have an acoustic behavior, like acoustic phonons if you want. There are acoustic plasmons at surfaces, for example: they behave linearly, they start from zero, I mean not the spectrum but the plasmon dispersion, which is shown here. If you plot the plasmon dispersion as a function of q, it starts from zero and is linear over some range. They are interesting for some technological applications, like light harvesting and the like. And if you study surfaces, there are also surface plasmons, which you can also investigate using the turboEELS code, for example conventional surface plasmons or acoustic surface plasmons. Yes, there is a community working on plasmonics; maybe it's not that large, but in case some of you are interested, this code can probably be useful for your research.

I think someone is typing, and you can also raise your hand. The question is about getting information on the orbitals where the transition takes place; I guess this is about molecules, correct? Absorption spectroscopy in molecules. Oscar, if you want to answer, otherwise I can. Yes: with Casida and with Davidson you can see the initial and final orbitals; with Lanczos this is not possible, sorry, there we work only with the occupied space. Okay, if there are no other questions, maybe we can stop here; you can post your questions on Slack.

Yuri, just one thing: thank you very much, and Oscar as well. This was your last lecture; you have given several lectures in this school, and as usual they turned out very clear and very well organized, so the work behind them is evident. Thanks, Yuri. Thank you very much, see you at some point, looking forward to it, really, thank you. Okay, so you can close the session here, and we'll be back at 2:30 CEST, Central European Summer Time, for today's keynote.