Thank you very much, Ivan, for all these details. So good morning to everyone. I'm Arsena Stropa, and I will be the moderator for this session. It's a great pleasure today to have Stefano Baroni, who will give a talk about density-functional perturbation theory and phonons. So I leave the floor to Stefano. Thank you. Good morning, everyone. Do you hear me fine? Yes. OK. Thank you for being with us once again. Let me start by trying to share my screen. Just a few seconds, hopefully. It's fine. I guess you see the first slide of my presentation, right? Yes, it's OK. Good. So today I will try to guide you through the combination of density-functional theory and quantum-mechanical perturbation theory that allows us to compute, self-consistently, response functions, phonons, and related quantities. What response functions have to do with phonons, hopefully I will be able to show you shortly. So let me start with a disclaimer. The material in this lecture is advanced, in some sense, and its proper understanding requires a background in quantum mechanics that not all of you are supposed to have. So, at the very least, I would urge everyone to concentrate on the general concepts and terminology, and to delay a detailed understanding of the theory behind them to a later stage if you are not able to follow all the details. You can ask questions to me or to the other lecturers during the school or, better, study, or at least have a read of, the founding paper of this branch of density-functional theory, that is, the Reviews of Modern Physics paper by Stefano de Gironcoli, Andrea Dal Corso, Paolo Giannozzi, and myself that appeared 20 years back. So, what are response functions all about? Computer simulation is about the computation of some physical properties of given systems, such as, I don't know, the equilibrium volume of a system or the forces acting on atoms. These variables depend, of course, on the positions of the atoms that constitute the material or the molecule, and they also depend on the strength of an external field that you may let act on your system. For instance, the polarizability of a system, or its dielectric constant, is the derivative of the total dipole of the system with respect to the strength of an applied electric field. In this case the variable is the dipole, and the external perturbation is the electric field. The derivative of the variable with respect to the strength of the external field gives you a response function. Elastic constants are another example of a response function: here what you measure is the dependence of the stress, or the pressure exerted by the system, on the application of an external strain. Piezoelectric constants are derivatives of the system's dipole with respect to an applied strain, a deformation of the sample. All of these properties are macroscopic, in the sense that the perturbation acting on your system acts on the whole of the system; it is something that you apply at the macroscopic scale. You can also think of microscopic response functions, where the perturbation is something defined at the atomic or molecular scale. For instance, you can ask yourself how the force acting on an individual atom depends on the displacement of any other atom.
So the derivative of the force acting on one atom with respect to the position of another atom is a response function, a microscopic response function, because the external perturbation, that is, the displacement of an atom, can only be defined at the molecular scale. And this response function is known by the name of interatomic force constant. So an interatomic force constant is the derivative of the force acting on one atom with respect to the position of any other atom. You can also define Born effective charges (I may have made some unnoticed error on the slide here), which are the derivatives of the total dipole of the system with respect to the position of every atom in the system. And you can continue: according to the physical problem you want to address, you can define many, many different response functions. The main theoretical ingredient of response theory is the Hellmann-Feynman theorem, which many of you may have heard of. But according to my previous experience in this series of schools, I remember that many of the attendees had never heard of this very important, fundamental theorem, so I would like to briefly summarize its statement and give the very brief and elementary proof of it. Suppose you have a time-independent Schrödinger equation, that is, an eigenvalue equation for the energy of our system, which says that the Hamiltonian acting on a wave function returns the very same wave function multiplied by a constant, the energy of that particular energy level. The Hellmann-Feynman theorem addresses the following problem. Suppose that the Hamiltonian depends on some external parameters. For example, the Hamiltonian may depend on some external field, and the parameter lambda may be the strength of that field. Or maybe, in the Born-Oppenheimer approximation, the Hamiltonian depends on the positions of all the atoms in your system, and in this case lambda would be the collection of atomic coordinates. Whatever it is, you can always think in abstract terms of a quantum-mechanical operator H depending on a collection of parameters. These lambdas are not dynamical variables. They are parameters that you as a theorist should feel the freedom to set at your will, and that conceptually an experimentalist should be able to tune in the laboratory. Then the Hellmann-Feynman theorem addresses the following question. Suppose you have a means of solving this eigenvalue equation: how does the eigenvalue depend on the external parameters on which the Hamiltonian depends? Of course, you have a different Hamiltonian for different values of the parameter; hence, the eigenvalues and eigenvectors of the Hamiltonian also depend on the parameter. Now, the question is: what is the derivative of the eigenvalue with respect to lambda? Because the eigenvalue can always be thought of as the expectation value of the Hamiltonian over the eigenvector, the derivative of the eigenvalue is also equal to the derivative of that expectation value. Now, in order to compute this derivative, you just have to apply the chain rule. This expectation value depends on lambda three times: you have the dependence of the bra, the dependence of the operator, and the dependence of the ket. So you differentiate.
You first differentiate the bra, then you differentiate the operator, and then you differentiate the ket. But now you exploit the fact that psi is an eigenvector of the Hamiltonian, so H psi is equal to E psi. Therefore, you can bring E out of this bracket here and this bracket there, and you end up with the expression that the derivative of the eigenvalue is equal to the expectation value of the derivative of the operator, plus the eigenvalue times the derivative of this scalar product. But that scalar product, what is it? It is the norm of the eigenvector, and the norm of the eigenvector is always 1. Hence, here you have the derivative of a constant, and the derivative of a constant is 0. We conclude that the derivative of the eigenvalue is equal to the expectation value of the derivative of the operator. This is by no means a trivial thing, because not only does the operator in this expectation value depend on lambda, but so do the bra and the ket, the wave function over which we compute the expectation value; so the fact that the derivative of the eigenvalue does not depend on the derivative of the eigenvectors is by no means trivial. This property tells you that in order to compute the derivative of the energy, you don't have to compute the derivative of the eigenvector. This is a very powerful principle, which finds very important applications, for instance, in the computation of atomic forces: in order to compute the forces acting on an atom in a system, you don't have to compute the derivatives of the Kohn-Sham orbitals; you only have to compute the expectation value of the derivative of the Hamiltonian. This is the usual demonstration of the Hellmann-Feynman theorem that you find in elementary quantum mechanics and quantum chemistry books. But in a way, it overlooks its generality, and it overlooks the possibility of employing this theorem, for instance, in density-functional theory, where energies are not expectation values of Hamiltonians: they are something more complicated that depends on the density, and not directly on wave functions as in ordinary quantum mechanics. Actually, the key to the theorem is the fact that quantum mechanics is based on a variational principle. And I'm going to show you that the Hellmann-Feynman theorem holds every time you can formulate a problem in terms of a variational principle, be it a quantum-mechanical problem, a structural problem in engineering, or a problem in electrostatics or electromagnetism. Whenever you can formulate a problem in terms of a variational principle, you always have the possibility of stating, in one form or another, the Hellmann-Feynman theorem. Let me go fast over a demonstration. Suppose that you can define a function G of an external parameter lambda as the minimum of a functional g that depends on an internal variable x and on lambda itself. For instance, the energy is the minimum over psi of the expectation value of the Hamiltonian, which depends on lambda. In this language, x plays the role of psi, and lambda plays the role of the parameter upon which the Hamiltonian depends. So you have a function of two variables, and you define a derived function G as the value of the minimum of that function of two variables with respect to one of the two, x in this case.
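In formulas, the two statements we have on the table so far read, schematically:

$$
\frac{dE_\lambda}{d\lambda} \;=\; \Big\langle \psi_\lambda \Big|\, \frac{\partial \hat H_\lambda}{\partial \lambda} \,\Big|\, \psi_\lambda \Big\rangle ,
\qquad
G(\lambda) \;\equiv\; \min_x\, g(x,\lambda),
\quad \text{with, e.g.,} \quad
E(\lambda) \;=\; \min_{\psi}\, \langle \psi | \hat H_\lambda | \psi \rangle
\;\; (\langle\psi|\psi\rangle = 1).
$$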
The question is: how can you compute G prime of lambda, that is, dG over d-lambda? Now, by definition, the position of the minimum, the value of x at which the function is minimum, is found by solving the equation dg over dx equal to 0. Because g depends on lambda, the position of the minimum also depends on lambda. So you have that G of lambda is equal to the value of the function to be minimized, computed at lambda and at that particular value of x that minimizes it. Now let us compute the derivative of this. By using the chain rule, you have that G prime is equal to dg over dx times the derivative of x with respect to lambda, plus the partial derivative of g with respect to lambda, computed at x equal to x of lambda. But dg over dx at x of lambda is equal to 0. Hence, the derivative of G with respect to lambda is equal to the partial derivative of g with respect to lambda alone: you have no contribution from the implicit dependence of the function to be minimized through the x variable. Let us conclude and summarize. If you have an eigenvalue equation, the derivative of the eigenvalue is equal to the expectation value of the derivative of the operator; this is the usual statement of Hellmann-Feynman. But the generalization to any variational principle states that the derivative, with respect to a parameter lambda, of the minimum over a variable x of a function g is equal to the partial derivative of g with respect to lambda computed at the minimum, with no contribution from dg over dx, which would multiply dx over d-lambda. The dx over d-lambda is something that, in general, would have to be computed, and the fact that you do not have to compute it when you rely on a variational principle is a very powerful concept. Now, remember, we want to compute susceptibilities. Remember the definition of a susceptibility: it is the derivative of the expectation value of an operator B with respect to the strength of a perturbation acting on the Hamiltonian. Let us formalize this. You have a Hamiltonian that depends parametrically on a parameter alpha (alpha is what before I had called lambda), and A is the external perturbation, be it an electric field, a strain, whatever: an external perturbation that you let act on the system, with alpha its strength. Then you ask yourself: what is the derivative of the expectation value of some operator, let me call it B, with respect to the strength of the perturbation A? We call this susceptibility chi, and we use the suffixes B and A, which remind us that chi-BA is the derivative of the expectation value of B with respect to the strength of the perturbation A. Now, I play a dirty trick based on Hellmann-Feynman. Suppose that I figure out a different perturbation, not the physical one: A is the physical perturbation, and B is an observable that I want to measure. Figure out a different Hamiltonian where the role of the perturbation is played not by the physical perturbation, but by the operator that I want to measure. I call this H-beta, equal to the same perturbed Hamiltonian plus beta times the operator I want to measure. Because of Hellmann-Feynman, I know that the expectation value of B is equal to the derivative of E with respect to beta, the derivative of the eigenvalue of this Hamiltonian with respect to beta.
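Written out, the dirty trick is simply this (a schematic form of what is on the slide):

$$
\hat H_{\alpha\beta} \;=\; \hat H_0 + \alpha \hat A + \beta \hat B,
\qquad
\langle \hat B \rangle \;=\; \frac{\partial E(\alpha,\beta)}{\partial \beta}
\;\;\Longrightarrow\;\;
\chi_{BA} \;=\; \frac{d \langle \hat B \rangle}{d\alpha}
\;=\; \frac{\partial^2 E(\alpha,\beta)}{\partial\alpha\,\partial\beta}\bigg|_{\alpha=\beta=0} .
$$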
But now, if I have a Hamiltonian that depends both on the physical perturbation A and on the fictitious perturbation B, the quantity I want to measure, then I have, very trivially, that chi-BA is equal to d over d-alpha of the expectation value of B; and since the expectation value of B is equal to dE over d-beta, this susceptibility is the second derivative of the eigenvalue of the Hamiltonian when we let the Hamiltonian depend on two perturbations at a time: the physical one, A, and the fictitious one, B, where B is the quantity we want to measure. So let me generalize this a little bit and consider a Hamiltonian that depends on an arbitrary number of perturbations, on a linear combination of different perturbations whose strengths I call lambda-i, where i goes from 1 to n, the number of external perturbations. Now, I can think of the energy of the system as a function of the lambdas; for sure it is, because H depends on the lambdas, and its eigenvalues certainly depend on them. So I can write a series expansion, a Taylor expansion, of the energy in powers of lambda. The coefficients of the linear terms are minus some generalized forces: I define f-i as minus dE over d-lambda-i. I define them with a minus sign; I could have defined them with a plus sign, but when the lambdas are atomic positions it is handy to think of the f's as forces, and forces by definition carry a minus sign. So you have a linear term whose coefficients are generalized forces, and you have quadratic terms, plus higher-order terms. The forces are used in molecular simulations for doing structural optimization and molecular dynamics. The second derivatives, we just saw, are response functions: macroscopic static response functions, including elastic constants, the dielectric tensor, the piezoelectric tensor, and so forth; and microscopic response functions, such as interatomic force constants, Born effective charges, et cetera. So let us try to give a name to these first and second derivatives using perturbation theory. How should we proceed? It is simple: you go to page 33 of your favorite textbook in quantum mechanics and compute the first-order, second-order, and higher-order terms in perturbation theory. If you do so, you will realize that the forces, the f-i's, are nothing but expectation values of each individual perturbation over the ground state. Because if v-i is a local potential, a local potential only couples to the charge-density distribution, and the force is nothing but minus the integral of each perturbing potential times the charge-density distribution (probably there is a minus sign missing here on the slide). You see here the power of Hellmann-Feynman: you don't have to know the derivative of the wave function, or the derivative of the density, in order to compute forces.
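Schematically, the expansion and the first-order, Hellmann-Feynman, result for local perturbing potentials v_i are:

$$
E(\boldsymbol\lambda) \;=\; E_0 \;-\; \sum_i f_i\, \lambda_i
\;+\; \frac{1}{2} \sum_{ij} \frac{\partial^2 E}{\partial\lambda_i\,\partial\lambda_j}\, \lambda_i \lambda_j \;+\; \cdots ,
\qquad
f_i \;=\; -\,\frac{\partial E}{\partial \lambda_i}
\;=\; -\int v_i(\mathbf r)\, n_0(\mathbf r)\, d\mathbf r .
$$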
You continue your exercise and do second-order perturbation theory. You may remember that second-order perturbation theory implies a sum over excited states. You do the exercise (I don't have the time to do it here, but it's pretty simple) and you expand the second-order correction to the energy as a quadratic form in the lambdas. The matrix of coefficients of this quadratic form is a sum over excited states of matrix elements of the perturbations, the i-th and the j-th, computed between the ground state and the excited states. Now, this is second-order perturbation theory for the energy, at page 33 of your favorite quantum mechanics textbook. But if you go to page 32, just one page before, you will find first-order perturbation theory for the wave function, and you will realize that this object here, this linear combination of excited states, is nothing but the first-order correction to the wave function. And a little exercise shows you that the second derivative is nothing but the integral of the i-th perturbing potential times the derivative of the density with respect to the strength of the j-th perturbation. So this response function, E-ij, is the integral of the i-th perturbation times the density response to the j-th. We have played this game by identifying the response to the j-th perturbation, but you could as well have played the same game with the i-th perturbation, and you would end up with an alternative expression for the second-order derivative: the integral of the j-th perturbation times the response of the density to the i-th. This I will skip. So let us frame these same results in terms of density-functional theory. You have your external potential acting on the system, equal to an unperturbed potential v0 plus a linear combination of perturbations whose individual strengths I call lambda-i. Now, in density-functional theory, the energy of the system is equal to the minimum, over all possible densities, of a universal functional F of n, which does not depend on the external perturbation, plus the coupling of the external potential with the electronic density, under the constraint that the electronic density is normalized to the number of electrons N. Now, what does Hellmann-Feynman say? This is exactly the form of the Hellmann-Feynman theorem that I stated before for a general functional: you have a function of lambda that is expressed as the minimum over all possible n's. This n here is what in the previous slide I called x, the internal variable with respect to which the function has to be minimized. x may be just a single variable, in which case this would be the minimization of an ordinary function. Here the internal variable n is itself a function, because the density is a function of position; then the quantity to be minimized is a functional and not an ordinary function, but the concept is essentially the same. Hellmann-Feynman says that the derivative of E with respect to lambda is equal to the derivative of the functional to be minimized, computed at the minimum, at the extremum. But the universal functional F does not depend on lambda: the dependence on lambda of the functional to be minimized is only in the coupling to the external potential. So the derivative of the functional with respect to lambda, computed at the minimum, is equal to the derivative of v with respect to lambda; and the derivative of v with respect to lambda-i is v-i. Hence the derivative with respect to lambda-i is the integral of v-i times the ground-state electron-density distribution, computed at the specific value of lambda that appears as the argument of the energy.
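In formulas, the density-functional version of Hellmann-Feynman just stated is, schematically:

$$
E(\boldsymbol\lambda) \;=\; \min_{n} \Big\{ F[n] + \int v_{\boldsymbol\lambda}(\mathbf r)\, n(\mathbf r)\, d\mathbf r \Big\},
\qquad
v_{\boldsymbol\lambda} = v_0 + \sum_i \lambda_i\, v_i
\;\;\Longrightarrow\;\;
\frac{\partial E}{\partial \lambda_i} \;=\; \int v_i(\mathbf r)\, n_{\boldsymbol\lambda}(\mathbf r)\, d\mathbf r .
$$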
For instance, if I wanted to compute the derivatives of the energy with respect to atomic positions, which give the atomic forces, I would have to compute the integral of the ground-state density n of lambda, at the given atomic positions, times the derivative of the external potential with respect to atomic positions, which is this v-i. By pushing the differentiation one order higher, differentiating this with respect to lambda-j, I arrive at the expression that the second derivative of the energy with respect to lambda-i and lambda-j is equal to the derivative of this right-hand side; but v-i does not depend on lambda, so the only thing that I can differentiate in this integral is the ground-state charge-density distribution. So the second-order derivative is the integral of the i-th perturbation times the density response to the j-th, which is what we got one or two slides back from elementary quantum mechanics. No surprise, of course: if density-functional theory gave a different result from ordinary quantum mechanics, something would have been wrong. We are pleased that there is no evident error, and we can move on with the take-home message that the second derivatives of the energy with respect to the strengths of some external perturbations are given by the integral of the i-th perturbation times the density response to the j-th; or, because the second derivative is symmetric in i and j, I could as well have said the integral of the j-th perturbation times the density response to the i-th. So, in order to compute these second derivatives, we have to compute the response of the ground-state charge-density distribution. In density-functional theory, the charge-density distribution, as you all know very well, is the sum over all the occupied Kohn-Sham states of the square moduli of the molecular orbitals, or Bloch states. So you differentiate this expression with respect to whatever parameter you have: phi-v squared is phi-v-star times phi-v, and if you differentiate this product, you end up with the expression that the derivative of the ground-state density is twice the real part of the sum over the occupied states of phi-star times phi-prime. In ordinary molecular computations the molecular orbitals are usually real, so you can forget about this complex conjugation. In an extended system, you could always choose them to be real because of time-reversal invariance, but as a matter of fact it is convenient to choose them in Bloch form; Bloch states are not real, and you have to care about this complex conjugation in your algebra. Now you use perturbation theory to express this derivative of the Kohn-Sham orbitals. You go to the same page 32 of your favorite quantum mechanics textbook that you browsed before, and you find an expression for the correction to the v-th occupied orbital. And here the algebra is a little bit tricky, because in your quantum mechanics textbook you will find this sum extended to all the states different from v, the state being perturbed. But then you break this sum into a sum over occupied states v-prime different from v, plus a sum over empty states. In what follows, I will very often label occupied states with the index v, like valence states, and empty states with the index c, like conduction states.
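In formulas, what we have so far reads, schematically:

$$
n'(\mathbf r) \;=\; 2\,\mathrm{Re} \sum_{v}^{\rm occ} \varphi_v^{0*}(\mathbf r)\, \varphi_v'(\mathbf r),
\qquad
\varphi_v' \;=\; \sum_{n \neq v} \varphi_n^0\;
\frac{\langle \varphi_n^0 | V' | \varphi_v^0 \rangle}{\varepsilon_v - \varepsilon_n},
$$

where, for the moment, the sum over n runs over all states different from v.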
And if you make this distinction between the sum over occupied and unoccupied states (I won't do that now, but it's a simple exercise), you can show that the contribution of the sum over occupied states to the charge-density response cancels. It cancels essentially because the denominator is odd under the exchange of v and v-prime, whereas the numerator is even under that exchange, so the contribution from the pair (v, v-prime) cancels the contribution from the pair (v-prime, v), and you remain with the sum over empty states only. This is important for practical applications. This expression for the response of the Kohn-Sham orbitals is what you find in quantum mechanics textbooks, but it has the drawback of requiring a sum over all the empty states. In plane-wave applications, you have thousands, or hundreds of thousands, or even millions of unoccupied states, and it would be totally impractical to sum over all of them. A key ingredient of density-functional perturbation theory is to express this Kohn-Sham orbital response, these first-order corrections to the Kohn-Sham orbitals, not in terms of a sum over empty states, but by solving a linear system: a linear system that is like an inhomogeneous Schrödinger equation. Look at this: if you did not have a right-hand side here, this would be the Kohn-Sham equation, (H minus epsilon) phi equal to 0. This is like a Kohn-Sham equation, but with a right-hand side that is different from 0. This P-c here is a crucial ingredient of density-functional perturbation theory: it is the projector onto the empty states, the sum over all the empty states of |phi-c><phi-c|. It requires some algebra to demonstrate (I won't do this; it is a simple exercise) that this object here is a solution of this equation here: you just take it, multiply by H0 minus epsilon, and you will see that the application of this operator to this function gives this right-hand side. The crucial thing to notice is that in order to compute the projector onto the empty states, you do not need to calculate any empty states, because the completeness of the basis set says that P-c plus P-v is equal to 1, where P-v is the projector onto the occupied states. And the occupied states you always have to compute anyhow: when you do a ground-state computation, when you do a pw.x calculation, pw.x always returns the occupied Kohn-Sham orbitals, the phi-0's. So once you have the phi-0's, you have P-v, and once you have P-v, P-c is simply 1 minus P-v. So in order to compute this projector, you don't have to compute any empty states; you only need the ingredients provided by pw.x. And here all the ingredients are given by pw.x: this is the self-consistent Hamiltonian, which depends on the ground-state self-consistent potential given by pw.x; this is the Kohn-Sham orbital given by pw.x; P-c is 1 minus P-v, and P-v, we saw, is given by pw.x; phi-0 is given by pw.x again. And V-prime is the external perturbation, so it's something that you know beforehand.
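Putting the pieces together, the linear system (the textbook sum over states recast in the form the code actually solves) reads, schematically:

$$
\big( \hat H^0_{\rm SCF} - \varepsilon_v \big)\, P_c\, \varphi_v' \;=\; -\,P_c\, V'\, \varphi_v^0 ,
\qquad
P_c \;=\; 1 - P_v \;=\; 1 - \sum_{v}^{\rm occ} |\varphi_v^0\rangle \langle \varphi_v^0 | .
$$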
So, in order to compute the response of the Kohn-Sham orbitals to an external perturbation, you don't necessarily have to apply the formula that you find at page 32 of your quantum mechanics textbook, which implies a sum over unoccupied states: you can map this onto the solution of an inhomogeneous Schrödinger, or Kohn-Sham, equation, which can be accomplished basically with the same iterative algorithms that are used to diagonalize the Kohn-Sham Hamiltonian and find the unperturbed Kohn-Sham orbitals in the first place. So the take-home message is: response functions require the computation of the density response to an external perturbation; in density-functional theory, the density response is expressed in terms of the responses of the individual Kohn-Sham orbitals; and the response of an individual Kohn-Sham orbital can be computed by solving an inhomogeneous Schrödinger equation. So let us summarize, and let us establish a parallel between density-functional theory and density-functional perturbation theory. Density-functional theory is a method that allows you to establish a one-to-one correspondence between the external potential acting on the electrons of your system and the ground-state charge-density distribution of the system itself. How do you do that? Let me renumber these three steps. You express the density as the sum of the square moduli of the Kohn-Sham orbitals. The Kohn-Sham orbitals are solutions of a self-consistent Schrödinger equation: they satisfy a one-body Schrödinger equation in a potential that is given by the external field, something determined by the physical conditions of your system, plus an internal self-consistent field, Hartree plus exchange-correlation. By iterating these three steps, you establish a self-consistent correspondence between external potential and density. This is well established by now. In density-functional perturbation theory, what you do is simply linearize each of these three steps: basically, you replace the external potential by the variation of the external potential, and the ground-state density by the response of the ground-state density to that variation, and then you have to linearize these three equations. The first, the dependence of the self-consistent potential on the density, is trivially linearized. The second, the dependence of the density response on the orbital response, is also trivially linearized. The third, slightly less trivial, ingredient is the linearization of the Kohn-Sham Hamiltonian. It is possible that I missed a minus sign here; I don't remember exactly. And basically, once you have linearized these three equations of density-functional theory, you have to solve self-consistently the set of three linearized equations. And this is what the linear-response components of Quantum ESPRESSO do. Done.
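Schematically, the three linearized equations that are iterated to self-consistency are:

$$
\Delta n(\mathbf r) \;=\; 2\,\mathrm{Re} \sum_{v}^{\rm occ} \varphi_v^{0*}(\mathbf r)\, \Delta\varphi_v(\mathbf r),
\qquad
\Delta V_{\rm SCF}(\mathbf r) \;=\; \Delta V_{\rm ext}(\mathbf r)
+ \int \Big[ \frac{e^2}{|\mathbf r - \mathbf r'|}
+ \frac{\delta v_{xc}(\mathbf r)}{\delta n(\mathbf r')} \Big]\, \Delta n(\mathbf r')\, d\mathbf r' ,
$$
$$
\big( \hat H^0_{\rm SCF} - \varepsilon_v \big)\, P_c\, \Delta\varphi_v \;=\; -\,P_c\, \Delta V_{\rm SCF}\, \varphi_v^0 .
$$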
So, what has all this to do with atomic vibrations and phonons? Basically, you can think of a perfect crystal, depicted here with blue dots, described by an external potential that is perfectly periodic, with the system at equilibrium. Then you ask yourself what the response of the system is to the displacement of individual atoms. You have many atoms, and you can consider a generic linear combination of displacements of each one of them. To linear order, to lowest order in perturbation theory, the perturbing potential is of course linear in the amplitude of the atomic distortions. And you ask yourself what the dependence of the energy is on the amplitude of these distortions. Of course, you can do a Taylor expansion of this. In principle, the Taylor expansion would start with first-order derivatives: here you would have a linear term that goes like the sum over R of dE over du(R) times u(R). But the derivative of the energy with respect to an atomic position is the force acting on the atom being displaced, or minus the force, if you wish. If you start your computation from equilibrium, and equilibrium means vanishing forces, that is, if you have pre-optimized your structure so that the forces acting on the individual atoms vanish, then by construction the linear term in this Taylor expansion vanishes as well, and the Taylor expansion starts with the second-order terms. The second-order terms are second derivatives of the energy with respect to atomic displacements; second derivatives are first derivatives of first derivatives, and the first derivatives are forces. So these second-order derivatives of the energy are first-order derivatives of the forces, which, in the jargon of lattice dynamics, are called interatomic force constants. The interatomic force constants form a 3N-by-3N matrix, where N is the number of atoms, and you have the extra factor of 3 because the displacements are vectors: you can displace each atom in three Cartesian directions. The eigenvalues of this matrix are the squares of the vibrational frequencies. This eigenvalue problem is a little bit weird, because you have these extra factors here, the masses. Let me comment on where the masses come from. If all the masses are equal, this is just a matter of units; if different atoms have different masses, then you have to take care of the differences amongst them, and if you go to your favorite classical mechanics textbook, always at page 33, you will find the chapter on small vibrations, and there an explanation of where the masses in the eigenvalue equation come from. A thing to be noticed is that these eigenvalues are indicated as the square of a real number, so they have to be positive. Positivity is an expression of the fact that when you have minimized the energy, you are at a minimum: wherever you move off the extremum, the energy always increases. If by any chance you find that the square of a vibrational frequency, an alleged vibrational frequency, is negative, this means that what you thought to be a minimum is not a minimum but a saddle point: by moving in some given direction, the energy would decrease rather than increase. And this is a signal of a possible mechanical instability. So it may be a signal of two different things: either you get negative squared frequencies because there is something wrong in your computation, and you have to fix it; or, if after the best of your efforts the squared frequency remains negative, this is an indication of a mechanical instability, and maybe you have hit bingo: perhaps the stable structure of your system is not what you, or your supervisor, thought, and you may have found some interesting physical effect.
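In formulas, the secular problem just described is, schematically:

$$
C_{s\alpha, s'\beta} \;=\; \frac{\partial^2 E}{\partial u_{s\alpha}\, \partial u_{s'\beta}}
\;=\; -\,\frac{\partial F_{s\alpha}}{\partial u_{s'\beta}} ,
\qquad
\det \left| \frac{C_{s\alpha, s'\beta}}{\sqrt{M_s M_{s'}}} \;-\; \omega^2 \right| \;=\; 0 .
$$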
Now, a crystal is made of an infinite number of unit cells, so in principle you have an infinite number of atoms, and the matrix of interatomic force constants is, in principle, of infinite size. In practice, you can exploit translational symmetry to reduce it. If you wanted to compute the phonons, the vibrational frequencies, of your system by brute force, you would have to take big supercells, and the dimension of your matrix of interatomic force constants would be three times the number of atoms in your supercell. But if you want to accommodate phonons of arbitrary wavelength, of wavelengths of any magnitude, then you have to be prepared to do supercell calculations of arbitrary size, which is not exactly what you usually want to do, because you want to keep your computation as lean and as fast as possible. In this case, symmetry comes to the rescue. If you go to page 33 of your elementary solid-state textbook, where phonons, lattice vibrations, are explained, you will see that phonons can be classified in terms of their wave vector, and phonons of different wavelengths do not interact with each other. So it must be possible to compute phonons of a given wavelength without considering the interaction with phonons of other wavelengths. Let us see how this reflects on your equations from density-functional perturbation theory. Suppose you have an external perturbation of wavelength lambda, which is 2-pi over the wave number q; q is the quasi-momentum of your phonon. Then the right-hand side of your DFPT equation, remember, is the projector onto the conduction states, times the perturbation, times an unperturbed orbital. The unperturbed orbital, because of Mr. Bloch, has a definite wave vector k. But because the perturbation has wave vector q, the product of V-prime times phi-k has wave vector k plus q. P-c is the projector onto the empty states, the empty states of all possible wave vectors, so it does not change the wave vector. All in all, the right-hand side of the DFPT equation has wave vector k plus q. I insert this right-hand side of wave vector k plus q into the inhomogeneous Schrödinger equation. Because the Hamiltonian on the left-hand side is lattice-periodic, the solution must have the same wave vector as the right-hand side: the solution is equal to the inverse of this operator acting on the right-hand side, but this operator is lattice-periodic, and so is its inverse. Therefore phi-prime, the perturbed Kohn-Sham orbital, must have the same wave vector as the right-hand side, so it must be of wave vector k plus q. The right-hand side, because it has wave vector k plus q, must be of the form e to the i (k plus q) dot r times something periodic; the same holds on the left-hand side. So, what seems to be an equation for a wave function extending over the whole crystal can actually be mapped onto an equation for a periodic function, the periodic part of the perturbed orbital. By the same token, by combining the responses of orbitals of different wave vectors, you can demonstrate that the density response has the same wave vector as the perturbation.
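In formulas, the statement about wave vectors is, schematically:

$$
\varphi_{\mathbf k v}(\mathbf r) = e^{i \mathbf k \cdot \mathbf r}\, u_{\mathbf k v}(\mathbf r),
\qquad
\Delta\varphi_{\mathbf k + \mathbf q, v}(\mathbf r) = e^{i (\mathbf k + \mathbf q) \cdot \mathbf r}\, \Delta u_{\mathbf k + \mathbf q, v}(\mathbf r),
\qquad
\Delta n_{\mathbf q}(\mathbf r) = e^{i \mathbf q \cdot \mathbf r}\, \Delta \tilde n_{\mathbf q}(\mathbf r),
$$

with u, Delta-u, and Delta-n-tilde lattice-periodic, so that everything can be solved within a single unit cell.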
And the perturbing self-consistent potential also has the same wave vector as the density response, which in turn, we saw, has the same wave vector as the external perturbation. So the whole computation can be mapped onto a computation for the periodic parts of the wave functions and of the response wave functions, which means that the computational complexity corresponds to a number of electrons that is neither the number of electrons of your crystal (which is infinite), nor that of a supercell (which would grow with the wavelength): for any wavelength, the numerical complexity is the same as that of a computation for the unperturbed ground state. Now, in polar materials you have complications, due to the fact that the displacement of atoms generates macroscopic electric fields. In order to cope with the coupling between macroscopic electric fields and phonons, let me express the energy of the system in terms of the phonon amplitude u and the strength of the macroscopic field E. The appropriate thermodynamic potential, depending on the lattice distortion and the internal electric field, has this form: it has a term that is quadratic in the lattice distortion, a term that is quadratic in the electric field, and a coupling term between distortion and electric field, whose strength is given by the Born effective charges that I mentioned before; Z-star is the derivative of the polarization of the sample with respect to the magnitude of the distortion. Now, the force acting on the atoms is given by the derivative of the energy with respect to the displacement: you have a contribution from the harmonic term, but you also have a contribution from the coupling with the field. By the same token, you can define the electric induction D of the system as the derivative of the energy with respect to the strength of the field: this is equal to the classical electrostatic expression, the dielectric constant times the internal field, but you also have a coupling with the internal degrees of freedom, the internal distortion. Now, perturbations are never exactly lattice-periodic; they always have a finite wave number. If you express the Maxwell equations in reciprocal space, the curl of an electric field of wave number q becomes the cross product between q and E, and the divergence of the induction becomes the dot product between q and D; in the absence of external charges and currents, both are equal to 0. Therefore, when the polarization of the phonon is perpendicular to the wave vector, the only way the Maxwell equations can be satisfied is by letting the total electric field be equal to 0, and therefore, for these transverse phonons, the force acting on the atoms does not have any contribution from the electric field: for transverse phonons, the squared frequency is simply the harmonic omega squared. But if the polarization is parallel to q, then the Maxwell equation says that D has to be equal to 0; and if I put D equal to 0 in this equation here, I find that E is different from 0. Hence you have an extra linear dependence between force and displacement, which modifies the phonon frequency. As a consequence, in polar materials you usually have phonon dispersions that at zone center, at q equal to 0, are split, with the longitudinal phonon having a larger frequency than the transverse one.
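Schematically (in the notation of the Reviews of Modern Physics paper, Gaussian units), the macroscopic field adds a nonanalytic contribution to the dynamical matrix at long wavelength:

$$
D^{\rm na}_{s\alpha, s'\beta}(\mathbf q \to 0) \;=\;
\frac{4\pi e^2}{\Omega\, \sqrt{M_s M_{s'}}}\;
\frac{(\mathbf q \cdot \mathbf Z^{\ast}_{s})_{\alpha}\, (\mathbf q \cdot \mathbf Z^{\ast}_{s'})_{\beta}}
{\mathbf q \cdot \boldsymbol\epsilon^{\infty} \cdot \mathbf q} ,
$$

which, for a cubic diatomic crystal, reduces to the familiar splitting $\omega_{\rm LO}^2 = \omega_{\rm TO}^2 + 4\pi e^2 Z^{*2} / (\mu\, \Omega\, \epsilon^{\infty})$, with $\mu$ the reduced mass and $\Omega$ the cell volume.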
I think it's getting late, so I will skip how density-functional perturbation theory computes the response to macroscopic electric fields. You will see from the previous considerations that, in order to compute the frequencies of longitudinal phonons, you need an estimate of the dielectric constant and of the effective charges, to compute which you have to be able to compute the response of the system to a macroscopic electric field. This is something subtle that I won't comment upon for lack of time; let me just state that, with some extra pain and some extra technical ingredients, Quantum ESPRESSO is able to compute the response of the system to a macroscopic electric field. Let me just comment on where the difficulties come from. A macroscopic electric field is described by a potential that grows linearly with position: v equal to minus E dot x. But the potential that corresponds to a homogeneous electric field makes the Hamiltonian non-periodic (because x is not periodic) and also not bounded from below. This generates all kinds of difficulties, which conceptually can be solved by seeing the macroscopic electric field as the limit, as the wavelength goes to infinity, of a perturbation of finite wavelength. For any finite wavelength, you can compute the response to the external perturbation in the way I described before. The nice thing about density-functional perturbation theory is that it allows you to take the q-going-to-zero limit, or equivalently the wavelength-going-to-infinity limit, analytically. So you can take the limit before doing the computation. This is done with a number of tricks implemented in the codes, and the algebra and the ideas are explained in that Reviews of Modern Physics paper that I quoted before. Before concluding, let me mention the last technical ingredient, which is used to compute smooth phonon dispersions. We saw that at any given wavelength, at any given wave vector, a density-functional perturbation theory calculation gives you the frequencies at that specific wave vector, without any interaction amongst different wave vectors. But you want to draw a smooth curve for the phonon dispersion. So what you do is compute the matrix of interatomic force constants in real space: in practice, you compute these dynamical matrices, wave vector by wave vector, and then, by doing a Fourier transform, you end up with a function of the distances between the atoms that interact with each other. The smoother the dependence of the dynamical matrix on the wave vector, the shorter the range of these interatomic force constants. So, when phonon dispersions are smooth, you need few interatomic force constants to interpolate them. Difficulties arise when you have a polar material, because the matrix of interatomic force constants is singular in the long-wavelength limit, as a consequence of the LO-TO splitting, the splitting between longitudinal and transverse modes. What you do in practice is subtract the known analytical behavior of the dynamical matrix in the long-wavelength limit; you hence remove the singularity, and you interpolate with Fourier analysis the dynamical matrices computed on a coarse grid of q points in reciprocal space.
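Schematically, the interpolation works as a pair of Fourier transforms:

$$
C_{s\alpha, s'\beta}(\mathbf R) \;=\; \frac{1}{N_q} \sum_{\mathbf q}^{\rm grid} \tilde C_{s\alpha, s'\beta}(\mathbf q)\; e^{-i \mathbf q \cdot \mathbf R},
\qquad
\tilde C_{s\alpha, s'\beta}(\mathbf q') \;=\; \sum_{\mathbf R} C_{s\alpha, s'\beta}(\mathbf R)\; e^{\,i \mathbf q' \cdot \mathbf R},
$$

where the first sum runs over the coarse grid of computed q points, and q' is any point along the dispersion you want to draw.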
So, let me summarize the main features of density-functional perturbation theory. Response functions can be calculated in terms of response orbitals; this is achieved by solving a linear system that only requires the application of the Kohn-Sham Hamiltonian to the perturbed orbitals, without ever calculating any conduction states. You can compute the response to just the perturbation you are interested in: you don't need general response functions such as dielectric matrices and the like. You can easily treat non-local perturbations, such as nuclear displacements: when you displace ions, you displace a non-local potential, so the perturbation is non-local. You can treat non-periodic perturbations, and you can easily treat macroscopic electric fields. This theory was developed in the late 80s and first applied to the computation of the piezoelectric properties of III-V semiconductors by Professor de Gironcoli in his Master of Science thesis, back in '88 or '89. Then a number of applications to phonon dispersions appeared soon after. De Gironcoli, again, in the mid-90s developed a generalization of this theory to metals: conceptually it is similar, but there are a number of technical difficulties related to the fact that you have to integrate only over states below the Fermi surface. And a number of applications have been done over the years: on dielectric, piezoelectric, and elastic properties; on phonons in crystals and alloys and all kinds of vibrational spectroscopies, including phonon lifetimes; on phonon softening and structural phase transitions; on electron-phonon interactions and superconductivity; on thermal expansion and thermal properties; and on mechanical properties at finite temperature, such as thermo-elasticity; and many other applications. Just an excerpt from the more recent literature: this theory can now routinely be applied to systems of several dozens or a few hundred atoms, possibly mimicking disordered systems. It can compute vibrations at surfaces, for instance, the vibrational frequencies of small molecular adsorbates on surfaces, which can be used for characterization purposes. And, again, the thermal properties of matter at extreme pressure and temperature conditions, such as those occurring in the interior of planets, superconductivity, and many other properties. Works come out at a rate of several hundred per year using this technology. My work, as well as the work of the whole Quantum ESPRESSO team, is funded by the Quantum ESPRESSO Foundation and by the MaX European Centre of Excellence for supercomputing applications, whom I wish to thank for their support. And this is all for today. Thank you very much, guys. Thank you, Stefano, thank you very much for this very nice talk, which told us about both the theory and the applications. Are there any questions from the audience on Zoom? If you have some questions, please raise your hands. I don't see any. Actually, one question I myself may ask: in phonons, you apply the harmonic approximation. Can you extend, is it possible to extend, these techniques to include anharmonic terms in the expansion? Is there any development in view? There are several answers to this important and interesting question. The most trivial one is to push perturbation theory to higher order, and this is what is being done for computing phonon lifetimes, for instance. The extension of perturbation theory to third order is almost trivial: it does not require any extra heavy computation. And this is what I skipped.
One of the two or three slides that I skipped at the beginning of my talk was exactly about how to extend perturbation theory to third order, using the so-called 2n+1 theorem. In a nutshell, what this theorem says is that the density or wave-function response up to order n determines, univocally, the energy response up to order 2n+1. So if you do first-order perturbation theory, this is n equal to 1, and 2n+1 is equal to 3, which means that first-order perturbation theory gives easy and cheap access to third-order corrections to the energy, from which you can compute non-linear, quadratic, susceptibilities and, in the case of phonon lifetimes, the lowest order in perturbation theory. The other, more general and non-perturbative answer will likely be given by Francesco Mauri in his invited talk next week. Basically, what Francesco has done is develop a generalization of the old concept of self-consistent phonons. In self-consistent phonons, basically, you treat the anharmonic terms in the interaction in a kind of mean-field way, in the same way as the electron-electron interaction is treated in Hartree-Fock and similar approaches. Treating this in a self-consistent way allows you to incorporate anharmonic effects into an effective quadratic Hamiltonian, an effective harmonic Hamiltonian, and Francesco has devised a very smart way of doing this in a numerically effective fashion. We will hear about this next week. Thank you for pointing this out. So there is a question from Uday Gayeira. I think you can unmute yourself. Yes, go ahead. Can you see me? OK, thank you. Thank you very much for the informative talk. I have a very small question. The thing is, you mentioned that we don't need the derivative of the wave function or of the charge density if we want to calculate the forces, right? But when we use the Hellmann-Feynman theorem, aren't we using the derivative with respect to this lambda? And lambda already includes the wave function and the charge density, right? Or not? What do you mean? Lambda is not a function; it's a parameter. Yes, lambda. When you mentioned G of lambda, you said that g of x and lambda is, I mean, very much similar to the psi and the Hamiltonian. So the correspondence that I wanted to highlight is, let me try to... Let us fix the ideas, and let us replace the lambdas, which may be confusing, with atomic positions R, OK? And you have an energy that depends on the atomic positions and is equal to the expectation value of the Hamiltonian that depends on these parameters: R is equal to lambda, and lambda is the set of all the atomic positions. So E of R is equal to the expectation value of the Hamiltonian that depends on R, over the wave functions that also depend on R, right? Yes. And so psi is x, and R is lambda. x of lambda means the value of x that minimizes g of x and lambda at given lambda; that is, the value of x that solves the equation dg of x and lambda over dx equal to 0. This equation has a solution that depends on lambda, and that I call psi of lambda. This is analogous, this is exactly the same thing, as in quantum mechanics: dg over dx equal to 0 is the same thing as H of lambda psi of lambda equal to E of lambda psi of lambda.
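In other words, the dictionary between the two formulations is, schematically:

$$
\frac{\partial g(x,\lambda)}{\partial x}\bigg|_{x = x(\lambda)} = 0
\quad \longleftrightarrow \quad
\hat H(\lambda)\, \psi(\lambda) = E(\lambda)\, \psi(\lambda),
\qquad
x \leftrightarrow \psi, \quad \lambda \leftrightarrow \mathbf R .
$$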
So psi of lambda is the value of psi, the value of x, that solves the variational problem at a given value of lambda, or at given values of the atomic positions in the case where the lambdas are atomic positions. Have I clarified the issue a little bit? I don't hear you; you are muted. Sorry, I was muted by the system, maybe. That's why. Yeah, it was pretty clear. Because before that, I was confused by the psi's being different. Yeah, yeah. Is it clear now? Yes, yes, yes. Very good. OK, thank you for the question and for the comment, Stefano. I think we need to stop because we have the group photo. Eventually, there are some questions on the streaming; maybe you can have a look later, after this group photo. OK. So, Ivan. Yes, maybe we have to stop the streaming, right? Yes, I'm here. So, where do I find the questions, in the Zoom chat or somewhere else? You can check the streaming on YouTube; there is a chat. I'm sending you the links, Stefano. I'm sending you the link. OK, so it's time to turn on the cameras. We are sending the link by email. OK. As Ivan asked, please switch on all your cameras. No, no, I am there. I am there. OK. Are you able to write directly on the chat? You need to log in. You are not here? What? I'm not. No, you are not here. Hey, Stefano, can you write on the chat directly? Oh, nice. Finally, we can all see each other. Nice. Lower all the hands. OK. Massimo, where do I find this 49-25 setting? In the video settings. OK, I'll have a look. Just go to the bottom of the menu and you will find it. Yes, the problem is how do I address... oh, yeah, I found it. How do I address individual questions? Is there a reply button? No, Stefano, normally we put an @ with the name of the person, and then you answer. So, I think some people are still missing with their cameras, but most of you have them open, so we can start taking the photo. So keep smiling, and we start taking some screenshots. OK, keep smiling. Oh, even Stefan joined us. Hi, Stefan, welcome. Keep smiling. Stefan, we are taking the group photo. Stefan, where are you? So keep smiling. I'll do another one. OK, maybe this is the last one. Keep smiling. OK, I think we are done. Nice. Thank you. Thank you very much. OK, anyway, if it didn't work, it's not a big deal to do it again, I think. Yeah. Next week. OK, so we take a break and then we start with the hands-on. OK, so let's thank Stefano again. Thank you, Stefano. See you in a few minutes. See you later. See you later. Ivan? Yes, Stefano, tell me. Well, do I understand correctly that there is a limit of 200 characters on the replies? Where, on the YouTube chat? Yes, possibly, yes, yes, there is. Well, this makes it nearly impossible for me to answer. Nearly impossible, because of the limit, because the question is impossible. Ah, you muted yourself. I muted myself, sorry. Thanks. Understood? What? On YouTube it is impossible for me to reply, I mean, I just cannot manage it. It would be better if someone relayed the questions and I answered them by voice; otherwise it would take me twenty minutes of continuous copying and pasting, and I could not keep up. All right, go on Slack, go on Slack. But anyway, you could also answer them later. No, later, apart from the fact that I have no time... it's more limited. Yes, this morning yes, but in the afternoon I have no time.
Yes, yes, but it is certainly limited. So, shall we start? Yes, maybe we take a little bit more time, just because we finished later. Let's start at 35. OK. Meanwhile, if you wish, we can do a technical check, just to see if the screen sharing is working. Yes. First of all, do you hear me loud and clear? Yes, very well. OK. And now, do you see the screen of the Vipro machine with the presentation? Yes, very well. OK. Fine, it seems fine. Actually, if you leave it there, it's better for us, because this is reflected in the streaming. So if you leave it there, it's better. Thanks. Sure. Andrea, actually, I have to make an announcement before we start. Yeah. I can already make the announcement, since there are already 100 people connected. So, as we have been repeatedly mentioning during this first week, there are HPC resources available for the school, which you can access with the remote commands that we have been showing during a number of the hands-on sessions this week. In particular, during this week most of the lab sessions could be done without access to HPC, because the systems that were simulated were relatively small; and in fact, if you were using HPC, you could receive the output of the lab session within a few minutes. But from now on, the hands-on sessions will become more demanding: the exercises will require more computational resources to be executed, and there will be no way you can afford to do the hands-on if you don't use the HPC resources. So, for this reason, we invite you to always use, or try to use, the HPC remote commands, so that you can warm up and learn how to use them; because when the exercises become more demanding, that will be the only way to do the hands-on and to follow the class. Anyway, this communication will also be sent by mail within today. In general, we strongly invite you to use the remote commands to the HPC resources, and if you have trouble, we have the Slack channels and we can also assist you in person. Thanks. I think we can start. OK, yes. So, welcome back. We now start the hands-on session on DFPT. The speaker today is Andrea Urru from ETH Zurich, together with Yuri Timrov. The tutors are Davide Tisi from SISSA, Mandana Safari from SISSA, Srdjan Stavrić from the University of Trieste, and Matei Yusef from the National Institute of Chemistry in Ljubljana. Please, Andrea. All right, thanks for the introduction. I am Andrea Urru, the speaker of today's hands-on session, and today, together with Yuri, Davide, Mandana, Srdjan, and Matei, whom I thank in advance for coming to the rescue in case you experience any issue, we are going to have a hands-on session on the first part of density-functional perturbation theory, which was introduced in the previous session by Professor Stefano Baroni. In particular, we will see its application to the calculation of phonon frequencies and phonon dispersions. During this hands-on session, I will give you from time to time some introductory basic concepts which have already been covered in the previous session; I believe it's important to repeat them and to see them again, because we need them in order to better understand how we perform the calculations and what kind of information we find in the output files. After the quick introduction, we will have five exercises to perform.
The first two of them are devoted to computing the phonon frequencies at gamma and the complete phonon dispersion for a non-polar material; in our case, it will be silicon. The second couple of exercises will again be about computing the phonons at gamma and the complete phonon dispersion, but this time for a polar material, where the long-range correction that was discussed in the previous session becomes important and has to be introduced. And finally, time permitting, we will go to the final exercise, which is about the calculation of a complete phonon dispersion for a two-dimensional material; in our case, it will be boron nitride. So let me start by giving a very quick introduction on the basic concepts that we have already seen in the previous session. We consider an infinite crystal whose unit cell is made up of a given number of atoms, which we denote nat. If we label the atoms inside the unit cell with an index s, this index s will go from 1 to nat. And if we further identify the Cartesian coordinates with the Greek letter alpha, then we will label the atomic displacements u_s,alpha; u_s,alpha being the displacement of atom s inside the unit cell along the Cartesian direction alpha. The atomic displacements are the perturbation that we are interested in and that is linked to the phonon frequencies: the phonon perturbation. So this is the particular case of a perturbation to which we apply the perturbation theory approach. I remind you from solid state physics that the number of possible atomic displacements is three times the number of atoms: three Cartesian coordinates times the number of atoms inside the unit cell. Therefore, the number of phonon frequencies that we get at a given wave vector is exactly three times the number of atoms. For instance, if in the unit cell you have a basis made up of two atoms, then you expect to have six phonon frequencies, three times two. One of the central quantities that the code actually computes is the interatomic force constants, which, as you have seen in the previous session, are the second derivatives of the total energy of the system with respect to the external perturbations that, in our case, are the atomic displacements u_s,alpha and u_s',beta. If for the atomic displacement u_s,alpha we make an ansatz and write it as a wave, so an exponential factor in real space, e to the i q dot R, and an exponential factor in the time domain, e to the minus i omega t, and we insert this ansatz into the classical equations of motion, Newton's equations, we end up with an expression that you have already seen and that you recognize as an eigenvalue problem, a secular problem, where the matrix that we need to diagonalize is a matrix D defined in the following way: it is linked to the second-order derivatives of the total energy, the interatomic force constants; its eigenvalues are the squared phonon frequencies, omega squared, whereas its eigenvectors are linked to the atomic displacements. Actually, this u tilde, to be precise, is the atomic displacement u_s,alpha multiplied by the square root of the mass of the atom. Now, our final goal is to compute the phonon frequencies. To do that, the code will compute for us the dynamical matrix using density functional perturbation theory; then it will diagonalize it and print out the phonon frequencies, and also information about the phonon eigenvectors, as we will see.
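In formulas, the secular problem just described can be sketched as follows, in the notation introduced above:

$$\sum_{s'\beta} D_{s\alpha,s'\beta}(\mathbf{q})\,\tilde{u}_{s'\beta} = \omega^2\,\tilde{u}_{s\alpha}, \qquad D_{s\alpha,s'\beta}(\mathbf{q}) = \frac{\tilde{C}_{s\alpha,s'\beta}(\mathbf{q})}{\sqrt{M_s M_{s'}}}, \qquad \tilde{u}_{s\alpha} = \sqrt{M_s}\,u_{s\alpha},$$

where $\tilde{C}_{s\alpha,s'\beta}(\mathbf{q})$ is the Fourier transform of the interatomic force constants $\partial^2 E / \partial u_{s\alpha}\,\partial u_{s'\beta}$ and $M_s$ are the atomic masses. Diagonalizing the $3 n_{\mathrm{at}} \times 3 n_{\mathrm{at}}$ matrix $D(\mathbf{q})$ yields the $3 n_{\mathrm{at}}$ values of $\omega^2$ at each wave vector.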
So, as you can see, this is a formulation at a given wave vector q. First of all, we are going to see how to perform this kind of calculation at a given wave vector of the perturbation. I remind you that q is the wave vector of the phonon mode, which is related to the wavelength of the phonon perturbation. Before starting with the exercises, let me give a quick overview of what kind of systems the phonon code can address. There is a rather wide variety of types of materials and methodologies that the phonon code covers. You can use it to compute the phonon frequencies for insulators, including polar insulators, as we are going to see, where the LO-TO splitting appears; for metals; for magnetic systems at the simple scalar-relativistic collinear level, LSDA, which I guess you will see on Monday, and also with spin-orbit coupling, if you wish. You can perform electric field calculations to compute the Born effective charges and the dielectric tensors, which we will see in exercises 2a and 2b. And let me just mention that recently the theory has been extended to compute phonons for magnetic systems when spin-orbit coupling becomes important, and it has also been generalized to cover the DFT+U approach, also with ultrasoft pseudopotentials. So now, I think that's all for the introduction; I will give you some other pieces of information from time to time, when we need them. We start from the first exercise, where we are going to compute the phonons at gamma for a non-polar material, silicon in particular. So please go to the directory devoted to example 1A. You can reach it either by browsing the folders in the graphical interface or inside the terminal, typing this command. If you list the files inside the directory example 1A, you will find four files and a reference folder. As usual, the reference folder contains benchmark results, the output files that I got from a previous run, so that you can compare with them, if you wish, to check your calculation. The readme file contains very, let's say, condensed information, a summary of the steps that we are going to perform. You can double-click on it and the readme file will open in the internet browser, so you will have it readable there. And then we have three input files. First of all, before starting, let me make a remark. In the previous session, you have seen that in order to compute the responses, in our case the phonon frequencies we are interested in, we need to solve a linear system. This linear system allows us to compute, basically, the first-order derivatives of the wave functions. But if you remember, on the right-hand side of that linear system we had the projector on the conduction states applied to the derivative of the self-consistent potential applied to the unperturbed ground-state wave functions. So a key ingredient that we need to perform the calculation is the ground-state Kohn-Sham orbitals. First of all, then, we need to compute these ground-state Kohn-Sham orbitals, and to do that, we have to run the usual pw.x calculation that, by now, I guess you are quite confident in performing. Then we will see a new executable, a new code, ph.x, which will perform the density functional perturbation theory calculation. So let us, first of all, perform the pw.x calculation. We have a quick look at the input file.
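For orientation, here is a minimal sketch of what such a pw.x input looks like; the lattice parameter value and the pseudopotential file name are illustrative placeholders, not necessarily those used in the school's files:

    &CONTROL
      calculation = 'scf'
      prefix = 'silicon'
      pseudo_dir = './pseudo/'   ! placeholder path
    /
    &SYSTEM
      ibrav = 2                  ! FCC Bravais lattice
      celldm(1) = 10.2           ! cube side in Bohr (illustrative value)
      nat = 2
      ntyp = 1
      ecutwfc = 16.0             ! wave-function cutoff in Rydberg
    /
    &ELECTRONS
      conv_thr = 1.0d-10         ! illustrative convergence threshold
    /
    ATOMIC_SPECIES
      Si  28.086  Si.pz-vbc.UPF
    ATOMIC_POSITIONS crystal
      Si 0.00 0.00 0.00
      Si 0.25 0.25 0.25
    K_POINTS automatic
      4 4 4 1 1 1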
You have seen it already many times during this school, so I will go very quickly over it. We are going to simulate silicon. Silicon has an FCC structure, an FCC Bravais lattice, so ibrav is equal to 2 in this case. Here you see the size of the cube. We have two silicon atoms inside the unit cell, located at the positions 0, 0, 0 and 1/4, 1/4, 1/4 in crystal coordinates, a wave-function cutoff of 16 Rydberg, and other technical variables that you have already encountered before. And this is the pseudopotential that we use for silicon in this calculation. To run the calculation, you can simply type pw.x, give it the input file, and redirect to the output file. So now I switch to the terminal. I reach the day-5 example 1A directory. You can run it on the virtual machine, but also, as Ivan was suggesting, you can use the HPC resources. To do that (the input files are already on the HPC cluster), you simply type remote mpirun pw.x -i and specify the input file, which is pw.silicon.in. You submit the calculation. All right. Remember, in order to check the queue, that is, the status of your calculation, you just type remote sq. We see that the calculation has already finished. In order to collect the output files, you can use the command rsync from HPC, and then between quotes you write *.out in order to get all the output files; in our case, it is simply the pw output file. And indeed we received, as you can see, the pw.silicon.out file. All right. I'm not going to open it again; you have already seen it several times in the previous sessions. Remember that this step is necessary in order to compute the ground-state Kohn-Sham orbitals that we need for the linear system of density functional perturbation theory. All right. So now we need to perform the actual phonon calculation, and we meet a new input file, the input file of the executable called ph.x. Remember, if you wish to have a look at a complete list of the input variables, you can open your browser, go to the toolbar, and select phonon and ph.x, so that you will access a complete list of the possible input variables that you may want to specify, depending on the kind of calculation that you want to perform. The structure of the input file is made up of basically one namelist, named INPUTPH, closed by the usual slash. Below it, if you are performing a calculation for a given wave vector of the perturbation, you specify the wave vector; you give the coordinates of this wave vector in units of 2 pi over a, where a is the lattice constant, in our case the side of the cube of the FCC cell. In this case, we are going to perform a calculation at gamma, so the phonon perturbation has exactly the same periodicity as the unit cell. Inside the INPUTPH namelist, this is the minimal information that we need to provide the code with in order to perform the calculation. We need to specify the prefix, which, remember, and this is important, has to be the same as in the pw calculation. The prefix is important because it tells the code the prefix of the files where it can find the wave functions, the Kohn-Sham orbitals, that it needs to perform the density functional perturbation theory calculation. Then we specify the masses of the atoms: amass gives the mass of each atomic species that we have in the unit cell. In this case, we have only silicon, so we specify only one.
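Putting this together with the two variables described next (tr2_ph and fildyn), a minimal ph.x input for this step might look like the following sketch; the first line is a free title line, and the file names are the ones quoted in this session:

    phonons of silicon at Gamma
    &INPUTPH
      prefix = 'silicon'
      amass(1) = 28.086        ! mass of Si in a.m.u.
      tr2_ph = 1.0d-14         ! self-consistency threshold
      ! outdir = './tmp/'      ! commented out, as in this example
      fildyn = 'silicon.dyn'   ! file where the dynamical matrix is written
    /
    0.0 0.0 0.0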
And this tr2_ph is a threshold for self-consistency. I told you before that the code solves a linear system, and this linear system, as you have seen in the previous session, needs to be solved self-consistently. So also in this case we need a threshold in order to decide when this linear system is solved self-consistently. The default value for this threshold is 10^-12; you can lower it if you wish, and I decided to set it equal to 10^-14. The outdir directory here is commented out because we are using the environment variables of Quantum ESPRESSO; but if you wish, you can uncomment it, and you will use the temporary directory ./tmp. Finally, we specify the name of the file where the dynamical matrix will be stored. Remember, the dynamical matrix is the final product, let's say, that the phonon code will produce, and it will store it in a file whose name you can specify; in this case, we specified silicon.dyn for the dynamical matrix. Now, in order to run the calculation, the structure of the command is exactly the same as for pw; the difference is that now we need to call a different executable file, ph.x. So we go to the terminal, and we can use, as usual, remote mpirun, as we did in the previous step: remote mpirun ph.x -i and the input file for the phonon calculation. We submit it and wait for it to percolate into the HPC cluster. All right. Here you see the command being executed on the HPC cluster. We check whether the calculation is running or has already finished; it has already finished, so we can collect the output file by typing rsync from HPC with the name of the output file, ph.silicon.out. All right. So we received the output file of the phonon calculation, and since this is the first time that we are performing this kind of calculation, I am going to show you what this output file looks like. After some general information and information about the parallelization and the geometry of the system (you see the lattice parameter, the volume of the unit cell, the parameters of the calculation, similar to the pw code), you see here "calculation of q = 0, 0, 0", which is the wave vector we specified in the input file: phonons at gamma. So we are computing the phonons at gamma. Here is the information about the atoms inside the unit cell and their positions, the symmetry operations, the list of the k points, things that you have already seen. And then here you have the actual DFPT calculation. This is the part where you have information about the self-consistent solution of the linear system. As I told you, in silicon we have two atoms inside the unit cell; therefore, since we expect three times the number of atoms in the unit cell phonon modes, we expect six phonon modes overall. Now, these phonon modes are usually grouped by symmetry: the first three modes are solved together; the linear systems for the first three modes are all solved together. And here you see, for each iteration, some information about the quantity that the code checks in order to decide whether we have converged or not. When this quantity, |ddv_scf|^2, is lower than the threshold we specified in the input file, the linear system is solved; it has basically reached self-consistency. Here we have 3 × 10^-16, which is lower than the value 10^-14 that we specified in the input file.
And therefore the calculation stops. Then the code goes on and solves the linear system for the other three modes, numbers 4, 5, and 6. Again, you find information on the solution of the linear system. And finally, here you find the frequencies of the phonon modes. As I told you, we have six modes and therefore six frequencies. Three of them, you see here, are slightly negative. What does a negative frequency mean? This is, let's say, a default convention to indicate that the square of the frequency is negative. So you must understand this as: the squared frequency is negative, and therefore the frequency would be imaginary. This sounds weird and, in principle, like a small problem, because you wouldn't expect imaginary frequencies, which are a signature of an instability in principle, as Professor Baroni was saying in the previous session. But we will see that this is simply due to numerical noise. So let me comment on that now. To do that, let me close the output file and go back to the presentation, and I comment on those frequencies. Here is a representation of the Brillouin zone of the FCC lattice. We are computing the phonons at gamma, and here are the phonon frequencies that we got. You see that in this case they are slightly different from the values we found in the output file, these first three frequencies. These first three modes are the acoustic modes at gamma. Remember, among all the possible phonon modes, you always expect to have three acoustic modes; all the remaining modes are optical modes. The acoustic modes at gamma should have a frequency exactly equal to zero. Why? Well, if you think about it, the phonon frequency, which is the frequency of a vibration, is related to the difference in energy of the system produced by the phonon distortion. A phonon mode is actually a lattice distortion. So if you imagine distorting the atoms following the phonon mode, the energy of the system would in principle change; this change in energy depends, of course, on the amplitude of the distortion and can be written in terms of the frequency of the phonon mode. Now, if we think about acoustic modes at gamma, you may remember from solid state physics that these acoustic modes at gamma are simply rigid translations of the whole crystal: it's like we take the crystal and rigidly shift it along x, y, or z. So the question I would like to ask you now is: what is the energy difference in the system due to this kind of phonon, let's call it, distortion mode? It's not actually a distortion, since we are just rigidly translating the system. Does a rigid translation of the system produce a difference in energy, a contribution to the energy of the system? The answer is, of course, no, because of continuous translational invariance, and therefore the energy of the system is unchanged. If the energy of the system is unchanged, we expect the frequency of the phonon mode to be exactly 0. Now, unfortunately, due basically to numerical noise and numerical inaccuracies, it is difficult to get frequencies exactly equal to 0 in this case. Let me explain why. From the mathematical point of view, the fact that these eigenvalues of the dynamical matrix are 0 comes from a property of the interatomic force constants that is called the acoustic sum rule. The acoustic sum rule can be derived.
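For reference, the condition in question can be sketched as follows: a rigid translation of all the atoms along a direction $\beta$ must produce no force on any atom, which in the notation used earlier reads

$$\sum_{s'} \tilde{C}_{s\alpha,s'\beta}(\mathbf{q}=0) \;=\; \sum_{s',\,\mathbf{R}} C_{s\alpha,s'\beta}(\mathbf{R}) \;=\; 0,$$

so that the uniform-displacement vector is an exact eigenvector of the dynamical matrix at gamma, with eigenvalue $\omega^2 = 0$.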
You find it, for instance, in every solid state physics book (Ashcroft and Mermin, for instance), and from continuous translational invariance you can obtain this condition on the interatomic force constants. Now, unfortunately, this condition is not imposed by default on the interatomic force constants when we compute them. Therefore, due to numerical inaccuracies, to numerical noise in the calculation, this sum might be slightly different from 0. And when the sum is slightly different from 0, we get those eigenvalues, those frequencies of the acoustic modes, slightly different from 0, and it may happen that their square is either positive or negative. So whenever we get acoustic modes at gamma whose frequencies are slightly different from 0 (either positive, as in this slide, or negative, as we have seen here), and the value is not particularly large, so it is more or less 0, we shouldn't really be afraid of it. Nevertheless, there is a way in the code to impose the acoustic sum rule, so you can actually correct the phonon modes in order to get acoustic frequencies that are exactly 0. To do that, we use another executable file. As you will see in this hands-on session, the phonon code has several executable files, depending on the particular task we are going to address. The main one is the ph.x code that we have already started to see, but the other minor, although important, tasks are performed by other small executable files. Whenever we need, as in this case, to insert a correction on the dynamical matrix, we use the code called dynmat.x. So remember: every time we need to add a correction to the dynamical matrix, we use this code. In this case, we can impose the acoustic sum rule, which is a correction on the dynamical matrix, and therefore we use the executable dynmat.x. Information about the input variables can always be found: you go, as always, to the toolbar, you click on phonon, and you click on dynmat.x, and there you find the list of possible input variables. You see that the list is rather short, because this executable is rather small; it performs a restricted set of tasks. For our purpose, the input file is very minimal. The namelist is simply named INPUT, and the two input variables that we need to specify are fildyn and the acoustic sum rule variable. fildyn needs to be the same as in the phonon code, because the dynmat code will read the dynamical matrix from the file produced by the phonon code; so remember, this fildyn must be the same as we specified in the input file of ph.x. The other input variable is asr, which stands for acoustic sum rule, and which can take several values, depending also on the dimensionality of the system; but for the vast majority of systems, specifying it as 'simple' is enough. In order to perform the calculation, the structure of the command is always the same: you indicate the executable file, -i, the input file, and then the output file. So we go back, we use remote mpirun, and we submit the dynmat.x executable with its input file. I had to use -i; remember that we always need to specify -i, and not the '<' symbol, when we are submitting to the HPC cluster. All right.
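For reference, the minimal dynmat.x input just described might look like this sketch, with the file name used in this example:

    &INPUT
      fildyn = 'silicon.dyn'   ! must match fildyn in the ph.x input
      asr = 'simple'           ! impose the acoustic sum rule
    /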
This should be rather quick, because the code just imposes the acoustic sum rule and then diagonalizes the dynamical matrix again. So we collect the output files: we do rsync from HPC and then collect all the output files. I type *.out because the dynmat.x code also produces a default output file of its own, called dynmat.out. This dynmat.out contains the frequencies of the corrected dynamical matrix, and the frequencies of the acoustic modes are now exactly equal to 0. So those frequencies that before were negative, meaning their square was negative, so they were imaginary, were nothing to really worry about; they were just waiting for us to impose the acoustic sum rule correction. OK. So now we just close the file and I go back to the presentation. This is, again, the output file we have just seen, where the acoustic frequencies are 0. The other three frequencies are the frequencies of the optical modes. You see that they are degenerate, they are all the same, and this is because of cubic symmetry: cubic symmetry imposes that, at gamma, the frequencies of the optical modes are all the same. All right. So now we can move to the second exercise. I see that someone has questions, maybe; I see a few raised hands. So we can take a very quick break if there are major issues. Yeah, Giovanni, you can ask. OK. Can you hear me? Yes. Maybe it's a trivial question. You can switch on your camera. OK. So my question is: why do we need a different executable that imposes the acoustic sum rule? Can't ph.x impose it by default? Is there a reason why we sometimes need the acoustic sum rule not to be imposed? I don't know if the question is clear. Yeah, yeah, it's clear. I guess that perhaps it's due somehow to historical reasons. Actually, when you do the calculation, you lose, say, continuous translational invariance, because real space in the code is actually sampled with a discrete grid; therefore, you don't have, by default, the correct acoustic sum rule. dynmat.x is a utility that is usually called also, I guess, inside the ph code when we need to do the actual diagonalization of the dynamical matrix. But as we will see later on, it has been brought outside ph in order to include other types of corrections, like, for instance, the long-range correction for polar materials; and the acoustic sum rule has been inserted there as well. I don't think there is a particular reason, although I should have a look at the code to understand whether there is a major issue with imposing the acoustic sum rule outside the ph code. So, no particular reason that I can remember. I don't know if Yuri is online; maybe he knows. OK, we can move on. Yeah? OK, so we go to the second exercise, exercise 1b. Before actually performing the exercise, let me provide you with some information about the very important concept of Fourier interpolation, which has already been introduced by Professor Stefano Baroni in the previous session. I will repeat it, because I believe it is really important to understand how smooth phonon dispersions are computed in Quantum ESPRESSO. This is an example of a phonon dispersion: the phonon dispersion of gallium arsenide.
Using the knowledge that we got from the previous exercise, which was a calculation of phonon frequencies at a given wave vector, you may think that one can compute this kind of curve and obtain this kind of picture by running a phonon calculation at each of the q vectors along the path in the Brillouin zone, constructing the figure point by point: computing the phonon frequencies here, here, here, and so on. This is a way to proceed, but it is not the smartest way. Actually, it is very computationally demanding, because as soon as you move away from gamma you lose symmetries, and the phonon code becomes slower and slower in computing the frequencies. So it is not the fastest way to compute the frequencies at each of these q vectors along the path in the Brillouin zone. There is a different way to proceed, a kind of interpolation technique, which allows us to save a lot of time by computing the phonon frequencies on a very coarse mesh of q points instead of on a dense list of points along the path in the Brillouin zone. This interpolation is rather different from the usual interpolation techniques that you may be used to, namely linear, quadratic, or in general polynomial interpolation. It is called Fourier interpolation, because we need to go back and forth from real space. Let me give you an overview. Suppose that we want to compute the phonon frequencies on the very long list of wave vectors found along the high-symmetry path in the Brillouin zone. We do the following. We compute the phonon frequencies, or, let's say, the dynamical matrices, which we identify with this general function f, on a coarse grid of q points, so very few q points. Now, if we want to interpolate these dynamical matrices in order to get the phonon frequencies at different q points, those along the high-symmetry path, we perform the following procedure. We use the knowledge of the dynamical matrices on the coarse grid of q points to produce an approximate representation of the interatomic force constants, which are the Fourier transform to real space of the dynamical matrices. So basically, we take the Fourier transform of the dynamical matrices using the coarse grid of q points, and we go to real space. Now we have knowledge of the interatomic force constants over all of real space, and we can do another Fourier transform to go back to reciprocal space. But this time, when we perform the Fourier transform, we are free to choose whatever q vector we like, and therefore we choose the q vectors to be those along the high-symmetry path, which here are represented with orange dots and are called q'1, q'2, and so on and so forth. The blue dots can be as few as 8 or 10, for instance (a very low number of q points), whereas the orange dots can number in the hundreds, in order to have a very dense sampling of the high-symmetry path in the Brillouin zone. Each part of this procedure is performed by a different executable file in the code. The calculation of the dynamical matrices on the coarse grid of q points is performed by ph.x, the executable we have just seen. Then the Fourier transform to real space is performed by q2r.x, whose name is rather self-explanatory, because it goes from q space to r space.
And finally, the Fourier transform from real space back to reciprocal space, which gives us the phonon frequencies on whatever list of q points we like, is performed by matdyn.x. Don't get confused: this is matdyn.x, not the dynmat.x that you have seen before; it is a different executable. To summarize, if we want to compute a complete, smooth phonon dispersion, we need, first of all, to compute the unperturbed Kohn-Sham orbitals, with pw.x; then to compute the dynamical matrices on the coarse grid of q points, with ph.x; then to perform two Fourier transforms, one to real space, with q2r.x, and finally back to reciprocal space, with matdyn.x. Very good. So now that I have recapped the Fourier interpolation technique, which you have already met before, we go to the directory of example 1b. You reach the directory with this command. Inside, you find the readme file (you can double-click on it and you will find the instructions to follow) and the input files of all the executables that I have just mentioned: pw, ph, q2r, matdyn, and finally also plotband, because we need it to post-process the data and get the list of frequencies, so that we are able to produce the plot, which we will do with gnuplot afterwards. All right. First step: I recommend you to use the HPC facilities, as we have already done for the previous exercise. We perform step number one; I go and perform it very quickly. I clean the terminal, I go to example 1b, and I submit the calculation with pw.x -i and the pw input file. We wait a bit. All right, I check the queue. This is running, so we wait a bit. I check it again. OK, it's finished. So I collect the output files (well, we can collect them even afterwards): we just use rsync from HPC with *.out to get all the output files from the calculation. Now, the second step is to perform the phonon calculation. The input file for a calculation on a mesh of q points in reciprocal space is slightly different from what we have seen in the previous example. If you remember, in the previous example there was a line below the slash where we were specifying a single q point. Now, instead, we don't specify a single q point; we specify a mesh of q points. As usual, we ask the code to produce this mesh automatically by itself. To do that, we need to specify the logical variable ldisp and set it to true; this tells the code: look, we would like to compute a phonon dispersion. And then we specify the size of the mesh of q points along the reciprocal lattice basis vectors. In this case, we are going to produce a mesh of q points that is 4 by 4 by 4. Here you find the way the code produces the wave vectors q that make up the mesh: the indices i, j, and k run from 1 to nq1, nq2, and nq3, respectively. We run the calculation: we go to our directory and type, as usual, remote mpirun, this time with ph.x -i and the ph input file. Wait for it. All right. This time the calculation should take a little bit longer, because we are performing a phonon calculation at each of the q points in the mesh. So if you type remote sq, you might find that the calculation is still going on. Yes, it is. So we just wait for it.
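While we wait, here is a sketch of the ph.x input just described for the mesh calculation, with the values quoted in this session:

    phonon dispersion of silicon on a 4x4x4 q-point mesh
    &INPUTPH
      prefix = 'silicon'
      amass(1) = 28.086
      tr2_ph = 1.0d-14
      fildyn = 'silicon.dyn'
      ldisp = .true.             ! compute phonons on a mesh of q points
      nq1 = 4, nq2 = 4, nq3 = 4  ! size of the q-point mesh
    /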
It should take, well, on the virtual machine it was taking about two minutes, so on the cluster it should take even less than that. It's still going on; let me check it again. It's still going, so we wait a bit. What the code is doing now is computing the dynamical matrices over the 4 by 4 by 4 q mesh. The q mesh is placed inside the Brillouin zone, of the FCC lattice in our case. And as we will see in a while, the mesh of q points is reduced by symmetry, in a way similar to what is done for the k points, which has already been seen in the previous days. So in principle, the complete mesh of q points would contain 64 q points, but many of them are equivalent to each other. The code checks which q points are equivalent to which, and the equivalent q points are eliminated; only the non-equivalent ones are considered. In this case, as we will see in a while, we have eight non-equivalent q points, so the phonon code is computing the dynamical matrices at these eight q points. Let me check once more. It's still going, so let me log in to the HPC cluster to check the calculation directly; you can do that too, if you wish. This lets us enter the very same directory on the HPC cluster, and I use this command to check instantly what is going on. OK, the calculation has just finished; you see "job done" here, so we can just exit. We are back on the virtual machine, and we collect the output file so that we can have a look at it. We do rsync from HPC, and we get all the output files. OK, so let's have a quick look at the output file of the phonon code. Basically, here you have information about the mesh of q points: you see there are eight q points, and these are their coordinates in units of 2 pi over a. You go on; the information is similar to what we found before for the single-q-vector calculation. Then here you find the self-consistent solution of the linear system at gamma, the first q point. We go on and find the frequencies at gamma; again, you see the slightly negative frequencies, which are due to the missing acoustic sum rule. Then we go on, and we find the calculation for the second q point: here you see "computing dynamical matrix for", and this is the second q point in the list we saw before. Then you have, again, the solution of the linear system for all the phonon modes, and so on and so forth; this is done for all the q points in the mesh, all eight of them. We also have the .dyn files, which contain the dynamical matrices at each q point, so we can collect them. They are called silicon.dyn followed by a number, so I type silicon.dyn* to get all of them. You see there are nine files, silicon.dyn0, silicon.dyn1, up to silicon.dyn8. The first one contains the list of q points that was already printed in the output file: you have information on the 4 by 4 by 4 mesh, on 8, the number of non-equivalent q points, and on the Cartesian coordinates of the q points. Then, in each of the remaining eight files, you have the dynamical matrix for one of the eight q points. Very good. So now we are in a position where we have the dynamical matrices on a coarse grid in reciprocal space. In order to perform the double Fourier transform, first of all we go to real space.
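In formulas, the two transforms performed in this step and the next can be sketched, in the notation used earlier, as

$$C_{s\alpha,s'\beta}(\mathbf{R}) = \frac{1}{N_q} \sum_{\mathbf{q}} \tilde{C}_{s\alpha,s'\beta}(\mathbf{q})\, e^{-i\mathbf{q}\cdot\mathbf{R}}, \qquad \tilde{C}_{s\alpha,s'\beta}(\mathbf{q}') = \sum_{\mathbf{R}} C_{s\alpha,s'\beta}(\mathbf{R})\, e^{\,i\mathbf{q}'\cdot\mathbf{R}},$$

where $N_q$ is the number of points in the coarse grid (64 for our 4 by 4 by 4 mesh, before symmetry reduction) and $\mathbf{R}$ runs over the lattice vectors of the corresponding real-space supercell; the second transform can then be evaluated at arbitrary wave vectors $\mathbf{q}'$.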
From the knowledge of the dynamical matrices in q space, we perform the Fourier transform that you find written in the red box here, and this is done by q2r.x. So basically we run q2r.x, which will apply essentially this formula, where nq is the number of q points in the grid, in our case 8. Let us have a look at the input file. Again, these very small executable files require a rather minimal input file. The input file for q2r.x contains the name of the file containing the dynamical matrices, which, once more, remember, must be identical to the one we specified in ph.x; then information on the acoustic sum rule that we want to impose; and finally the name of the file where the force constants will be written. The function that we get in real space (because, remember, we are performing a Fourier transform from reciprocal to real space) will be written in this file, which we call silicon444.fc because we are using a 4 by 4 by 4 mesh. So we run the executable: remote mpirun q2r.x -i q2r.silicon.in, and we wait for it to finish. Again, this is just the calculation of a Fourier transform; it is rather fast and shouldn't take more than a few seconds. We check if it finished; it did. Very good. Finally, the last step is to compute the Fourier transform back to reciprocal space: from the force constants we have just computed, we go to reciprocal space on a set of q' points, which now are the points along the high-symmetry lines. To do so, we use the matdyn.x code. Here we see a screenshot of the input file. We need to specify, again, the acoustic sum rule that we want to impose, in order to get the acoustic frequencies right; the masses of the atoms, which are needed because, you remember, they enter the definition of the dynamical matrix; then the file containing the force constants, which has been produced by q2r.x (important point: this must be the same as the one we specified in the input file of q2r.x); and finally a file for the frequencies. This will be an output file produced by matdyn.x, where we will have the list of frequencies at each of the q' points, at each of the points along the path. Finally, after the INPUT namelist, we give the code the list of q' points. You see, this is the number of q' points where we will compute the phonon frequencies and that will make up our final picture, our final phonon dispersion; here we specify almost 400 q points. Going back to the very beginning of this exercise, when I was discussing the interpolation technique: if we were to compute the phonon dispersion point by point, running a single phonon calculation for each of these 400 q points, you easily understand that it would have taken forever. Instead, with the Fourier interpolation technique, we just compute the phonon frequencies on, in our case, 8 q points, the 4 by 4 by 4 mesh, and then, by means of this Fourier-transform trick, we interpolate the frequencies to whatever number of q points we like; in this case, we chose something near 400. If you open the matdyn input file, you find the complete list of the q points. So now we run matdyn.x: remote mpirun matdyn.x -i and the input file. Once more, this is simply a Fourier transform, so even though we are evaluating it at many points, we see indeed that it has already finished. And now we collect the files with the command rsync from HPC.
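For reference, here are minimal sketches of the two inputs just described, with the file names quoted in this session; the long q-point list of matdyn.x is truncated, the example coordinates are illustrative, and, as discussed in the Q&A later in this session, the fourth column carries the running length along the path. For q2r.x:

    &INPUT
      fildyn = 'silicon.dyn'
      zasr = 'simple'            ! acoustic sum rule (for the Born effective charges, when present)
      flfrc = 'silicon444.fc'    ! real-space force constants, written by q2r.x
    /

and for matdyn.x:

    &INPUT
      asr = 'simple'
      amass(1) = 28.086
      flfrc = 'silicon444.fc'    ! must match the q2r.x input
      flfrq = 'silicon.freq'     ! frequencies along the path, written by matdyn.x
    /
    396
    0.000000 0.000000 0.000000  0.0000
    0.012500 0.000000 0.000000  0.0125
    ... (the remaining q points along the high-symmetry path)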
We collect, first of all, all the output files. All right, so we now have the matdyn.silicon.out file as well as the q2r one. Then we also collect the silicon.freq file, which will be needed to plot the phonon dispersion: rsync from HPC silicon.freq. All right, we have a quick look at it. Here you have basically six bands, so six frequencies (because we have six modes), for each of the 396 q points. You find the coordinates of the q point on the first line, then the six frequencies; then the second q point and its frequencies; the third q point and its frequencies; and so on and so forth, for all the 396 q points along the high-symmetry path. In order to produce the plot, which is the very last step, we run plotband.x. This is the minimal information in the input file, which I guess you have perhaps already seen in the previous days and maybe already used to plot band structures. If I remember correctly, it is a serial code, so we run it directly here on the virtual machine: plotband.x with the input file, redirecting to the output file. OK, so let me see. OK. The plotband.x executable produced the frequency.ps and frequency.plot output files, which already contain a version of the picture of the phonon dispersion that we want; but to make it more appealing, let's say, we then use the gnuplot utility to plot the phonon frequencies. I already prepared for you a gnuplot script, plot_dispersion.gp, which will produce the final version of the picture, and we can have a look at the resulting file, phonon_dispersion.eps, with a viewer. So here you have a, let's say, more complete final picture, which also contains other curves that I got from other calculations. Let me go back to the presentation in order to comment on it properly. This is the phonon dispersion we obtained with the 4 by 4 by 4 grid. You see that the lowest frequencies are the acoustic modes, the modes with higher frequency are the optical modes, and there is a representation, again, of the Brillouin zone. However, one may ask whether our approach with the Fourier interpolation technique is accurate enough. Because, basically, we are computing the dynamical matrices on a very coarse grid of q points, the representation in real space of the interatomic force constants that we get from the Fourier transform is, of course, an approximation. So how good is this approximation, and how good is the interpolation that we are performing? This, of course, depends on the size of the mesh of q points: as usual, the denser the mesh, the better the approximation of the interatomic force constants and the more accurate the interpolation. How do we decide whether our mesh of q points is dense enough to get a reasonably accurate interpolation? Well, there are two ways to perform this kind of convergence test. The first way is to compute the very same phonon dispersion with several meshes of q points. For instance, here I produced a plot also with a 2 by 2 by 2 mesh, but you can do the same for denser meshes, if you wish.
You see that, going from the 2 by 2 by 2 to the 4 by 4 by 4 mesh, the phonon dispersions are still changing in a rather evident way, I would say; and this means that the 2 by 2 by 2 grid of q points is surely not dense enough to get an accurate representation of the interatomic force constants. You can, if you wish, go on to the 6 by 6 by 6 or 8 by 8 by 8 meshes. But remember that as you increase the size of your mesh, there are more q points, and therefore the execution of ph.x will be slower and slower; for this, I would highly recommend you to use the HPC facilities. So, as homework that I leave you for later: whenever you wish, you can perform this kind of calculation for other q grids, 2 by 2 by 2, 6 by 6 by 6. You collect the silicon.freq file that you obtain after performing the matdyn calculation, you run plotband for each of these q-point grids, and you can plot them using the gnuplot script that I have prepared for you: you include in the gnuplot script the names of the files containing the frequencies for each q-point grid, and you get a final nice picture. The other way is to do a direct calculation. Since the curves that we obtained are the outcome of an interpolation, you can check the interpolation by simply computing the real dynamical matrix and the phonon frequencies at particular q vectors. Basically, in this case we are doing single-q-vector calculations: for each q vector, we compute the phonon frequencies, and we represent them with a blue square. I have already performed this kind of calculation for you, just for the sake of showing the result clearly, and you see that the single-q-vector calculations fall quite well on top of the red curve obtained with the 4 by 4 by 4 mesh. So we can conclude that the 4 by 4 by 4 mesh of q points is dense enough to allow us to get a reasonably accurate interpolation of the dynamical matrices and of the phonon dispersion. If you wish to reproduce this kind of plot, I leave you another homework: you can run single-q-point calculations using the template that you find in example 1A. Basically, you need to specify the coordinates of the q vectors, each time using the coordinates of a different q vector, and so on and so forth. I have provided you with a list of possible q points that you may want to use; you find them in the reference folder, in the text file with the q points for the direct calculation. So these are two ways of checking the convergence of the Fourier interpolation. It is rather long, I would say, and divided into parts, this Fourier interpolation technique, but I hope I have been clear enough to let you understand at least the important main steps that you need to perform in order to compute a complete, smooth phonon dispersion. This is also a comparison with some experimental data that you can find online, and you see that the red curve reproduces very well the experimental data, which are identified with these magenta dots. Let me say a word on a phonon mode visualizer before we move on. As I told you, acoustic and optical modes are rather simple to imagine and to understand at gamma: acoustic modes are rigid translations.
Optical modes, instead, are characterized by motions of the atoms in the unit cell in counterphase. If you have more than one atom in the unit cell (for instance, for silicon we have two), the optical modes at gamma are characterized by the two silicon atoms moving in counterphase: when one moves up, the other moves down, and so on. But as soon as you go away from gamma, the wavelength of the phonon perturbation does not match the unit cell; the periodicity of the phonon perturbation is different from the periodicity of the crystal lattice, and it is much harder to understand and imagine how these phonon modes look in terms of atomic displacements. There are several tools online, but the one that I like the most is the one produced by Materials Cloud, which you can find on the Materials Cloud website. It reads directly the input file of pw and an output file produced by matdyn.x, called matdyn.modes. If you feed this online tool with these files, you get the phonon dispersion (the information on the frequencies is in the matdyn.modes file), and then you can click interactively on the phonon dispersion: you choose a q vector and the phonon branch that you like, and you will see, in an interactive way, how the atoms inside the unit cell move. You also always have the chance to see the unit cell of your system, so as to understand how the phonon displacements are modulated by the q vector. Just a final word, still on Fourier interpolation. The Fourier interpolation technique, as has already been said in the previous session, works very well when the phonon dispersion is sufficiently smooth. When the variations, the wiggles, of the phonon dispersion happen over a very short range of q points in the Brillouin zone, you will not be able to capture them with the Fourier interpolation technique. There are cases where this happens, for instance Kohn anomalies in metals: Kohn anomalies are basically wiggles of the phonon dispersion, and these wiggles are very narrow, appearing on a very short scale in q space, so if you do a Fourier interpolation, you might not be able to get them. Another class of systems where the Fourier interpolation somehow fails, or needs a more advanced correction, is that of polar insulators. In that case, as Professor Baroni already explained, the dynamical matrix also has a non-analytic part; that non-analytic part cannot be computed by the phonon code itself in the self-consistent solution of the linear system, but needs to be added separately. We will not cover Kohn anomalies in this hands-on tutorial; instead, we will address the case of polar insulators in the next two exercises. Before going on: I saw earlier that there was a raised hand; I don't know if that person would still like to ask their question. Andrea, you can read the last two questions in the chat on Zoom. Okay, let me see if I can find them. Okay, all right. One is: what are the differences between the dynmat.x and matdyn.x input files, and how is the .freq file built? All right. So dynmat.x performs a correction to the dynamical matrices.
There you specify input variables that are related to the acoustic sum rule and to the long-range correction, as we will see in the following exercises, whereas matdyn.x is a different code, which performs the Fourier transform back to reciprocal space. So the main difference is that in matdyn.x you basically provide a complete list of q points that you want for the Fourier interpolation, whereas dynmat.x only applies corrections to the dynamical matrix at a single q vector. As for the .freq file, that is the frequency file we saw a moment ago: it is produced by matdyn.x (you specify its name in the matdyn.x input) and it is then read by plotband.x. Andrea, sorry, just to let you know, you have half an hour left. Yeah, thank you very much, okay. Let me see the other question: how do we know the best number of q points for the inverse Fourier transform? Okay, well, the q points that you specify in the input file of matdyn.x: you can choose whatever you like. You can try with a given number and see how the plot looks. If you choose too low a number of q points, the phonon dispersion will look too piecewise, let's say, like a series of straight segments. So I would suggest some hundreds of q points in order to have a rather smooth phonon dispersion. Okay, so is that all, Yuri, or are there any more questions? Sorry, I have some questions myself. First, I missed something about the matdyn.x input file. Yes, let me go back to that. I think that after all the flags there is the k-path specification. Yes. So that k-path specification is just like the one taken from, for example, XCrySDen, as for electronic band structures? Exactly, yes. But okay, so the fourth number is the weight of the q points, I assume? No, the fourth number is actually a variable that represents, basically, the running length along the q-path: if you were to lay the whole q-path out on a line, as we actually do in the plot, the fourth number specifies the cumulative length along this line. Okay, so it's the distance to the successive q point? Yes, basically, yes; and it is an increasing number, so as you go along the q-path, this number increases. Okay. Then I have some other general questions about the code. First of all: can we access, from the diagonalization of the dynamical matrix, also the phonon eigenvectors, so the polarization vectors of the phonons? Yes, the eigenvectors, which I didn't mention: you can read them in the .dyn files. The silicon.dyn files, for instance, contain both the frequencies and the eigenvectors, so you can have a look at them there. Okay, and does imposing the acoustic sum rule also change the eigenvectors accordingly? Yes, yes, because you are changing the dynamical matrix, and the dynamical matrix is then diagonalized once more; therefore, the phonon eigenvectors are cleaned, let's say. Okay. And one last thing: what is the phase convention chosen to compute the dynamical matrix? Because you can use as a phase only the lattice vector, or the lattice vector plus the atomic position in the cell; this doesn't change the frequencies, but it changes the eigenvectors. Oh, okay. I guess it's the lattice vector, if I remember right. Okay, thank you. Andrea, there is a question from YouTube. Yes, yes, yes.
So, the question is: where do I specify the high-symmetry q points in the input for the dispersion curve? Okay, the high-symmetry points for the dispersion are specified here, in the matdyn.x file. Below the INPUT namelist you find a number that tells you the number of q points in the high-symmetry path, and then below it you find the complete list; so, in the input file of matdyn.x. Yes, if I can add, there is also another possibility: to add two keywords, like q_in_band_form. Exactly, q_in_band_form, yes. And also one for crystal coordinates, I think. So in that case, yeah, you can explain maybe that part. Yes, there is also a possibility to specify the path by giving only the initial and the final point of each segment of the path. For that purpose, you need to specify the flag q_in_band_form. It is something that you find in the description of the input variables of matdyn.x, which you can access from the toolbar of the internet browser of the virtual machine. In that case, you specify simply the first and the last point of each segment, so that you don't need to provide a long list of q points; and, of course, together with the first and last point you specify a number that says how many points you want the code to put in between the two extrema, the initial and final points of the segment. Okay, may I go on? Yes, of course. Okay. So, no more questions? Well, given the timing: yes, there are some, but maybe you can answer them later. Yeah, I think I can go on and... Yes, I think you should go on. All right. So, for exercise 2a, this is a brief recap of the theory that was shown in the previous session. When we are dealing with polar insulators, there is also an additional non-analytic term in the dynamical matrix. Non-analytic means that it depends on the direction along which we approach gamma; this is the true meaning of non-analyticity, so this contribution will depend on which direction we follow to approach the gamma point. This additional piece depends on the Born effective charges Z and on epsilon, the dielectric tensor, and both of them can be computed with an electric field calculation. In order to do so, let us perform the exercise. The first step is, as usual, a pw.x calculation. I go to example 2a; we use, as usual, remote mpirun pw.x -i and the input file. This time we are going to study aluminum arsenide, which has the same structure as silicon, so FCC with two atoms per unit cell: one is aluminum and the other is arsenic. We check the status of the calculation, which is finished, and then we perform the phonon calculation. Here, the only difference in the input file from the previous examples is that we specify the logical variable epsil and set it to true. This will make the code perform an electric field calculation, that is, the response to an external electric field, which will allow it to compute the Born effective charges and also the dielectric tensor. So we now run the phonon code, ph.x -i with the input file. This is a single-q-vector calculation, similarly to exercise 1A, and indeed the q vector here is 0, 0, 0. We wait for it to finish, and then we will have a look at the output file. At this stage, we are not yet inserting the correction that is needed for polar insulators.
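For reference, a sketch of the ph.x input just described for aluminum arsenide, with the dielectric-response flag enabled; the prefix and file names are illustrative, while the masses are the standard atomic masses of Al and As:

    phonons of AlAs at Gamma, with dielectric response
    &INPUTPH
      prefix = 'alas'            ! illustrative prefix; must match the pw.x run
      epsil = .true.             ! compute dielectric tensor and Born effective charges
      amass(1) = 26.98           ! Al
      amass(2) = 74.92           ! As
      tr2_ph = 1.0d-14
      fildyn = 'alas.dyn'        ! illustrative file name
    /
    0.0 0.0 0.0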
So we are computing only the analytic part of the dynamical matrix; therefore, the LO-TO splitting that was mentioned before, which is an effect purely due to the non-analytic part, will basically be absent. We take the output files, right? We have a look at the phonon output file; you can go directly from the slides. If you open the output file, you will see the frequencies; you also have here the information about the dielectric tensor and the effective charges, which you can find in the output file if you browse it. And if you look at the frequencies, you see that the three frequencies of the optical modes are exactly the same. This is still due to the symmetry of the system, because the system is cubic; but as soon as we include the non-analytic effects, a splitting will arise. So now I am going to show you how to include this kind of non-analytic correction. As I told you at the beginning, in exercise 1A, every time we need to introduce a correction on the dynamical matrix, we perform a calculation with dynmat.x. So we run dynmat.x. Here I show you the input file: compared to the previous time we saw it, this time we are also specifying the q vector that appears inside the correction, this q here. Because, as I told you, this non-analytic correction depends on the q vector, on the direction along which we approach gamma: basically, it is like we are taking the limit of the dynamical matrix for q going to gamma, but since this quantity is non-analytic, we need to specify along which direction we are actually taking the limit. In this case, I am performing a calculation along 1 0 0, and we can do it quickly: I just show you how to type the command, and then I show you the output file in the slide. So, remote mpirun dynmat.x -i and the dynmat input file. Once more, this is a very short, quick calculation; you can always check the status of your calculation with remote sq (it has already finished), and then get the output file, which is dynmat.out. Okay. If you open the output file, you will see that now the optical modes are not degenerate anymore. Two of them are still degenerate: these are the transverse optical modes, the modes for which the displacement of the atoms is perpendicular to the q vector; the longitudinal mode, instead, has a higher frequency. All right. So this is the LO-TO splitting at gamma. Exercise 2b, instead, aims to compute a complete phonon dispersion of aluminum arsenide, so that you will be able to see the LO-TO splitting also in the neighborhood of gamma. As usual, we switch to the example 2b folder. Now, this time I made things a little bit harder for you, because I erased a few input variables from the input files; but I have not been so mean, let's say, as to erase them completely, and I left some information inside the input files. For instance, in the ph input file, if you have a look at it, every time I erased an input variable I put a comment on the right. So you see here: you need to specify a prefix (and remember, it has to be the same as in the pw input file), then the mass of aluminum, the mass of arsenic, and then the size of the mesh. So this is more of a really hands-on exercise: you need to open the input files, check what information is missing, insert it, and try to run the calculation.
You can then follow the steps you find in the readme file, which are exactly the same as in exercise 1B. Since I see that we have more or less ten minutes before the end of the session, let me just ask you to perform the exercise by yourself and to report problems or issues, if you find any, in the Slack channel; I will be glad to answer and to help if needed. So I just show the result. Here is a recap of the steps you need to perform. It is not a long exercise; ph.x takes, as usual, a few minutes, so if you want you can also try to run it now. But remember that you need to check the input files carefully: if you miss any of the input variables that I erased, the code will of course crash. Let me show you the final picture that you should obtain. This is the phonon dispersion of aluminum arsenide, and here you see that, approaching gamma, there is a clear splitting between the transverse optical modes, which are the lower-frequency ones, and the longitudinal optical mode, which is the highest-frequency one. You always have this splitting when approaching gamma, and its size, as I told you, may depend on the direction you follow when approaching gamma: this is the true meaning of the non-analytic behavior, of the non-analytic contribution.

So, to move on to the third and final exercise, let me sketch what you are supposed to do. Basically, this is the calculation of a complete phonon dispersion, so again it is very similar, in terms of the steps you need to perform, to exercises 1B and 2B that we have just seen. From the practical point of view it is rather easy: you follow the instructions in the readme file and you can perform the calculation. But what I would also like to point out here is a physical effect that arises in this kind of two-dimensional material. As you have seen earlier in the school, when you are dealing with low-dimensional materials you need to specify the assume_isolated variable inside the input file of pw.x; this cuts off the dipole-dipole interactions, if any, between the system and its periodic replicas. For the phonon post-processing there is a similar flag, called loto_2d, a logical variable that needs to be set to true when dealing with low-dimensional, in this case two-dimensional, systems; it goes inside the input file of q2r.x and also in the matdyn.x input. This needs to be done because, as has been pointed out in a recent paper, when we are dealing with a two-dimensional polar material the LO-TO splitting is killed: if you are working in reduced dimensionality, the LO-TO splitting at gamma vanishes, and in order to reproduce this behavior you need to set this variable to true.

So here is the very short summary of the steps you need to perform. Since we have slightly less than ten minutes left, I don't think we are able to complete it, because this time the phonon calculation will be slower: it takes about five to ten minutes, even on the HPC facilities. But you can still try to perform it. As usual, you use the remote MPI run command to send the calculations to the cluster: first of all the pw.x calculation, then the phonon calculation, then q2r.x and matdyn.x. Then you collect the files produced by matdyn.x and, on the virtual machine, you run plotband.x and gnuplot to get the phonon dispersion. So let me just see here.
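To make the 2D-specific settings concrete, here are minimal sketches of what the q2r.x and matdyn.x inputs could look like. The file names, the prefix 'bn', and the q-point path are illustrative assumptions; loto_2d, q_in_band_form, and q_in_cryst_coord are the actual input variables discussed above and at the beginning of the session:

     &input
        fildyn  = 'bn.dyn'     ! dynamical-matrix files from ph.x (illustrative prefix)
        zasr    = 'simple'     ! acoustic sum rule on the effective charges
        flfrc   = 'bn.fc'      ! output real-space force constants
        loto_2d = .true.       ! 2D treatment of the long-range (LO-TO) term
     /

and for matdyn.x, with the path given in band form, that is, the initial and final point of each segment plus the number of intermediate points:

     &input
        asr              = 'simple'
        flfrc            = 'bn.fc'     ! force constants produced by q2r.x
        flfrq            = 'bn.freq'   ! output frequencies
        loto_2d          = .true.      ! must be consistent with the q2r.x setting
        q_in_band_form   = .true.      ! path given segment by segment
        q_in_cryst_coord = .true.      ! points in crystal (fractional) units
     /
    4
    0.000000 0.000000 0.0  30   ! Gamma   (illustrative hexagonal path)
    0.333333 0.333333 0.0  30   ! K
    0.500000 0.000000 0.0  30   ! M
    0.000000 0.000000 0.0   1   ! back to Gamma

And a sketch of the corresponding command sequence as dictated in this session, where remote_mpirun and remote_squeue stand in for the school's wrapper commands and nproc=8 for however the processor count is passed — a reconstruction from the spoken commands, not the literal scripts (the processor-count issue for ph.x is discussed right below):

    remote_mpirun pw.x -i pw.in              # ground-state SCF calculation
    nproc=8 remote_mpirun ph.x -i ph.in      # phonons; needs 8 processors, not the default 20
    remote_squeue                            # check the status of the job
    remote_mpirun q2r.x -i q2r.in            # Fourier transform to force constants
    remote_mpirun matdyn.x -i matdyn.in      # interpolate the dispersion along the path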
I would just like to send the phonon calculation, which however will take a while. Sorry, remember always to specify -i. And, yes. Can I interrupt? I think this is the example that will not run with 20 processors, right? I'm sorry, yes, you are right, Tony, I forgot. Remember: on the HPC cluster this example does not work with 20 processors, which is the default value set for these calculations. You need to specify a different number of processors for the parallelization, which we chose to be eight. So you need to correct the command by specifying nproc equal to eight before it: nproc equal to eight, then the remote MPI run command, the executable, -i, and the input file. Since I had already sent the phonon calculation without this, let me check. Okay, it probably already crashed, so no problem, I don't need to cancel it. So I go back to the remote MPI run phonon command here and specify nproc equal to eight in front of it. Okay, and in this case the phonon calculation will be performed by the HPC cluster. As I told you, it takes about five to ten minutes, so I don't think we have time now to go through the output files and complete the task. But I am confident you will be able to run also q2r.x and matdyn.x, which are very fast, and you should obtain the following phonon dispersion, where you see a couple of key signatures of low-dimensional materials. One is that the third acoustic mode, the one whose atomic displacements are perpendicular to the plane of the material, does not increase linearly going away from gamma, but quadratically. This is a key signature found also in graphene, for instance, and this kind of mode is usually called a flexural mode. Concerning the LO-TO splitting, instead: although boron nitride is a polar material, you see here that there is no LO-TO splitting at gamma, which again is a physical effect due to the low dimensionality, and it has been explained in a paper that I mention in the very last slide of this presentation. A different way to proceed, if you want, is not to use assume_isolated but simply to increase the vacuum, that is, the empty space between the replicas. This, however, is a rather inefficient way to proceed: an LO-TO splitting appears, which gets smaller and smaller as you increase the vacuum space, but you need a very large vacuum space to get zero LO-TO splitting, and eventually you would need an infinite one. This is a calculation I did myself, so I am just showing you the result: if you plot the LO-TO splitting as a function of the thickness of the vacuum space, you see that to go to zero you would need an essentially infinite vacuum space, because the dependence is linear on a log-log scale, which means that the splitting decreases as a power law with increasing vacuum space. So you can try to perform the calculation by yourself and report problems; of course, I will be more than glad to address them. You can find these slides on the website of the school. I leave you with a very, very minimal bibliography on the topic: you can use solid-state physics books to cover phonons and the LO-TO splitting, which is also covered in this book on the optical properties of solids by Mark Fox.
And then there are the fundamental papers that laid the foundations of density functional perturbation theory: the one by Baroni, Giannozzi, and Testa of 1987, the one about phonons of 1991, and the Reviews of Modern Physics paper of 2001. If you are interested in the effect of low dimensionality on the LO-TO splitting, you can have a look at the last paper in this list. But this is, as I told you, a very, very minimal bibliography, and a more extensive study is needed to really cover the topic. With this, I would just like to thank you for your attention. It's 12:30, but if you want to stay a little longer, five minutes or so, and there are still questions, I can take them now. Thank you.

Okay, thank you very much, Andrea, for this insightful presentation, with all the details and the procedure for performing these calculations. So I don't know whether there are questions; can people raise their hands? I can also browse the chat in Zoom. Yes, you can have a look there directly. Yes.

All right, so I'll go backwards. Why have we set loto_2d to true when there is no LO-TO splitting? Okay: if you do not set loto_2d to true, you do get an LO-TO splitting, because the code thinks we are working in three dimensions; and this is understandable, because you actually encapsulate your 2D material in a box, which is three-dimensional. So if you don't specify loto_2d to be true, you find an LO-TO splitting, which is unphysical; you need to set loto_2d to true, which as a result kills the LO-TO splitting.

Okay: how can we compute the zero-point energy using the phonon calculations at the gamma point? Well, thermal effects are computed using the whole phonon dispersion, and this can be done, for instance, with the thermo_pw utility, another code that can be downloaded and run with Quantum ESPRESSO and that computes thermodynamic quantities such as the zero-point energy, the specific heat, and other related quantities.

Then, let me see... okay: how about bending modes in the z direction for the last example? Bending modes along the z direction, perpendicular to the plane: I guess these are the flexural modes, the phonon branch with the quadratic dispersion near gamma. With this calculation you basically get all the phonon branches. But perhaps I am not getting the question correctly; I don't know whether Yuri wants to add something, if he understood the question better than me. Sorry, I was answering the other one; which question? Oh, sorry, yes, there is a question: how about bending modes for the last example, is it possible to compute them? Along the z direction, I guess, this is the flexural mode. I'm not sure I know the answer.

Okay, I think we can stop here. For any other questions you can write on Slack. Let's thank the speakers again: Andrea, all the tutors, and of course Stefano Baroni for the first talk. Yes, thanks a lot. Thank you. Thank you very much, and see you later; later we have the guest lecture, so see you at half past two. Thank you, bye.