In this lecture I will present the exchange-correlation functionals, with a focus on the ones available in CP2K. I will discuss the Grimme dispersion corrections, used to cure a deficiency of such functionals. Then I will introduce the concept of pseudopotentials, and finally I want to add a few words about basis sets. As we saw in the previous lecture, in density functional theory the electronic problem is solved through the Kohn-Sham equations, in which all the terms of the total energy functional are known exactly, apart from the so-called exchange-correlation energy functional E_XC, which depends on the electron density rho. The name of this term recalls that it should contain both the exchange interaction related to the Pauli repulsion, since we are dealing with electrons as in this case, and the Coulomb correlation, which is a measure of how much the motion of one electron is affected by the presence of all the other electrons. DFT is an exact theory in principle, but to be used in practice, that is, to be able to write and solve the Kohn-Sham equations, it requires a guess, an approximation, of the exchange-correlation functional. Over the years scientists have built and tested several functionals in order to provide educated guesses for this unknown term. The simplest approximation, and therefore also the first one to be devised, is the so-called local density approximation, or LDA. This assumes that the exchange-correlation energy at the point r in space is simply equal to the exchange-correlation energy of a homogeneous electron gas that has the same density at that point.
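In formulas, the local density approximation just described can be written as:

```latex
E_{xc}^{\mathrm{LDA}}[\rho] \;=\; \int \rho(\mathbf{r})\,
  \varepsilon_{xc}^{\mathrm{hom}}\bigl(\rho(\mathbf{r})\bigr)\, d\mathbf{r} ,
```

where \varepsilon_{xc}^{\mathrm{hom}}(\rho) is the exchange-correlation energy per electron of a homogeneous electron gas of density \rho, evaluated at the local density at the point r.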
The analytical form of the exchange term is easy to retrieve in this particular case, while the most common parametrizations for the correlation part are obtained by interpolating the accurate values coming from quantum Monte Carlo simulations of the homogeneous electron gas at various densities. By definition, the local density approximation ignores corrections to the exchange-correlation energy due to inhomogeneities in the electron density around the point r. Considering the inexact nature of this approximation, it may at first seem somewhat surprising that it was so successful in estimating, for example, many atomic properties. This can be partly attributed to the fact that LDA satisfies the correct sum rule for the so-called exchange-correlation hole, meaning that a total electronic charge of one electron is excluded from the neighborhood of the electron at the point r. In spite of its success, mainly for atomic properties, the local density approximation is known to overbind, particularly in molecules. For this reason, in chemistry, more sophisticated approximations are commonly employed, as, for example, the generalized gradient approximation, or GGA. This approximation attempts to incorporate the effects of inhomogeneities by including the gradient of the electron density. As such, one refers to this kind of approach as a semi-local method. Of course, there is no unique form for the GGA, and indeed many variants have been proposed over the years. Most of them are available in CP2K. An example is the very popular Becke-Lee-Yang-Parr approximation, in short BLYP, or the Perdew-Burke-Ernzerhof approximation, in short PBE, which you use in the practicals and whose analytic expressions I have sketched in this scary slide for your reference. Of course, I have no time to describe them in detail here.
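For comparison with the LDA expression, the generic semi-local GGA form mentioned above also depends on the density gradient:

```latex
E_{xc}^{\mathrm{GGA}}[\rho] \;=\; \int f\bigl(\rho(\mathbf{r}),\, \nabla\rho(\mathbf{r})\bigr)\, d\mathbf{r} ,
```

where the specific function f is what distinguishes one GGA variant (BLYP, PBE, ...) from another.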
Just to summarize, the BLYP authors found most of the functional parameters by fitting experimental data, while the PBE functional was built mainly from analytical and theoretical arguments. The generalized gradient approximation significantly succeeds in reducing the effects of LDA overbinding. But it is still problematic in some contexts, for example, the estimation of static properties, like the atomization and dissociation energies, the bond lengths, and the vibrational frequencies, but also the estimation of dynamical properties, such as the diffusion coefficients in liquids, like in water, due ultimately to a poor description, in this case, of the covalent OH bond stretching of the water molecule. Historically, one of the first ways to go beyond the generalized gradient approximation led to the development of the so-called hybrid functionals. These are a class of approximations to the exchange-correlation energy functional that incorporate a portion of the exact exchange from Hartree-Fock theory, with the rest of the exchange-correlation energy from other sources, both ab initio and empirical, and in particular from the generalized gradient approximation. A popular example of a hybrid functional is the B3LYP functional, derived by combining the BLYP correlation with the exact exchange in this peculiar way. Another one is the PBE0 functional, whose correlation part comes from the PBE functional. Hybrid functionals significantly improve the accuracy of all the molecular properties mentioned before. Regarding the cons, a drawback of hybrid functionals is that the exact exchange term is computationally expensive to calculate within the framework of plane-wave basis sets.
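As a concrete example of the mixing just mentioned, the PBE0 functional combines a fixed fraction (one quarter) of exact Hartree-Fock exchange with PBE exchange and correlation:

```latex
E_{xc}^{\mathrm{PBE0}} \;=\; \tfrac{1}{4}\, E_x^{\mathrm{HF}}
\;+\; \tfrac{3}{4}\, E_x^{\mathrm{PBE}} \;+\; E_c^{\mathrm{PBE}} .
```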
Another issue of the hybrid functionals, which is really shared with the previously mentioned functionals, and in fact is intrinsic to DFT, is that there are still difficulties in properly describing intermolecular interactions, which are of critical importance to understanding, for example, chemical reactions: especially the van der Waals forces, and in particular the dispersion forces. The incomplete treatment of this kind of forces can adversely affect the accuracy of DFT in the treatment of systems which are dominated by dispersion, for example interacting noble gas atoms, or where dispersion competes significantly with other effects. This is, for example, the case of biomolecules or large systems in general. How can we overcome these deficiencies? Let's go a bit more in depth into this problem. The London dispersion forces are a type of force acting between atoms and molecules that are normally electrically neutral and symmetric, like the fullerenes in the picture. That is, the electrons are symmetrically distributed with respect to the nuclei in the atoms, and therefore there are no net charges, but also no permanent dipoles. In fact, the London dispersion can be considered as a long-range electron correlation effect. If R is the separation distance between the two interacting neutral objects, the London dispersion energy term can be approximately described as asymptotically scaling, for large R, as 1 over R to the power of 6. This should bring to mind one of the two terms, the attractive one, in a Lennard-Jones potential, that is, the potential that you find in any molecular dynamics force field to phenomenologically describe the van der Waals interactions. Now, this London part of the correlation is not included in standard Kohn-Sham DFT. Why? The technical reason can be traced back to the absence in DFT of a correct description of the quantum fluctuations, that is, the excitations to virtual orbitals, that is, to unoccupied orbitals.
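To make the 1 over R to the 6 tail concrete, here is a minimal sketch of the Lennard-Jones pair potential in reduced units (epsilon and sigma here are generic placeholder parameters, not values from the slides): its attractive term is exactly the phenomenological London dispersion contribution mentioned above.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units.

    The -(sigma/r)^6 term phenomenologically models the attractive London
    dispersion; the (sigma/r)^12 term models the short-range Pauli repulsion.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

At large R the repulsive term is negligible and the energy approaches -4 epsilon sigma^6 / R^6, i.e. the London 1/R^6 tail; the minimum sits at R = 2^(1/6) sigma with depth -epsilon.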
The quantum fluctuations become important when and where the electron density is almost zero. This is the case when there is a large separation distance between the neutral objects, where therefore the density does not contain a significant dispersion signature. Various approaches are currently used and under development to accurately model the London dispersion interaction within DFT. In this slide I have collected the names of the most important ones. However, here I would like just to shortly describe one of the most popular, the DFT-D method, also known as the Grimme dispersion corrections, because this approach is used in the tutorial. Actually, the term Grimme dispersion correction can refer to at least three different models, developed over the years and with increasing complexity. The most recent ones, called DFT-D2 and DFT-D3, are implemented in CP2K. In general, in the DFT-D schemes, the total energy is calculated as a sum of the usual self-consistent Kohn-Sham energy, as obtained from the chosen density functional, and a dispersion correction, which is in turn a sum of two-body and three-body energies. The correction employed in the tutorial is the DFT-D2 scheme, which, unlike DFT-D3, contains only two-body terms. In this dispersion correction scheme the sum is over all the pairs of atoms: Rij is the interatomic distance between atoms i and j; s6 is a global scaling parameter depending on the choice of the employed functional; the C6 values are calculated from the empirical atomic dispersion coefficients according to this expression; and finally f_damp is a function that damps the dispersion correction at shorter interatomic distances, in order to avoid near-singularities at small distances, but also the double counting of mid-range correlation effects at intermediate interatomic distances. In the first lecture we discussed the concept of basis set.
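A minimal sketch of the two-body DFT-D2 correction just described, assuming the standard Grimme combination rule C6_ij = sqrt(C6_i * C6_j) and a Fermi-type damping function with steepness d = 20; the atomic C6 coefficients and van der Waals radii passed in are placeholders, and the actual tabulated parameters and the s6 value used by CP2K depend on the chosen functional.

```python
import math

def c6_pair(c6_i, c6_j):
    """D2 combination rule: geometric mean of the atomic C6 coefficients."""
    return math.sqrt(c6_i * c6_j)

def f_damp(r, r_r, d=20.0):
    """Fermi-type damping that switches the correction off at short range.

    r_r is the sum of the van der Waals radii of the atom pair.
    """
    return 1.0 / (1.0 + math.exp(-d * (r / r_r - 1.0)))

def e_disp_d2(positions, c6, r_vdw, s6=0.75):
    """Two-body DFT-D2 dispersion energy over all atom pairs:
       E_disp = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij).
    """
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = math.dist(positions[i], positions[j])
            e -= (s6 * c6_pair(c6[i], c6[j]) / r_ij ** 6
                  * f_damp(r_ij, r_vdw[i] + r_vdw[j]))
    return e
```

For a well-separated pair the damping is essentially 1 and the correction reduces to the bare -s6 C6 / R^6 London term; at short range f_damp smoothly kills it, as discussed above.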
We mentioned the two main classes of basis sets, the localized basis sets and the non-local, or plane-wave, ones, and we also mentioned that CP2K somehow takes advantage of both of them. A drawback of the plane-wave basis set is that, to describe atomic wave functions, a large number of basis functions is needed, much larger than the number necessary to reach the same accuracy with a localized basis set. In this table there is a basis set size comparison for representing a simple 1s Slater-type function; note how large the difference is across the atoms. The real problem is to accurately describe the wave functions of those electrons that are closer to the nucleus, because those wave functions show more oscillations near the nucleus than the ones associated with the outermost electrons. Therefore, an idea to overcome this plane-wave basis set issue is to replace the electronic degrees of freedom that are more problematic to represent with plane waves by effective potentials, added to the Hamiltonian in order to correct the dynamics of the remaining electrons and compensate for the missing interactions with the removed electrons. Of course, we would like these effective potentials, or pseudopotentials as they are commonly called because they do not represent any real interaction, to be additive and transferable, which imposes choosing only atomic pseudopotentials, that is, one potential for each atomic species, and removing only core electrons, that is, the chemically inert ones. To sum up, the idea is to replace the full potential in the Kohn-Sham equations, that is, the all-electron potential, with the interaction potential between the valence electrons plus the pseudopotential associated with each atom of the system. The core electrons are eliminated and the valence electrons are described by the so-called pseudo wave functions, with significantly fewer nodes, that is, oscillations, close to the nucleus.
This allows the pseudo wave functions to be described with far fewer basis functions, making the plane-wave basis set practical to use. In this approach only the chemically active valence electrons are therefore treated explicitly, while the core electrons are frozen, being considered together with the nuclei as rigid, non-polarizable ion cores. Once the level of theory to use has been chosen, that is, once the exchange-correlation functional has been chosen, the pseudopotential for each atomic species can be derived from an atomic reference state by requiring that the pseudo and the real, or all-electron, valence wave functions have the same energies and amplitudes, and thus the same density, outside a chosen core cutoff radius Rc. However, this simple condition is not sufficient to uniquely determine the set of atomic pseudopotentials. Different alternatives are possible and additional conditions can be imposed. Therefore, many different pseudopotential recipes have been devised over the years, with different features, pros and cons. One of the most widely used classes of pseudopotentials is the so-called norm-conserving pseudopotentials, which require the four conditions listed here to be met. Even these additional conditions are, though, not sufficient to uniquely determine the analytical form of the pseudopotential for any atomic species. Therefore, over the years many different kinds of norm-conserving pseudopotentials have been developed, and here I have listed some related references. Most of the available pseudopotentials in CP2K are of this kind, including the Goedecker-Teter-Hutter pseudopotentials, or GTH pseudopotentials, sometimes called Gaussian and dual-space pseudopotentials, which are the ones employed in the practicals of this course. As for many other norm-conserving pseudopotentials, the GTH ones are formed by a local part, which does not depend on the angular momentum, and a non-local part, which does.
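The matching requirement outside the core region, together with the norm-conservation condition inside it, can be written for each angular momentum channel l as:

```latex
\psi^{\mathrm{PS}}_{l}(r) \;=\; \psi^{\mathrm{AE}}_{l}(r)
\quad \text{for } r > r_c ,
\qquad
\int_0^{r_c} \bigl|\psi^{\mathrm{PS}}_{l}(r)\bigr|^{2}\, r^{2}\, dr
\;=\;
\int_0^{r_c} \bigl|\psi^{\mathrm{AE}}_{l}(r)\bigr|^{2}\, r^{2}\, dr ,
```

where PS denotes the pseudo and AE the all-electron radial wave function; the second equality is the norm conservation that gives this class of pseudopotentials its name.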
Some details are reported in this slide for your reference, but are not relevant now. What is important to emphasize here is the reason why GTH pseudopotentials are so popular within the CP2K community and for us. These pseudopotentials are separable, a nice property for efficient computation: they give optimal efficiency in numerical calculations using plane waves as a basis set. Moreover, at most only seven coefficients are necessary to specify their analytical form. But above all, they have optimal decay properties in both real and Fourier space; from this comes the name dual-space pseudopotentials. Because of this property, the application of the non-local part of the pseudopotential to a wave function, usually the most computationally expensive part of the calculations involving the pseudopotential, can be done efficiently on a grid in real space. And real-space integration is much faster for large systems, like biomolecules, than ordinary multiplication in Fourier space, since the scaling of this operation in real space is quadratic with respect to the size of the system, while in Fourier space it is cubic. Therefore, the GTH pseudopotentials significantly contribute to the CP2K capability to scale very efficiently with respect to the size of the system, a feature for which CP2K stands out from many other quantum codes. If you use pseudopotentials, as you do in the practicals by employing the GPW scheme, that is, the Gaussian and plane waves approach, the choice of the localized basis set has to be made in combination with the specific chosen pseudopotential class. In the case of the GTH pseudopotentials, CP2K offers many possible basis sets among which to choose. The first choice is about the type of basis set, that is, the basis functions to consider.
We mentioned before the Slater functions, in principle very suitable as basis functions, because they are very similar to the orbital solutions of the Schrodinger equation for an atom. However, nowadays a more common and computationally efficient choice is to use Gaussian-type functions. These primitive functions resemble less an orbital solution of the atomic Schrodinger equation, but by combining some of them together in a linear way, building the so-called contracted functions, we can obtain results similar to the ones with the Slater functions, and this kind of basis set is computationally more efficient, because computing integrals of Gaussians is much easier for a computer than computing integrals of Slater functions. The second choice in selecting the basis set is about its accuracy, which corresponds in this case to the number of basis functions you want to use to describe the atomic wave functions. The smallest basis set employs only enough functions for a minimal description of the occupied orbitals of the neutral atoms, and is called a minimal or single-zeta (SZ) basis set. Zeta refers to the letter usually used for the exponent of the primitive functions. For example, for hydrogen and helium atoms a single-zeta basis set has only a single s function; for the elements in the second row of the periodic table, it means two s functions, 1s and 2s, and one set of p functions, that is 2px, 2py and 2pz. The next improvement of the basis set is doubling all of the basis functions used for each atomic orbital, producing a double-zeta (DZ) type basis. Then the next steps up in basis set size are triple zeta (TZ), quadruple zeta (QZ), and so on. In the names of the basis sets reported in this table, which correspond to the names you can find in CP2K, the letter V always appears, which stands for valence and refers to the fact that, by using pseudopotentials, our electronic degrees of freedom will be only the valence electrons and not the core ones.
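A small numerical sketch of the contraction idea just described: three primitive Gaussians are combined linearly to mimic a 1s Slater orbital. The exponents and coefficients below are the standard STO-3G 1s values for a Slater exponent zeta = 1, quoted from memory, so treat them as illustrative.

```python
import numpy as np

# STO-3G-style contraction of three primitive Gaussians for a 1s Slater
# orbital with exponent zeta = 1 (values quoted from memory, illustrative).
alphas = np.array([2.227660, 0.405771, 0.109818])   # Gaussian exponents
coeffs = np.array([0.154329, 0.535328, 0.444635])   # contraction coefficients

r = np.linspace(0.0, 20.0, 20001)
dr = r[1] - r[0]

# Normalized primitive s-type Gaussians: (2a/pi)^(3/4) * exp(-a r^2)
prims = (2.0 * alphas[:, None] / np.pi) ** 0.75 * np.exp(-alphas[:, None] * r ** 2)
cgf = (coeffs[:, None] * prims).sum(axis=0)          # contracted Gaussian function

sto = np.exp(-r) / np.sqrt(np.pi)                    # normalized 1s Slater orbital

# Radial integrals on the grid: norm of the contracted function and its
# overlap with the Slater orbital it is meant to approximate.
norm = np.sum(4.0 * np.pi * r ** 2 * cgf ** 2) * dr
overlap = np.sum(4.0 * np.pi * r ** 2 * cgf * sto) * dr
print(f"norm = {norm:.4f}, overlap with STO = {overlap:.4f}")
```

The contracted function is normalized and overlaps the Slater orbital almost completely, which is why contracted Gaussians can stand in for Slater functions while keeping the cheap Gaussian integrals.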
In addition, as you can see from this table, as the size of the basis sets increases, they are typically complemented with additional functions called polarization functions, identified by the final P. In fact, to improve the accuracy in representing the molecular orbitals, functions with higher angular momentum than the valence orbitals have been shown to become important, in particular for a better description of bonding, but also for taking into account polarization effects, hence the name. In the tutorials, a double-zeta valence basis set with a single set of polarization functions is used. In fact, this basis set is the smallest basis set commonly considered suitable for production runs. Moreover, among the different basis sets offered by CP2K, you will probably focus on the so-called MOLOPT subclass, that is, the basis sets optimized for molecular calculations. This optimization is done by fitting the parameters, like the zeta exponents of the primitive functions and the coefficients of the contracted functions, with respect to a training set of small molecules formed with different elements and with different coordination environments. OK, the lecture ends here. If you have questions and doubts, you can ask them in the Q&A session.