So, Tone, in today's lab you're going to show how to use the remote command, right? Yes, I will try to do it at the end of the presentation if I have time; otherwise Ari will probably show how to do that. I was saying it's better if we do it today, because tomorrow I'm not here. Definitely today, because today the exercises start to be heavier, and some of them will definitely be run on the HPC cluster. Okay, I see. Perfect. Sorry, the noise was me. Okay, I think we can start. Good morning, good afternoon, good evening to everyone. Welcome to the third day of the MaX School on Advanced Materials Modelling with Quantum ESPRESSO. I'm Alessandro Stroppa, and today I will be chairing this first session. Today there will be a lecture on forces, stresses, geometry optimization, the search for saddle points, and the nudged elastic band method. Just a few preliminary remarks: after the talk there will be a question session, and you can raise your hand and you will be given the floor to ask your question. For those attending on Zoom, we kindly ask you to write your questions on the Slack channel, in the day-three thread. The speakers today are Pietro Delugas from SISSA, Italy, and Anton Kokalj from the Jožef Stefan Institute, Slovenia. So please. Alessandro, just before Pietro starts: maybe we should agree on how we do the question session, because first Pietro will present a talk and then I will present mine, so I suggest that when Pietro finishes there is a question session for his talk, and then I can start. Okay, so after Pietro we will have a question session. Okay.
Thank you, please. Okay, let me share my presentation and go to full-screen mode. Can you see the presentation? Good morning, everybody, and greetings to everybody. I will give the first part of this lecture, and I will talk to you about the force calculation and how we can use this information to perform structural optimization. I have also added a few slides about doing Born-Oppenheimer molecular dynamics with pw.x. This is the outline of this part of the lecture: first I will introduce briefly what the structural parameters are that I'm talking about; then how forces are computed in Quantum ESPRESSO — or rather, the theory behind the computation of the forces, the implications that this theory has, and the caveats that one has to keep in mind when using this information to perform structural optimization. Then I will speak about the stress computation, and then I will present the optimization methods that are available in pw.x. I forgot to add it here in the outline, but there will also be a few slides about Born-Oppenheimer molecular dynamics. So, what are the structural parameters? When we define a system to be computed with pw.x in Quantum ESPRESSO, we specify the number of atoms and the species, and in particular when we specify the species we specify the pseudopotential that is used to simulate that kind of ion. Then we specify the positions of the ions, the spatial relations between the ions, and I'm calling this the structure of our system. In Quantum ESPRESSO, as we have already seen in the first two lectures, we are using periodic boundary conditions, so we are dealing with a periodic infinite system.
So to specify the structure we need two things: we need to specify the atomic positions, but we also need to specify the lattice parameters — what the periodicity of the cell is and how these atoms are repeated periodically, infinitely many times, in space. Obviously it's not possible to change the lattice parameters without also changing the atomic positions, because these two quantities interact. To keep them well distinguished, we usually use the so-called fractional (crystal) coordinates, in which the position of each atom in the unit cell is expressed in terms of components along the lattice vectors. The x_i numbers that I'm indicating here are numbers between zero and one, so they specify the position of the atom in the unit cell, and they are repeated periodically all over space, as I was saying. All these structural parameters — the positions of the ions and the lattice parameters — define what we call a potential energy surface on which our system may move, and the purpose of structural optimization is to look for minima in this potential energy surface, or to understand how the atoms may move on it. To be able to do this, we need to compute the derivatives of the potential energy surface with respect to the parameters. The first derivatives are the forces, which are the derivatives of the total energy with respect to the atomic positions. We can divide them into two parts: a first part, given by the derivative of the electronic part of the energy, and a second part, given by the derivative of the electrostatic interaction between the ions.
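As a concrete illustration of the fractional coordinates mentioned above, here is a minimal sketch (the cell vectors and the atomic position are hypothetical values, not taken from the lecture): the Cartesian position of an atom is recovered as the sum of the fractional components times the lattice vectors.

```python
import numpy as np

# Lattice vectors as rows (a hypothetical tetragonal cell, in angstrom).
cell = np.array([[4.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0],
                 [0.0, 0.0, 6.0]])

def crystal_to_cartesian(frac, cell):
    """Cartesian position r = sum_i x_i * a_i, for fractional coordinates
    x_i in [0, 1) and lattice vectors a_i stored as the rows of `cell`."""
    return np.asarray(frac) @ cell

# An atom at the body center of the cell:
r = crystal_to_cartesian([0.5, 0.5, 0.5], cell)
```

Changing the lattice vectors while keeping the fractional coordinates fixed is exactly how the two sets of parameters are kept well distinguished.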
The second part depends trivially only on the positions of the atoms. For the first part, to express it as a function only of the density, we need to use the Hellmann-Feynman theorem, which applies in DFT just as it applies in standard quantum mechanics. Thanks to this theorem, we know that we can compute the derivative of the electronic part of the total energy just as the expectation value of the derivative of the potential with respect to the ionic position — and, for a local potential, just as the integral of the derivative of the potential times the density. So we are using the Hellmann-Feynman theorem. Since the demonstration is easy, I will not give it here in full, but let me just say what it is based on. It is based on the fact that the energy is variational with respect to variations of the density. That means that at convergence — when you are at the minimum of the energy — the partial derivative of the total energy with respect to the charge density, integrated against any variation of the charge density that itself integrates to zero (because we are conserving the total charge of the system), also integrates to zero. This requirement means that the functional derivative of the total energy with respect to the charge density is just a constant and doesn't change in space. So when we compute the derivative of the total energy with respect to the atomic positions, we have in principle two terms. One term is the one present in the Hellmann-Feynman forces. The second term, instead, is the possible contribution to the forces given by the variation of the charge density when we move the ions. This term integrates to zero because of the condition above.
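In symbols, the force decomposition described above reads as follows (standard Hellmann-Feynman expressions; the notation is mine, not copied from the slides):

```latex
\mathbf{F}_I \;=\; -\frac{\partial E_{\mathrm{tot}}}{\partial \mathbf{R}_I}
\;=\; -\Big\langle \Psi \Big| \frac{\partial \hat{V}_{\mathrm{ext}}}{\partial \mathbf{R}_I} \Big| \Psi \Big\rangle
\;-\; \frac{\partial E_{\mathrm{ion\text{-}ion}}}{\partial \mathbf{R}_I},
```

and for a local potential the electronic term reduces to an integral over the density,

```latex
-\Big\langle \Psi \Big| \frac{\partial \hat{V}_{\mathrm{loc}}}{\partial \mathbf{R}_I} \Big| \Psi \Big\rangle
\;=\; -\int n(\mathbf{r})\,
\frac{\partial V_{\mathrm{loc}}(\mathbf{r})}{\partial \mathbf{R}_I}\, d\mathbf{r}.
```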
So the functional derivative of the total energy with respect to the charge density is a constant, and this fact allows us to compute the forces just as an expectation value. But this is true only if we are at full self-consistent convergence. So our forces are accurate only when we are very close to convergence; otherwise we have an error, an inaccuracy, in the evaluation of the forces. pw.x actually prints an estimate of the value of this term — of this possible inaccuracy in the evaluation of the forces — and it does it with an approximate correction that was proposed by Chan and co-workers in a paper of 1993. For doing this, we need an estimate of the derivative of the charge density with respect to the ionic positions, and this is estimated using a superposition of atomic charge densities, computing its derivative with respect to the atomic positions; the functional derivative of the total energy with respect to the charge density, instead, is estimated using the difference of the self-consistent potential between the last two iterations. In this way we can compute the integral, and it gives us an estimate of the correction that we would have to apply to the forces. During relaxations or molecular dynamics simulations, if you look at the output of the program, you will see that it prints out this estimate, and if the inaccuracy that the program has estimated is comparable in magnitude with the forces that are actually calculated, it raises a warning. This problem may arise quite often when you are performing structural optimization, because obviously you arrive at a point where the forces are very small, and so you need more precision in the forces.
The program usually progressively lowers the threshold for the self-consistent convergence, so it tries to avoid this problem. But it's always good to look at this estimate and see if any issue is arising there, because especially when the forces are very small there is always the possibility of an inaccurate evaluation of the forces, and then the dynamics, or the structural optimization, fails. OK, the second derivative that we are going to compute is the one with respect to the lattice vectors. To characterize the change of the lattice vectors, we use the strain tensor. The strain tensor is defined to describe a uniform deformation of a periodic infinite system, and it is a linear function: you take any point in space, and its coordinates pass from r to r' according to this linear relation. So any change in the lattice vectors may be described by this quantity e_alpha,beta. As for the derivatives: in analogy with the forces, we use the stress tensor, which is computed as the derivative of the energy per unit volume with respect to the strain. So it's minus 1 over omega, where omega is the volume of the unit cell, times the derivative of the total energy per unit cell with respect to the strain. Note that, as written here, there is no constraint that e_alpha,beta must be symmetric or antisymmetric — it could be anything. But in fact only the symmetric part of this tensor affects the total energy, because while the symmetric part is actually a strain, the antisymmetric part turns out to be only a rigid rotation of the system, which doesn't change the total energy. So if by any chance you are going to evaluate the stress by finite differences, you should be aware that your strain must be symmetric, or you have to symmetrize it before computing this derivative. The stress tensor has in total nine components.
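In formulas, the deformation and the stress just defined are (standard notation, not copied from the slides):

```latex
r'_{\alpha} \;=\; \sum_{\beta} \left( \delta_{\alpha\beta} + e_{\alpha\beta} \right) r_{\beta},
\qquad
\sigma_{\alpha\beta} \;=\; -\frac{1}{\Omega}\,
\frac{\partial E_{\mathrm{tot}}}{\partial e_{\alpha\beta}},
```

where $\Omega$ is the volume of the unit cell.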
It has the form of a matrix. The diagonal terms of the stress tensor are related to the pressure: they are the components directed perpendicular to the faces, i.e. the forces per unit surface that each unit cell exerts on the neighboring unit cells. Together with the diagonal terms, we also have the shear components of the stress — the off-diagonal ones — which express the tendency of the system to undergo deformations parallel to the surfaces of the unit cell and tend to deform the angles between the lattice vectors. The stress, too, is a first derivative with respect to an external parameter, so here as well we can use the Hellmann-Feynman theorem. It is a little bit more complicated to compute, and it was worked out by Nielsen and Martin in the 1980s. In this case it doesn't depend explicitly on the ionic positions; it must be computed as the derivative of the kinetic energy operator and of the Hartree and exchange-correlation terms with respect to a uniform deformation of space. So it involves the components of the plane waves, and this means that the stress actually depends on the basis set you are using, at variance with the forces. I forgot to mention: with plane waves, the forces have no Pulay components, so the forces do not depend on the basis set — this is a pro of plane waves. This is not true for the stress: the stress does depend on the basis set that you are using. This is the thing that Ralph Gebauer also mentioned in the first lecture: if you change the lattice parameters — for example here, if you decrease or increase the value of the lattice constant a — you see that the density of wave vectors inside the cutoff sphere changes, so you are changing the number of wave vectors in your basis set.
And when you deform the cell, you are also actually changing the lengths and the metric of this basis set, and this affects the total energy and the stress. For this reason, the evaluation of the energy and of the stress can be discontinuous when you are using a cutoff that is not at full convergence. Full convergence means that your cutoff is so large that all the plane waves close to the border give a vanishing contribution to the total energy, to the stress, and so on. These discontinuities may thus force you to use a very large cutoff to compute the stress and to perform variable-cell relaxations. There is, however, another technique that allows you to avoid, or at least regularize, this problem: the so-called smooth cutoff. When you use the smooth cutoff, what you do is redefine the kinetic energy functional. Usually the kinetic energy is just g squared: for each plane wave you compute the square of the modulus of its wave vector, and that is the kinetic energy of that plane wave. But if you rewrite the kinetic energy operator in this way, you add another term, given by 1 plus the error function of (g squared minus a cutoff value), divided by a width sigma. When g squared is well below this cutoff — at a distance much larger than sigma — this error function evaluates to minus 1, so the added term vanishes and your kinetic energy is the same as in the standard kinetic energy functional. When g squared approaches the cutoff, instead, this value starts rising from minus 1 through 0 towards plus 1, and so you are adding a penalty term to the kinetic energy of the plane waves in this region.
So the plane waves in this region start carrying an energy cost, and when you perform the self-consistency — the minimization of the Hamiltonian — what happens is that the weight of these plane waves in the ground-state wave functions becomes smaller and smaller. Their weight changes smoothly from some sensible value to a penalized value, without an abrupt cutoff with wave vectors coming in and out of the evaluation of the kinetic energy operator. In pw.x this is implemented with three variables. You specify ecfixed, which is the value of this orange cutoff here; you select it to be a little bit smaller than the cutoff that you specify with ecutwfc, so that you have a region — whose width is defined by q2sigma — where the plane waves smoothly enter or leave the set of those significant for your system. And you also specify the height, this value here, with qcutz: how much you want to penalize the plane waves in this region. In this way you have a more regular evaluation of the stress, and you can also perform variable-cell simulations without being at full convergence with the cutoff. So, now that we have seen how we can compute forces and stresses, let's see how in pw.x it is possible to use these derivatives to perform structural optimization. You can perform two kinds of structural optimization. One is structural optimization at fixed cell: you leave the lattice vectors unchanged at their input values and you just relax the atomic positions, using the forces on the atoms as gradients. The second type is one where you optimize both the atomic positions and the lattice vectors; as coordinates for the lattice vectors, we use the cell strains.
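Going back to the smooth cutoff for a moment, the modified kinetic term can be sketched numerically. This is an illustration of the functional form only; the numerical values chosen for ecfixed, qcutz, and q2sigma are hypothetical, and the units are whatever you use for g squared (Rydberg, in pw.x conventions):

```python
import math

def smooth_kinetic(g2, ecfixed, qcutz, q2sigma):
    """Plane-wave kinetic term with a smooth cutoff: well below ecfixed it
    reduces to g^2 (the erf term is -1, so the penalty vanishes); near and
    above ecfixed a penalty of up to 2*qcutz is added smoothly, so plane
    waves enter and leave the significant part of the basis set gradually
    instead of abruptly."""
    return g2 + qcutz * (1.0 + math.erf((g2 - ecfixed) / q2sigma))

# Far below the step, the penalty vanishes and we recover g^2:
low = smooth_kinetic(10.0, ecfixed=40.0, qcutz=20.0, q2sigma=4.0)
# Far above it, the full 2*qcutz penalty applies:
high = smooth_kinetic(70.0, ecfixed=40.0, qcutz=20.0, q2sigma=4.0)
```

With this form, the energy cost of a plane wave rises continuously through the transition region, which is what regularizes the stress evaluation at a non-converged cutoff.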
And in this case, the gradients are given by the forces on the atoms, obviously, and by the stress tensor. You may use two different algorithms to perform structural optimization in pw.x. The most important, most used, and default one is BFGS: you specify ion_dynamics = 'bfgs'. Actually, you don't need to specify it, because it is the default value: if you don't specify anything, the program will assume that you want to use BFGS. When you want to perform a vc-relax calculation, you specify 'bfgs' for ion_dynamics, or just leave it unspecified, and you also specify the analogous variable for the cell vectors in the CELL namelist: cell_dynamics = 'bfgs'. There is also another algorithm, the so-called damped dynamics. In this case you specify ion_dynamics = 'damp' in the IONS namelist, while for cell_dynamics you have to choose between 'damp-pr', which means damped Parrinello-Rahman, and 'damp-w', for damped Wentzcovitch dynamics. You have to specify these when you perform a variable-cell run, because with this algorithm what you are actually doing is a molecular dynamics simulation which is damped. Damped means that you progressively drain the kinetic energy out of the system: the forces accelerate the system, increasing its kinetic energy as it falls towards the minimum, and in order that the system remains close to the minimum, you remove kinetic energy along the way. In Quantum ESPRESSO this is done in a very simple way: you monitor the velocity of each degree of freedom, and as long as the force and the velocity are parallel, you just keep the system falling down.
But as soon as you see that the force and the velocity point in opposite directions — which means that in that direction the system has passed the minimum and is now going uphill — you remove the kinetic energy of that degree of freedom. In this way the system forgets about the energy it accumulated before and starts going downhill again in that direction. The pro of this algorithm is that it's very robust, because it's very simple: it makes no assumption about the system, about the position of the minimum, or anything like that, and it works even if the starting point is not close to the minimum. It also has some less good qualities. To use it, especially in the variable-cell case, you need some experience, because you need to be able to understand whether you want the damped Parrinello-Rahman dynamics or the damped Wentzcovitch dynamics for your simulation; you need to choose the time step for the dynamics; and you need to understand what the best value for the fictitious mass of the cell is. The other drawback is that, in most cases, if you do the same optimization with BFGS, damped dynamics generally finishes slower. So one usually tries damped dynamics only when BFGS fails. Now I come to present very briefly how BFGS works. BFGS stands for the names of the four mathematicians who published this method: Broyden, Fletcher, Goldfarb, and Shanno. This is the algorithm used by default, as I was saying, and it generally works very well. It belongs to the family of quasi-Newton algorithms, and it starts from the assumption that you are close to the minimum, which means that your energy as a function of position may be expressed as the value at the minimum plus a term quadratic in the displacement.
So you need to make a hypothesis about how this quadratic form looks: you have to make a guess of the Hessian matrix, the matrix of second derivatives of the total energy, and along your minimization you have to update this guess. Actually, "BFGS" names precisely the algorithm for updating this Hessian matrix. The algorithm works in this way. You compute the forces — this is just the gradient of the energy — and then you compute the updated positions using the Newton-Raphson step. You are looking for a step 1 for which the gradient at step 1 minus the gradient at step 0 is given by the Hessian times the difference between the position at step 1 that you are looking for and the position at step 0. Imposing that the gradient at step 1 is zero, you find that your search direction is given by x_0 minus the inverse of the Hessian multiplied by the gradient at step 0. This means that, supposing you have a good guess for the Hessian, just knowing the position and the forces at step 0, you have a good guess of the position at step 1. In practice you don't do exactly this: you use this quantity as a guess of the direction in which to look for the minimum, and you set the new position as the old position plus a trust radius times the direction in which you want to go. In the following steps you then have to update your information, and in particular your inverse Hessian matrix. The update is quite complex to state; it is given by this formula, and by default it is a function of the search direction and of the change in the gradients between the two latest steps. This is the standard BFGS scheme for the update.
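The Newton-Raphson step and the inverse-Hessian update just described can be sketched as follows. This is a generic textbook BFGS on a model quadratic surface, taking full Newton steps; the trust-radius and Wolfe-condition logic that pw.x actually adds on top is deliberately left out here:

```python
import numpy as np

def bfgs_minimize(grad, x0, H0, max_iter=50, tol=1e-10):
    """Quasi-Newton minimization with the BFGS update of the inverse
    Hessian H. On a quadratic surface the secant information drives H
    towards the true inverse Hessian, so the steps become exact
    Newton-Raphson steps."""
    x, H = x0.copy(), H0.copy()
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x
        step = -H @ g                    # Newton-Raphson search direction
        x_new = x + step
        g_new = grad(x_new)
        s, y = step, g_new - g           # position and gradient changes
        rho = 1.0 / (y @ s)
        I = np.eye(len(x))
        # BFGS update of the inverse Hessian from the latest (s, y) pair
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Model potential energy surface E(x) = 1/2 x^T A x - b^T x, gradient A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
xmin = bfgs_minimize(lambda x: A @ x - b, np.zeros(2), np.eye(2))
```

Starting from the identity as the initial inverse-Hessian guess, the minimizer converges to the exact minimum of the quadratic model.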
There are also other schemes. In Quantum ESPRESSO you may select, with a variable called bfgs_ndim, an update that uses several steps from the search history instead of only the latest one. But obviously this assumes that you really are close to a minimum, because the more steps you use, the more you rely on the fact that you are close to a quadratic minimum. The other thing you have to do is to evaluate the trust radius. At each step, you check that your trust radius satisfies the two Wolfe conditions. The first Wolfe condition is that the new energy must be lower than the old one plus w_1 times the trust radius you are currently using times the gradient times the search direction. w_1 is a variable that you can set in the input of pw.x, and its default is very small, so this condition is usually verified any time your new energy is lower than your old one. If this condition fails, it means the step was too long: you went too far past the minimum, missed it, and you are now on the other side, going uphill. Then you also have to satisfy the second condition, the curvature condition, which instead checks that you are not staying too close to the previous point and that your slope has decreased enough: the estimate of the slope at the new step must be larger than w_2 times the estimate of the slope at the previous step. In this way you are checking that your step is not too short, and in that case the trust radius is increased in order to have faster convergence. This is the algorithm. It generally works fine, but it does have some problems in some cases. The first problem is that it usually converges to the closest minimum.
So from your initial position there is a basin, and the structural optimization will fall down to the closest minimum — which may not be the global minimum you are looking for, because if there is a potential barrier that prevents you from reaching that other minimum, you will not get there. But I think that Tone will show you other methods for finding the global minimum and for overcoming potential barriers. The other thing is that the structural optimization is anyhow constrained to keep the same crystal symmetry that you have at the starting point. Sometimes this is a good thing, because you are actually looking for a minimum that has the symmetry you imposed on the system at the start; but the global minimum may actually break this crystal symmetry, and your structural optimization will not be able to find it. Actually, there is also the opposite problem: sometimes you really are looking for a minimum within the constraint of the symmetry you imposed, but due to numerical noise the symmetry is broken, and the structural optimization gets a little bit lost and starts deforming, for example, your lattice vectors. In this case the optimization may fail because numerical noise is breaking the symmetry. The other problem arises when you perform a variable-cell optimization: the algorithm assumes that your starting lattice vectors are not too far from those you will get at the end of the optimization. For reasons of efficiency and of regularity of the evaluation of forces and stress, the basis set is usually kept fixed: you start the simulation, and the program checks the cutoff and determines which Miller indices of the basis set fall within it.
And that is the basis set that you will use all along the variable-cell structural optimization. What may happen, if your lattice vectors change significantly, is that this basis set becomes significantly different from the one appropriate to the final cell. So, as a final check, at the end of a variable-cell structural optimization, after it has obtained the final lattice vectors, the program takes the final vectors, recomputes the plane-wave basis set, and recomputes the total energy with it. In many cases you will find that the evolution of the energy in the relaxation is downhill — the value of the total energy keeps decreasing until you reach convergence — but then the very last calculation, the very last total energy, jumps. This is due to the fact that this last calculation has been performed with the updated basis set. So look at the energy difference between those two, and if they are significantly different, it may be worth performing a further check to see whether that really is the minimum you were looking for. The other important thing to remember is, again, that when you are performing a structural optimization, the forces become smaller and smaller, and so when you are very close to the minimum, the so-called self-consistency error that I showed you at the beginning of the talk may become significant and may hinder the convergence in the very last steps of the structural optimization. So now you have seen how to use the forces and the stress to perform structural optimization, and the last few slides are dedicated to how we can instead use the forces to perform Born-Oppenheimer molecular dynamics simulations with pw.x.
The principles are the same as in classical molecular dynamics. You have a kinetic energy given by the usual term, minus a potential energy defined by the values of the atomic positions plus — if you are doing variable-cell molecular dynamics — the parameters of the cell. With these two terms you write a Lagrangian, you apply the usual Lagrange equations, and you find your equations of motion, which are the usual ones, and you can use them to perform a molecular dynamics simulation. To actually perform the simulation, you always have to discretize: you pass from continuous time to discrete time steps. In pw.x, two Verlet-type algorithms are available: the standard Verlet algorithm and the velocity Verlet algorithm. Both of these methods allow you to do molecular dynamics simulations in the microcanonical ensemble — that means at constant energy, so that the total kinetic energy plus the potential energy is in principle conserved. Obviously, the conservation of the total energy depends on the accuracy of the forces. One important thing in the setup is to use a small time step: the time step should be something like 1/100 to 1/10 of the period of the fastest phonon or vibrational mode, so that the dynamics does not go too fast. The other point, as mentioned, is that you need forces that are accurate enough for the energy to be conserved; otherwise you will find that your dynamics has a systematic drift, due to the fact that your forces are not computing the derivative of the energy accurately enough. You accumulate error upon error, and in the end you have a drift of the conserved energy.
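The velocity Verlet scheme and the energy-conservation check can be sketched on a one-dimensional harmonic oscillator. All values here are hypothetical, and in real Born-Oppenheimer molecular dynamics the force at each step would come from a self-consistent DFT calculation rather than a spring constant:

```python
import numpy as np

def velocity_verlet(force, x0, v0, mass, dt, nsteps):
    """Velocity Verlet integration in the microcanonical (NVE) ensemble:
    with accurate forces and a small enough time step, the total energy
    (kinetic + potential) stays bounded instead of drifting."""
    x, v = x0, v0
    f = force(x)
    traj = [(x, v)]
    for _ in range(nsteps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force(x)                            # force at the new position
        v = v + 0.5 * (f + f_new) / mass * dt       # velocity update
        f = f_new
        traj.append((x, v))
    return traj

# Harmonic oscillator: F = -k x, period T = 2*pi*sqrt(m/k).
k, m = 1.0, 1.0
dt = 2 * np.pi / 100.0                  # ~1/100 of the vibrational period
traj = velocity_verlet(lambda x: -k * x, 1.0, 0.0, m, dt, 1000)
energies = [0.5 * m * v**2 + 0.5 * k * x**2 for x, v in traj]
drift = max(abs(e - energies[0]) for e in energies)
```

With a time step of about a hundredth of the period, the energy error stays small and bounded; with a much larger time step, or with noisy forces, the same quantity drifts systematically.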
For these reasons, performing molecular dynamics simulations with pw.x is rather expensive; in the lecture of day 8 you will see that it can be more convenient to perform Car-Parrinello molecular dynamics instead of Born-Oppenheimer molecular dynamics. And this is all — thanks for your attention. Thank you very much, Pietro. Sorry, Alessandro — I just wanted to mention, for the people following the streaming: unfortunately, today we made a mistake and activated the chat late. So if you want to ask the speaker a question, please reload your YouTube page, and you will find the same page with the chat loaded. Thank you, Pietro, for a very nice and detailed talk. The session is open for questions. Those attending on Zoom can raise their hand and we will unmute them. Do you have any questions? OK, let me read a question from the Slack channel: how can we show that the optimized structure is a global minimum? Because in principle we can be trapped in some local minimum. That's true. It depends on the system, obviously. If you start from a position that is close to the minimum you were looking for, you will probably end up there — and you can never be sure that it is the global minimum. If the system is simple and symmetric, you may assume that the minimum should sit at a position with some symmetry, so you can check the various symmetric points, start from those, and compare the energies of those configurations. For example, this is what people used to do with defects in silicon and germanium, in semiconductors: you started from the symmetric positions, you minimized each of them, and then you compared all these local minima and saw which was the most favorable one.
And then if the system is complex, actually, unless the local minimum is very deep, if you have many minima, also the entropy is important, I think. So it is good to find the minima, but for complex cases I think that techniques like metadynamics and other techniques for exploring the energy landscape are much more suited than structural optimization. Structural optimization is for systems where you have some good theoretical reason to assume that the minimum is in a few selected places. Otherwise, if you have to explore a complex energy landscape, it is much better to use more sophisticated techniques than just simple structural optimization. OK, OK, thanks, thank you very much. So there is a question from Dorye; I think you can ask the question directly. Yes, please. Good morning. Good morning, I'm from Spain. Looking at this interesting discussion of how atomic forces are calculated, I was thinking of a material which is being strained. Experimental scientists often know that at a certain level of strain the material is going to break. So I am afraid that if I am studying, for example, some properties which become more interesting as I stretch, then maybe the more I stretch, the more interesting my properties are going to be, but at a certain point they are going to be unreal. I suppose the physics behind this is complex, so maybe it doesn't work this way, but I think it would be really beautiful to see this effect, the extreme case when the material breaks, in the evolution of the atomic forces under strain. But I also noticed you pointed out that when atomic forces are really small, we have to be really careful. So is it possible to capture this scenario in which our results are going to become unreal? OK, as for the breaking of the material, there is a lot of work on fracture dynamics.
That was actually a very active field, where the breaking of the material usually happens at a scale that is much larger than the systems we usually treat. So you need to set up a machinery that is able to work in a multi-scale scenario, usually, because the breaking of the material happens at a larger scale, and you have to combine the computed forces and energies with other methods. There was a lot of work done with machine learning techniques, with learning-on-the-fly techniques, where you perform a simulation in which you train the forces on ab initio data for a certain time, and as long as the forces are fine you continue with your empirical forces fitted to the ab initio ones. And I think there is a lot of work on that. This is a thing that requires a multi-scale approach, where you cannot use just ab initio, because ab initio, even if done on large-scale machines, even with 10,000 atoms, even with 100,000 atoms, maybe for some cases where the material breaks is still not enough, and there are cases where the thermodynamics, the disorder, plays a role. So I think that in those cases combining the methods is much better. OK, OK, thank you. Thank you. Thank you, Pietro. So, some other questions. There is a technical one: how to calculate the cell parameters A, B, C when declaring ibrav = 0 in the input file? So I didn't get the question. How? You said ibrav equals 0, so you also specify in input the three lattice vectors, and that means that you already know them. And then, yes. And then, yeah, OK. No, maybe I understood the question. So yes: you have put there that you are using ibrav = 0, you have your lattice vectors given in input in the CELL_PARAMETERS card, and then the lattice relaxation, obviously, gives you the deformation in terms of the initial lattice parameters.
It may be a little complicated, but I think that the program prints out the deformation of the lattice vectors, so you just have to use the formula that I showed here, and with just this formula and the deformation parameters it would be possible to get the new ones. It is not directly immediate, but not that complicated, even if I think it is much harder to tell you by voice than to do it. So I didn't actually answer, but OK. So, some more questions: is there any option to calculate the vibrational frequencies of the structure during geometry optimization? OK, if the Hessian matrix were accurate, yes, but that would mean that you are actually close to a quadratic minimum, and you don't know that. Usually when you use BFGS, you just set the number of steps that you keep in the history to 1, which means that you actually don't trust too much the hypothesis that you are close to a quadratic minimum. So in principle yes, but in practice people don't do that. You first find the minimum, and then either you compute the Hessian with the phonon code or, if you wish, you can also compute it by finite differences: once you are in the minimum, you just displace all the atoms and compute the derivatives. OK? Thanks. Then, in vc-relax, how to choose which option for cell_dofree? OK, it depends on what you are looking for. For example, if you just want to optimize the ibrav parameters, so celldm(1), celldm(2), and so on, the lattice parameters, and you have specified your vectors with ibrav, I would go with the 'ibrav' option, which just keeps you in the Bravais lattice that you selected in the beginning, avoids any problem with symmetry, and goes straight there. If instead you don't know where you want to go, you can just leave the default, and vc-relax will look for the minimum by itself.
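The finite-difference route to the Hessian mentioned a moment ago (displace all the atoms, differentiate the forces) can be sketched in a few lines of toy Python; the two-atom spring potential here is just an illustrative stand-in, not a real force calculator:

```python
import numpy as np

# Finite-difference Hessian at a minimum: displace each coordinate by +/-delta,
# recompute forces, and build H[:, i] = -(F(x + d e_i) - F(x - d e_i)) / (2 d).
# Toy potential: two unit masses coupled by a spring, V = 0.5*k*(x1 - x0 - L)^2.
k, L = 4.0, 1.0
def forces(x):
    s = k * (x[1] - x[0] - L)
    return np.array([s, -s])         # equal and opposite spring forces

def fd_hessian(x, delta=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        xp, xm = x.copy(), x.copy()
        xp[i] += delta; xm[i] -= delta
        H[:, i] = -(forces(xp) - forces(xm)) / (2 * delta)
    return 0.5 * (H + H.T)           # symmetrize against round-off

x0 = np.array([0.0, 1.0])            # equilibrium geometry
evals = np.linalg.eigvalsh(fd_hessian(x0))
freqs = np.sqrt(np.abs(evals))       # one zero (translation) and one sqrt(2k) mode
print(freqs)                         # ~ [0, 2.83]
```

In practice, of course, the forces would come from a converged SCF calculation at each displaced geometry, which is why people usually prefer the phonon code.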
Obviously, as I was saying in the talk, the symmetries that are in the starting configuration should be conserved; if they are not conserved, it is because of numerical noise, so it is not even sure that it is safe. But in any case, it depends on what you are looking for. All those options exist because, for example, if you have a two-dimensional system and you want to optimize just the in-plane lattice constants and not the normal one, you have to use the appropriate options for that; otherwise, if you want to constrain the in-plane lattice constants and optimize just the normal one, you have to do the opposite. There are many of them, and they are explained, hopefully well, in the INPUT_PW documentation. So I think we could take another question, but probably we have to give the floor to Tone. So, last question: can we use some umbrella-sampling technique to sample the energy landscape in molecular dynamics? There are, I mean, there are a lot of people working on that; there used to be a Plumed interface to perform such tasks, and now there is a very good interface with i-PI, which I don't know whether it performs this kind of task. But yes, you usually need to rely on external codes that use Quantum ESPRESSO just as a force engine, so Quantum ESPRESSO just computes the forces and they use them. There are a lot of possible ways to do that, but not by simply using Quantum ESPRESSO alone; you use Quantum ESPRESSO together with some external software. OK, so I think we can stop here because there is the other part from Tone. We thank you very much, Pietro, for this nice talk and these explanations. The floor is now ready for Anton, who will continue with, I think, the nudged elastic band method. Please. Thank you very much, Alessandro. Now let me first try to share the screen. OK, let's see.
So can you see my screen? Yes. Yes, OK. So hello everybody. My name is Anton Kokalj, known as Tone to colleagues and friends. I will present the NEB method, which is a method that can, let's say, calculate saddle points. To give a bit of context for this saddle-point business, let me say that this will be about activated processes. Activated processes are processes that are characterized by an energy barrier, and a typical activated process would be a chemical reaction. A chemical reaction is characterized by bond breaking and bond making, and here you can see a very simple cartoon of a very simple reaction, where on the left side we have, let's say, the reactant, and on the right side we have the products. These will today also be called the initial state and the final state. These are important words for today, so remember that: the initial state is kind of the reactant, the final state is the product. So, in this simple reaction we are breaking the bond between the blue and the red atom, and the red atom makes a bond with the green atom. Such bond breaking and bond making is characterized by an energy barrier. Now, this of course is a very simple plot; the energy barrier here appears like a maximum along the reaction coordinate, but actually it is a saddle point. It appears here as a maximum because we use a one-dimensional projection. And here we have also some other important terms: the point of highest energy on the reaction path is the transition state, and the difference between the initial state and the transition state is the activation energy. This activation energy tells us something about the kinetics of the reaction, whereas this delta E that you see here is the reaction energy, which tells us something about thermodynamics. We know that, for example, the reaction rate constant is given by an Arrhenius-like equation, which is written here, so we see that the reaction rate constant is exponential in the activation barrier.
And this would be the frequency pre-factor, and by transition state theory this is the formula: basically, these are the frequencies of the normal modes of the initial state divided by those of the transition state, but for the transition state the reaction coordinate is excluded, because its frequency would be imaginary. And then, speaking of chemical reactions, obviously chemists use catalysts to reduce the activation barrier, and a typical example of catalysis is heterogeneous catalysis. Here we can see a snapshot of how, let's say, a heterogeneously catalyzed reaction looks. We have some gas-phase species; they adsorb, and then they diffuse on the surface; at a given point they can meet, and they can react or not; then some new product is formed, which diffuses, and eventually it should desorb if you want to use this product. And here you can see this on a kind of potential energy profile along the reaction coordinate. More or less all of these steps are activated in a sense: adsorption can be activated or not, it can even be without a barrier; diffusion is also characterized by a small barrier, when an adsorbate species jumps from site to site on the surface; then we have the bond breaking and bond making when we make the product species; the product diffuses, and desorption is also characterized by a barrier. Now, obviously the potential energy surface is highly multidimensional, but the most we can plot in the computer is a 2D potential energy surface, which is plotted here, and I will use such 2D potential-energy-surface plots quite a bit during this talk. And here we have another important word that will be used during the talk: the reaction path. For example, this black dashed line would be an arbitrary reaction path from, let's say, the initial state to the final state, and this bold black one is the minimum energy path.
This is the reaction path that is the most probable at zero temperature, and now you can really see the saddle point here. This point, the saddle point, is called the transition state. Typically in the literature what you see is a plot like this, which is a projection of this, let's say, minimum energy path along the reaction coordinate, and so we have again the initial state, transition state, and final state. Remember these, I will use them a lot. And then the activation energy and the energy of the reaction. The acronyms for the initial state, transition state, and final state are written here: IS, TS, and FS. Of course, saddle points are unstable configurations: a saddle point is basically a minimum in all directions except one; along the reaction coordinate it appears as a maximum. This of course makes it a computationally more difficult task to locate a transition state than a local minimum. Here is an example: this would be the energy profile along the reaction coordinate, and these points are images, different snapshots. Here we see the initial state: this is a nitrogen molecule adsorbed on the surface, here perpendicular; then it tilts down and becomes horizontal, and such a movement has some barrier, and this here is the transition state. The barrier is not large; in this case it is about 0.3 eV. Now, why is this activation energy so important? I already gave a partial answer: because it is a very good criterion to tell whether a given activated process or a given chemical reaction is kinetically feasible at a given temperature, or how fast it is at a given temperature. We can see, looking at the Arrhenius-like equation, that the reaction rate constant is exponential in the activation energy and linear in the frequency pre-factor.
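In LaTeX form, the Arrhenius-like rate and the harmonic transition-state-theory pre-factor described above read (with the $3N$ normal modes of the initial state in the numerator and the $3N-1$ real modes of the transition state in the denominator, the imaginary reaction-coordinate mode being excluded):

```latex
k = \nu^{*}\, e^{-E_a/(k_B T)},
\qquad
\nu^{*} = \frac{\prod_{i=1}^{3N} \nu_i^{\mathrm{IS}}}{\prod_{j=1}^{3N-1} \nu_j^{\mathrm{TS}}}
```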
And because of this exponential dependence, the activation energy really is a good criterion. For the frequency pre-factor, a common value that can be assumed is 10^13 per second. This is a typical value, but it is known that frequency pre-factors depend on the reaction type; for example, for desorption it would be larger, something around 10^16 per second. If you are interested in typical pre-factors for various reactions, you can look at this literature, because Zhdanov has tabulated typical frequency pre-factors for various reaction types. Let me also say that the frequency pre-factor can also be calculated, and you will learn on Friday how to calculate phonons; but today this talk is just about the activation energy. OK, now let us do an example. We have a molecule on the surface whose desorption barrier is 0.6 eV, and we can ask ourselves: what would be the average residence time of the molecule on the surface before it desorbs, let's say at 300 kelvin? If I plug in the numbers, I get about a millisecond with a desorption energy of 0.6 eV. If I increase the desorption energy to 1 eV, then the answer would be on the order of thousands of seconds. This tells us that, for example, if a molecule is to persist on the surface at room temperature, it should adsorb more strongly than 1 eV. Now, another important term today is "image", and this is jargon. What does this "image" mean? An image is a given snapshot, a given configuration of the whole system, and whenever you see a point in my presentation, this point does not represent an atom; it represents an image of the whole structure. Now, this is the cartoon that I already showed: this point would be the initial state, so a configuration of the whole structure; this point is the final state, again a given configuration of the whole structure. So maybe, why the name "image"?
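The residence-time estimate worked through above can be reproduced in a few lines; the 10^13 s^-1 pre-factor is the generic value assumed in the talk:

```python
import math

# Residence time tau = 1 / (nu * exp(-E_des / (kB*T))) from the Arrhenius-like
# rate; nu = 1e13 1/s is the generic pre-factor assumed in the talk.
KB = 8.617333e-5          # Boltzmann constant in eV/K

def residence_time(e_des_ev, temp_k, prefactor=1e13):
    rate = prefactor * math.exp(-e_des_ev / (KB * temp_k))
    return 1.0 / rate     # seconds

print(residence_time(0.6, 300))   # ~1e-3 s: about a millisecond
print(residence_time(1.0, 300))   # ~6e3 s: thousands of seconds
```

The jump from milliseconds to hours for a 0.4 eV change in barrier is exactly the exponential sensitivity the talk emphasizes.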
Maybe it comes from the movie industry, I don't know. If you imagine a film strip, the film strip consists of various images, and each image here is a given snapshot, a given configuration of the whole structure. In this particular film strip we can see some bipyramidal ion: here we can see the two apex atoms of the bipyramid, and this bipyramid is rearranging; at the end you see that these two apex atoms became basal atoms. This was calculated by NEB, and it is characterized by some small barrier. So remember: an image is a given configuration of the whole system. Now, obviously the potential energy surface is highly multidimensional; if we have N atoms, it is 3N-dimensional, but for obvious reasons, when visualizing, the most we can do is a two-dimensional potential energy surface, and this will be my model potential energy surface for today. Remember, each point here is an image, and it represents a given configuration of the whole structure. I will also use this R: this R vector is not the position of a given atom; it is a 3N-dimensional vector of the whole structure, of all coordinates of all the atoms. Now, in the preceding talk you heard about optimizations, so you heard how to find local minima. Imagine that we have a given configuration of a structure, which is located on the potential energy surface somewhere here. Doing minimization, we will find this local minimum, because the methods that Pietro presented are methods that find the nearest local minimum; for example, for this point we would probably find this minimum. Of course, finding local minima is today more or less routine, provided that the code can calculate the first derivatives; it can be done by steepest descent or any other more efficient method that Pietro presented in the preceding talk.
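The steepest-descent idea just mentioned can be sketched on a 2D model surface; the Himmelblau-type function below is a standard optimization test surface, not a physical PES, and the step size and threshold are illustrative choices:

```python
import numpy as np

# Steepest descent on a 2D model PES: follow -gradient with a small step
# until the force norm drops below a threshold. The function is the
# Himmelblau test surface, used here only as a stand-in for a real PES.
def grad(r):
    x, y = r
    return np.array([4*x*(x**2 + y - 11) + 2*(x + y**2 - 7),
                     2*(x**2 + y - 11) + 4*y*(x + y**2 - 7)])

def steepest_descent(r0, step=0.01, fthr=1e-6, maxiter=10000):
    r = np.array(r0, float)
    for _ in range(maxiter):
        g = grad(r)
        if np.linalg.norm(g) < fthr:   # converged: forces ~ zero
            break
        r -= step * g                  # move downhill along -grad
    return r

r_min = steepest_descent([2.5, 1.5])
print(r_min)                           # the nearest local minimum, ~ (3, 2)
```

Note that, exactly as said above, the method only finds the *nearest* local minimum: a different starting point can converge to a different one of the surface's minima.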
Now, finding a saddle point is more difficult, both from a practical point of view, because you need to set up, let's say, a more sophisticated input, as well as technically. When I was a student starting to do simulations, NEB was not yet invented, and at that time what we used was a method which can be described as a slowest-ascent method: it was a series of constrained optimizations. It consisted of two main parts; it was a kind of stepwise stretching of the bond: you stretch a bond a bit, you fix the bond length, and then you do a constrained optimization, and then you repeat the process. Now, for some reactions this may work well, but for other reactions it can fail badly, and here we see a snapshot where it fails badly: if this is the bond length and this is another degree of freedom, and I stretch the bond, do a relaxation, stretch the bond, do a relaxation, and continue like that, you see I would completely miss the saddle point. A method that can deal with such problems is the method named the nudged elastic band, and here are the references for the NEB. The main idea of the method is the following: we take two local minima, let's say the initial state and the final state, and we connect these two minima with an elastic band. The recipe is that we relax the band with orthogonal forces. What I mean by orthogonal forces is pictured here: I have this elastic band, a given point, and this is the tangent to the elastic band; then this would be the perpendicular force component and this the parallel force component. So here I am optimizing, relaxing this band with these, let's say, orthogonal forces, and this is the equation for how this is done. When I do that, what goes on is something like this: the band starts to relax. We also assume that this elastic band is super-stretchable, so there is no corner cutting.
Sooner or later we will find the minimum energy path, which is characterized by zero perpendicular force components, and then this point would be the saddle point. But obviously this was only a thought experiment, because in the computer, for obvious reasons, the elastic band needs to be discretized into several images. I then have several images instead of a continuous elastic band, and I relax these images with the forces that are perpendicular to the reaction path. But there is a problem with this path discretization. Imagine here I have this reaction path; these are images, for example, let's say the initial state. Suppose that the perpendicular force components are pointing in the directions plotted in this figure; then in the next iteration the reaction path would be something like this, and now you can see that the inter-image distance is not conserved: you have this L double prime, which is longer than L prime, which is longer than L. And this can cause severe problems. In one iteration, maybe this is not a large difference in inter-image distance between different images, but after, say, 15 iterations we can end up with a catastrophe. So the solution is to connect the images by springs, and this is plotted here, and we allow these spring forces to act only along the reaction path. This would be the equation for the spring force: basically, we have this tangent vector, which is along the reaction path, and we evaluate the inter-image distances. Here you can see that the distance between these two images is larger than this one, so we have a larger number minus a smaller number, times the tangent vector, which means that the spring force will go in this direction and will try to keep the images equidistant. OK, so now we are ready for the whole NEB recipe.
It goes like this: connect two minima with a guessed reaction path, which can often be linear; discretize the reaction path into several images; connect the images by springs; and then minimize the reaction path using the NEB recipe. The NEB recipe is the following: the true forces, that is, minus the gradient of the potential energy surface, are allowed to act only perpendicular to the reaction path, whereas the spring forces are allowed to act only along the reaction path. This is called nudging, and this is why the method has the name nudged elastic band. Here is the equation for the force acting on a given image: we have the true force, which acts perpendicular, and the spring force, which acts parallel, and this would be the equation for the two components. Then it turns out that the choice of the tangent, the tangent being basically the direction of the reaction path, is crucial for convergence, and a good choice is a local tangent taken only with respect to the adjacent image that has the higher energy. Here is the equation: if the energy of image i+1 is larger than the energies of images i and i-1, then I just take the difference between image i+1 and image i, and then of course I normalize the tangent. Now, the most important point on the whole reaction path is the transition state, which is the saddle point on the minimum energy path, so one would hope to have a higher resolution of points around this transition state. One can do that with variable spring constants, which means that the higher the energy of an image, the stiffer the spring constant it has; this makes the density of images near the transition state higher. OK, but even with that, and even if I use many images, I still may not be close to the saddle point.
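As a toy sketch of the nudging just described (not the neb.x implementation), here is one force evaluation for an interior image; for simplicity it uses the symmetric tangent between the two neighbours rather than the energy-upwinded tangent recommended above, and the 2D surface is made up:

```python
import numpy as np

# One NEB force evaluation for an interior image i, illustrating "nudging":
# the true force acts only perpendicular to the path, the spring force only
# along it. R_prev, R_img, R_next are 3N-dimensional configuration vectors.
def neb_force(R_prev, R_img, R_next, true_force, k_spring=1.0):
    tau = R_next - R_prev                      # simple tangent estimate
    tau = tau / np.linalg.norm(tau)
    f_true = true_force(R_img)
    f_perp = f_true - np.dot(f_true, tau) * tau          # perpendicular part
    f_spring = k_spring * (np.linalg.norm(R_next - R_img)
                           - np.linalg.norm(R_img - R_prev)) * tau
    return f_perp + f_spring

# Toy check on a 2D surface V = x^2 + 4*y^2 (true force = -grad V):
F = lambda r: -np.array([2 * r[0], 8 * r[1]])
f = neb_force(np.array([0., 0.]), np.array([1., 1.]), np.array([2., 2.]), F)
print(f)   # here the images are equidistant, so only the perpendicular part survives
```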
So the solution is the climbing-image method, where we let the highest image climb up the reaction path, and climbing up the reaction path means inverting the force component that is parallel to the reaction path; of course, this climbing image is decoupled from the springs. Here is the equation for how this is done: we basically invert the component parallel to the reaction path, and so the image climbs up. OK, there is another aspect which is very important: the initial guess, and the default is linear interpolation. Now, it is unfortunate that in this picture, which is a simplified cartoon, the linear interpolation really passes very close to the saddle point; in reality, in real examples, this is seldom the case. In Quantum ESPRESSO we can use something that is called an intermediate image, and an intermediate image is used to steer the initial guess of the reaction path: we use chemical intuition, and then we can put, for example, two intermediate images here, and our initial guess for the reaction path would be something like this; then of course the image with the highest energy, which in this case is this one, would climb up the potential energy surface. Now let me quickly go to the structure of the NEB input file in Quantum ESPRESSO. It consists of two parts: one is specific to NEB, which is the path input, where we specify the variables associated with the NEB; the other is the engine input, and these keywords are PW.x-specific. The path input consists of the PATH namelist, and there can also be a CLIMBING_IMAGES card, but only when the climbing-image scheme is set to 'manual'. Now, here I have a bit longer input with some variables; suppose you are interested in what a particular variable means: what we do is go to the NEB input description, click on the given variable, and we get the explanation. And now comes the engine part.
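As a quick aside before the input details: the climbing-image force inversion described above amounts to removing the parallel true-force component twice, which can be sketched as:

```python
import numpy as np

# Climbing-image force: the highest image feels the full true force with its
# component along the tangent inverted, and is decoupled from the springs:
#   F_CI = F - 2 (F . tau) tau
def climbing_force(f_true, tau):
    tau = tau / np.linalg.norm(tau)
    return f_true - 2.0 * np.dot(f_true, tau) * tau

f = climbing_force(np.array([3.0, -1.0]), np.array([1.0, 0.0]))
print(f)   # the parallel component is inverted, pushing the image uphill
```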
Basically, here we have the PW specification and the mandatory cards and namelists that are always there, and then comes the specification of the images, which is slightly different than in PW.x: it starts with BEGIN_POSITIONS, and then we specify the FIRST_IMAGE and the LAST_IMAGE, which are mandatory, but we can also specify INTERMEDIATE_IMAGEs, and there can be one or many of them. Now, here is the explanation of some variables of the NEB input. There are two string methods: one is NEB, which is the subject of today; the other is SMD, and please don't ask me about SMD, because I have zero experience with it, I never use it. One of the most important parameters is the number of images, so how many images we use. Then there is path_thr, which is the convergence threshold on the forces, and the number of steps for the optimization. In NEB we do iterations: in iteration one we calculate all the images, then we evaluate the forces, and then we do iteration two, where we again calculate all the images; this nstep_path means how many iterations there are. The optimization scheme is which optimizer we use for the NEB recipe: it can be Broyden, quick-min, or steepest descent; Broyden and quick-min were already described by Pietro, and for quick-min, which is a damped dynamics, we need to specify a time step. Then we have the climbing-image scheme, which can be no climbing image; 'auto', which means that the code will pick the image with the highest energy and treat it as the climbing image; or manually specified, in which case we need to provide the CLIMBING_IMAGES card. Then we can specify variable spring constants, and we can use freezing. What this use_freezing is, it is best to show by example, and this is shown in the next slide.
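Purely as an illustration of the layout just described, a minimal neb.x input might look as follows; the system (H2 stretching in a box), pseudopotential file name, cutoff, and geometries are placeholders invented for this sketch, not values from the talk:

```
BEGIN
BEGIN_PATH_INPUT
&PATH
  string_method = 'neb'
  num_of_images = 7
  nstep_path    = 100
  opt_scheme    = 'broyden'
  CI_scheme     = 'auto'
  path_thr      = 0.05
  use_freezing  = .true.
/
END_PATH_INPUT
BEGIN_ENGINE_INPUT
&CONTROL
  prefix     = 'h2_stretch'
  pseudo_dir = './pseudo'
/
&SYSTEM
  ibrav = 1, celldm(1) = 12.0, nat = 2, ntyp = 1,
  ecutwfc = 30.0
/
&ELECTRONS
/
ATOMIC_SPECIES
H  1.008  H.pbe-rrkjus.UPF
BEGIN_POSITIONS
FIRST_IMAGE
ATOMIC_POSITIONS (angstrom)
H  0.00  0.00  0.00
H  0.75  0.00  0.00
LAST_IMAGE
ATOMIC_POSITIONS (angstrom)
H  0.00  0.00  0.00
H  2.50  0.00  0.00
END_POSITIONS
K_POINTS gamma
END_ENGINE_INPUT
END
```

An INTERMEDIATE_IMAGE block with its own ATOMIC_POSITIONS card could be placed between FIRST_IMAGE and LAST_IMAGE to steer the initial guess, as discussed above.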
So here we see iteration 10: it calculated image two, image three, image four, image five, image six, image seven; there were basically eight images. The climbing-image scheme was 'auto', and you can see here "climbing image = 4", because it has the largest energy. And here you can see this "frozen" flag: you can see that these three images have a much larger error than the rest of the images, and so they are not frozen. This is use_freezing: it is more convenient to first optimize the images that have the larger error, those that are farther away from the minimum energy path, so in the next iteration the code does not calculate all of them, but just those with the largest error. And then in the next iteration this image has a much larger error than the rest, and only this one will be optimized in the following iteration. OK, now, NEB calculations may be relatively difficult to converge, so experience is a plus, and I would like to give some practical guidelines based on experience. It is beneficial to use chemical intuition to guess a reasonable initial reaction path. This entails that you should start from the relaxed initial and relaxed final states, so that you know the geometry of your products and reactants, and then it is a good idea to use intermediate images, which is where this chemical intuition can be exploited. That is also why the first_last_opt option is discouraged, because it means optimization of the first and last images on the fly, during the NEB run itself. And then, the use of intermediate images: imagine, as an example, that we want to dissociate a molecule on a surface; this would be the initial state and this would be the final state, and if we do linear interpolation, it would be something like this.
But this is not chemically sensible, because we know that the situation will be something like this: the molecule will first come close to the surface, and then it will start dissociating. If I use such an intermediate image, I steer the initial guess, and this would be a much better initial guess than this one. OK, then many times the question pops up: how many images should I use? The answer is: it depends, but usually an inter-image distance in the range from 1 to 2 bohr should be OK. To get an idea of the inter-image distance, you can do a dry NEB run by setting nstep_path equal to zero, and then the code will just print the inter-image distances for you; you see, it takes nothing to do a dry run. Sometimes there is the question: what is this inter-image distance? Basically, it is a distance in 3N-dimensional space; it's as simple as that. It is also a good idea to visualize the reaction path before you do a calculation, and this is how you can do it. And it is always good to start, because it is easier from the user perspective, with just one elementary step, just one saddle point, even though in principle we can study several saddle points simultaneously. Use_freezing is beneficial. Then another question: which value of the spring constants? It is not that important; the defaults are just fine. Another issue is with the climbing-image method: do not activate the climbing image from the start, because before the reaction path gets close to convergence, it can happen that in one iteration one image has the highest energy and in the next iteration it is another image that has the highest energy; this can oscillate back and forth, and that is a recipe for catastrophe. So the idea is that you first relax the reaction path a bit, and then you plug in the climbing image. And this would be a PWTK workflow for how this can be done.
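To make the inter-image distance mentioned a moment ago concrete: it is just the Euclidean norm of the difference of two 3N-dimensional configuration vectors, here evaluated along a linearly interpolated toy path (the two-atom geometries are made up, not from the exercise):

```python
import numpy as np

# What a "dry run" (nstep_path = 0) reports: the inter-image distance, i.e.
# the Euclidean distance between consecutive images in 3N-dimensional space,
# here for a linearly interpolated path between two toy 2-atom geometries.
first = np.array([[0.0, 0.0, 0.0], [0.75, 0.0, 0.0]])   # initial state
last  = np.array([[0.0, 0.0, 0.0], [2.50, 0.0, 0.0]])   # final state
num_of_images = 8

# Piecewise-linear initial guess between the two endpoints:
path = [first + (last - first) * i / (num_of_images - 1)
        for i in range(num_of_images)]

dists = [np.linalg.norm((b - a).ravel()) for a, b in zip(path, path[1:])]
print(dists)   # all equal for a linear path: (2.50 - 0.75) / 7 = 0.25 each
```

If these distances came out much larger than the 1-2 bohr rule of thumb above, one would add images; if much smaller, remove some.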
So first do no climbing image up to a given threshold, then restart the calculation with the climbing-image method plugged in, and then go to full convergence. And the last thing: when you specify the atomic structures, do not swap the atoms; the atom indices should be the same for all specified images. The only way to demonstrate what I mean by that is to show you a real example, and this is here: this is the example of today's exercise, a dissociation of a molecule above the surface. This is how it goes: this is the initial path, and now all the indices of the atoms are OK. Here I have the input file, and now suppose I swap two atoms. What would happen is this. You see? And this, of course, is not a process that you want to simulate. So this is what I mean: do not swap the atoms, keep the indices of the atoms consistent for all the structures. So that's all, and I think it's time to try NEB during the hands-on session. Thank you very much for your attention. Thank you, Tone, thank you very much for this very nice presentation with a lot of hints, and it was a good idea to put some self-questions and self-answers there. Actually, some questions were already addressed in your last slide. So, is there anyone from the audience who wants to raise their hand to ask a question? There is one from Dorye Esteras. You can ask, please. Hello, thank you for your wonderful explanation. Also, I forgot to thank Pietro Delugas for his excellent talk as well. For me, to see how a minimum energy path between two local minima can be established is something really exciting.
So you mentioned that in the case of chemical modifications of the structure, but I'm really curious if this could be done in something more physical, as for example materials which have different phases under pressure, or, maybe this is a little out of the topic, but in the case of materials that have different magnetic transitions, because it would be really beautiful to see, for example, how a ferromagnet could be transformed into an antiferromagnetic state in these pictures. This is a tough question. I don't know whether I'm able to answer it, but maybe it's related to something that Pietro mentioned before. For sure, I mean, I have never done this, so I do not have the idea how I would do that. I don't know. So maybe, Alessandro, you are an expert in magnetism. Well, here actually, I don't think there is a method for the NEB applied to magnetic systems, but in some way you can simulate it by using some constrained magnetic moment approach. So in principle you can locate your initial and final states and you can build a path which connects them, a path in spin space, so you can try to rotate the magnetic moments by forcing them in some specific directions, and then you can calculate the energy along this path. In this way, you may have some kind of transition path. Of course, I'm not sure if you can get the lowest one along the path, but it's anyway an energy profile that you are going to explore. Okay, thank you. I'm sorry, I know that this was a little out of the topic, but for me it would be wonderful to see, for example. We can discuss more in the next session for the magnetism. Thank you. Okay, so let me see. I think there are some questions from the streaming. We have some more minutes. So: any difference between activation energy and energy barrier in calculations? Can we use optimized intermediate images to refine transition states? And can the intermediate images to be used be obtained by a dry run? No, intermediate images cannot be obtained by a dry run.
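As an aside on the dry run just mentioned: what a dry NEB run prints is the inter-image distance, which, as the speaker said earlier, is simply the Euclidean distance in 3N-dimensional configuration space. A minimal sketch of that definition (the function name and the two toy images are made up for illustration):

```python
import math

def inter_image_distance(image_a, image_b):
    """Euclidean distance between two images in 3N-dimensional
    configuration space: sqrt(sum over all atoms of |r_a - r_b|^2)."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(image_a, image_b)
                         for a, b in zip(ra, rb)))

# Two hypothetical 2-atom images (coordinates in Bohr):
# the second image has both atoms displaced by 1 Bohr along z.
first = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
last = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0)]
print(inter_image_distance(first, last))  # sqrt(2), about 1.414
```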
So you have to, like I showed in the example with the dissociation of a molecule, you need to explore with your chemical intuition. You need to have the idea of how the reaction, how the progress of the reaction would go. And then you just construct these intermediate image structures. Now, of course, an intermediate image is not a local minimum. Maybe perhaps you can do a constrained minimization, but I don't think this is really something worth doing, because anyway the intermediate image will be used just as an initial guess for the code to guess the reaction path. And then the code will generate these images for the first iteration step. It will do a piecewise linear interpolation, like I showed, right? And then these images will be relaxed by the NEB recipe. So I don't think it's needed that you do a separate, let's say, constrained relaxation of an intermediate image. Because it's used just for the initial guess, it's used to steer the initial reaction path, and then the images will anyway be relaxed using the NEB recipe. Okay, thank you very much for this explanation. So I think we can stop it here, and let's thank both the speakers again, Pietro and Anton. So for those who are attending on Zoom, they can join the Slack channel. And then for the streaming participants, we will reconvene at half past ten. Thank you very much. See you soon. Thank you. Yeah, hello to everyone. Alessandro. Charlie, I'm here, I'm here. Hi, hi. Can I just try the sharing? You want me to try to share my screen, right? Yeah. So this morning, I see now the problem is that if you share your screen, we project it on the arcade. So there's all, okay, you can try now, I think. Yeah. Yeah. More radical things. Because the problem is that now these sessions are live streamed, right? Yeah. Yeah, yeah. So anyway, I mean, I'm just hearing and seeing. Okay, okay. But actually, because I opened the YouTube channel, I will see how it looks. Yeah, do it, do it. You can share.
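Picking up the piecewise linear interpolation mentioned in the answer above: when only the two endpoints are given, the code's first-iteration images are linear interpolations between them (with intermediate images supplied, the interpolation runs piecewise through those as well). A minimal sketch of the endpoint case, with made-up names and coordinates:

```python
def interpolate_images(first, last, n_images):
    """Linearly interpolate between the first and last image,
    returning n_images structures including both endpoints."""
    images = []
    for i in range(n_images):
        t = i / (n_images - 1)  # 0.0 at the first image, 1.0 at the last
        images.append([tuple(a + t * (b - a) for a, b in zip(ra, rb))
                       for ra, rb in zip(first, last)])
    return images

# One atom moving 2 Bohr along z, discretized into 5 images.
first = [(0.0, 0.0, 0.0)]
last = [(0.0, 0.0, 2.0)]
for image in interpolate_images(first, last, 5):
    print(image)
```

These generated structures are only the starting guess; the NEB recipe then relaxes them, exactly as described above.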
Consider that on YouTube, it takes two, three minutes to refresh... So it's a bit delayed. Yeah, because when I came this morning, I had to reboot this machine. It was stuck. I hope everything will go fine. So here it looks good. Yes. Yes. So, here we go. So, and then if I go with it, chop, chop. Yeah, so I can do a horrible thing. So I'm always sharing the window, not the whole display. So I can go to a different place to post something on Slack. I can go to different... Yeah, even on YouTube now, you are online. Yeah, I see. And it keeps the... So it seems to be working. I hope everything goes fine. I also noticed that this is a four-year-old Mac, I think a four-core iMac. If you don't mind, actually, you can leave this presentation on. So we have a screen saver for the streaming, otherwise we project ourselves in there. So if you leave it, it's better. Yeah. Give me. So I ran out of time to make a demo for the remote usage. So please... No, no, I'm totally fine with that. Sorry, no, I had to see where I find the buttons. No. So please tell them to pay attention to this, how to do the remote usage. Yeah. I checked it also myself this morning. So my plan would be... So if no one minds, I will quickly mention this. Also, the tutors are listening. So I looked at the presentations of the last days, and I decided that at the beginning I will try to lead the session so that we do the exercises, and when we come to the more complicated ones, then I would go quickly and let them do it with you tutors, the other... Yes, we also have to remind them that now we use the remote commands. Yeah. If somebody has a technical issue, instead of writing in the channel of the day, they should write on the channel of the cluster. Yeah. Okay. Yeah. Which is named cluster-dash-ictp, or, let's say, cluster-jsi. Yeah. Each of the three... JSI. Yeah. It's a good challenge, right? Yeah. So... You can see how it goes, eh? Yeah. You can say these things... Yeah, at the beginning.
Yeah, yeah. Yeah. Yeah. Ari. Yeah. Please make a remark that there are some specifics for the Arnes and SISSA clusters and... Yeah. I think it's better that... I tell it to everyone for all the clusters, because otherwise people might get a bit confused. I think it's better that they use the remote squeue command. And only when it's finished, then they copy everything, not that they need... Yes. Yeah. To make the... Because they were doing the copies to monitor the status of the job. Yeah. And I think this anyway, I mean, it slows down. It can slow down the network or something and... Yeah, it can also mess up the Slurm server, actually, but... I just noticed the remote squeue is funny, because... I may demonstrate it. So if I do a remote... Sorry. squeue. It goes to the background. Yeah, yeah. It looks like it's waiting for the output, but it is actually finished. Yeah. This is because it goes to the machine, it queries this and then it returns the standard output, yeah. Yeah, and it's in the background, right? It's forked, it doesn't wait for the return of the command. So this is just to... Not the prompt then, yeah. Press enter, yeah. I thought at first that it takes a very long time. They should press enter, yeah. They should press enter. Yeah. And then once the job is gone, then they will... And they will have it, yeah. So I think, I hope it's fine. Any other issues? And Tone also, and the tutors, if you have any issues during my presentation, please don't hesitate to interrupt if I forget something. So yeah, my idea was that maybe it's too confusing if we go directly to the remote commands, then we go to the local machine and then we go to the remote. So I thought that we just skip the first slides at the beginning. We do the first exercises on the virtual machine, like it was planned, and only vc-relax, the zinc scan, that's the first one to run remotely, right? Please take it away, yeah, please take it away, yeah.
So here, if they start, so these two jobs, and of course this one, then at this point we would explain the remote usage. So it's a bit going back and forth in the... Yeah, but it's fine, we are all here, so we'll follow up. Yeah, Tone, there are seven questions and answers on the Slack channel of today, if you can have a look. Okay, so everything works fine. So I leave the slides. A quick run to the restroom. See you in 10 minutes. Yeah. Yes, 10 minutes, yes. Ari, can you hear me? So, since I can use the channel, I see that someone is already running the calculations, the DFT-D3, dispersion forces, and so on. Where? I see on the Slack day three. So you're free to launch the jobs, but... Ari, I just wanted to ask, for the labs, if you can slow down a little bit compared with other sessions. A little, I mean, a little, not much, but... Because, I mean, just because everyone is listening, so... Anyway, there are all the recordings and stuff, so it's not a big problem, but... Yeah, so, I mean, to be honest, I have been a bit confused by the program. It's marked as two-hour slots for the exercises. So I don't know if it was meant that everyone has finished running the exercises in those two hours, and so that the tutors, everyone, is prepared to take time off after these two hours, because it's very short. Yeah, but we, I mean, we are still following after that. I mean, it's not a problem. Sure, and then... Not the last one, the extra video reservation is up to 4 p.m., so we have, like, four hours more. I think after, we can go into breakout rooms, and the people that have issues can join, so we can leave it open for one hour or two. So for those conditions... It's just, I got a bit confused because in the program it's written two hours.
So my plan was, indeed, to go slower and synchronized at the beginning, and then, when I see that the remote commands are working for everyone, then maybe spread out a bit more freely, and the tutors can help them more individually, so that it would be a bit more individual in the second half of the exercises, because it will anyway be running PWTK, and that was done already yesterday. Yeah, but that's perfectly fine. I mean, at the end, what we can do is that we can leave one Zoom link open, and, I mean, if you're following someone in the afternoon, we can join that link eventually. Yeah, so I hope that the tutors don't mind hanging on just a little bit longer. That's what I was most afraid of, that if everyone thinks, okay, two hours and then it's over, then everyone rushes, and it's a very short two hours, I think, at the beginning of the learning curve. So this is my experience, that usually I would need at least three or four hours, to have more time also to think and more time to look at the output and so on. So I will try to slow down a bit and... Should we start? Yeah. Yes, yes, yes, sure, ready? Okay, so welcome back. Now there is the hands-on session. So the speaker of this session is Ari Paavo Seitsonen, and he will introduce optimization, NEB and automatic workflows. In these sessions there are actually also other tutors, Matic Poberznik, Unmesh Mondal, Felice Conte and Tao Jiang, that will assist the participants with the exercises. There are some technical details, so I now leave the word to Ivan. Ivan? What? Okay. Okay. Do you want to say something about the Slack and... Yes, I wanted to say, for the people that are following on Zoom, that from now on in this session we start to use the cluster, so we start to use the HPC resources with these remote commands that you find in the script collection.
If you have a technical problem using these commands, please write on the appropriate Slack channel. So if you look at the Slack channels we created, there is one channel called cluster-ictp-argo, which is the channel for the people that have problems running on the ICTP resources. There is a channel called cluster-jsi-arnes, which is for those that are using the resources in Ljubljana. And then there is the cluster-sissa-ulysses channel, which is the channel for the people that will use the HPC resources at SISSA, okay? So if we behave this way, we can keep the requests directed to the correct channel, and we can keep the main channels clean for the scientific discussion. Okay, thank you. Thank you for the explanation, and I think, Ari, please. Thank you. Hello, thanks to everyone. Thanks to the organizers for giving me the opportunity to guide this session. Special thanks to Tone, who has been helping a lot with setting up these exercises. And I think it also originates quite a lot from the previous edition of the school. So I'm very thankful also to the tutors, and thanks to Alessandro for the introduction. So my name is indeed Ari Paavo Seitsonen. I'm working currently at the Ecole Normale Superieure in Paris, and I think my first encounters with Quantum ESPRESSO were maybe during my PhD time, which was last millennium, I mean, back when it was not yet a free code. So I've been working with it, or using it, quite a long time, but maybe in a different aspect than what you see today. So please feel free to post your questions. So as you have learned, please use the Slack channel, preferably the day-three one. And as Ivan mentioned, if you have any problems with the cluster, post on the channel of the cluster you are assigned to. In principle, every one of you has already set up the access to a cluster. We were testing that on Friday, day zero. If you have any problems, then please write on that cluster channel.
So I guess that the tutors are following those questions also during my presentation. So the topic, you have seen, is about structure relaxation and NEB. And then we will use some more PWTK also during this session. So the topics will be structure optimization. We start with relaxation of the atomic positions only. You will see the examples. Then we go also to the variable-cell relaxation. So those will be the easy ones, and this is what, I would say, most of the time people are doing. Then we go to the NEB method. And as Tone showed this morning, well, sorry, morning for us, maybe another time of day for you others around the globe, so what Tone was showing, how we can get the saddle points, reaction paths and so on. So we will have two very simple examples in this part. And then all along the way, actually already before, we will have examples on using PWTK. And we go more and more, let's say, also into the analysis towards the end. So example four will be about some more specific features of these exercises. And then before we really start, I hope everyone remembers to do the git pull. So just to make sure that you have the latest version. For example, this document was still modified yesterday, including this presentation PDF file. And if you had modifications in some input files, if the git pull complains, you can try these three commands. So if during day one or day two you had some changes, you might need to execute these commands instead of just git pull. Then you should be up to date. If you have any problems, then please report. So the next slides will actually be about something that we will use a little bit later. So I thought that we leave these, we don't go through them now, but in maybe half an hour. So first we go to the simple exercises, and then we will use these remote HPC clusters.
Like I said, I hope everyone was there on Friday; if not, then you managed to get in contact with someone, or you managed by yourself to set up the access to these three clusters. So you are assigned to one of the three clusters. So we hope that this works for you from the beginning. If not, then we will see with the tutors how to fix your problems. So we will come to these slides a bit later. The first exercises we will still run on the virtual machines, or, if you have set up everything, on your personal machine, and these are very simple exercises. These should be very fast to execute. So the first one is a derivative of something which looks like graphene, but we modify it a little bit. So if you have, well, sorry, I show you again, if you don't remember: you can go directly to the link on day three, you click, and, well, the hands-on is there. So you can open the file if you want. You can always, like Tone and others have explained, in every directory there's this README file, and then there are some other instructions here for the remote execution. So I have this PDF file, which I was showing, open already. So now we go to the first exercise. So if you click, you go into this directory in the file browser, you click on the right, you can open in terminal. So you come directly to this directory. You see, it's example one. So we start here. And as usual, you can start by looking at the structure of the input files. Now there are four input files, but we start with the graphene one. Actually, there's the plain graphene one, just for reference, but you can have a look at it later if you want. So I will now launch the code. I mean, I could reduce the dimensionality. Now you see what the difference is. If I don't reduce the dimensionality, then you see some atoms are above, some below, but maybe it is easier if, in this case, sorry, I go back and we reduce the dimensionality. Then it's clear for you. So I reduce the dimension to 2D, because it's a 2D system.
And in the third direction, there's supposed to be enough vacuum so that these layers are isolated from each other chemically. So this is the structure. I will make some copies of it just for the visualization. So the computational cell will contain just four of them, but the copies show how it looks. So this is the graphene plane. And as you probably all know, in graphene you have this sp2 bonding. Now, if you put hydrogens, you kind of put the hydrogen where there's this lone pair, the pz orbital, the out-of-plane orbital. And it is actually quite easy to understand that it is easier, or better, energetically preferable, to put the two atoms alternately. So you put every second of them above and every second below this plane of the atoms. Then we start from the flat plane of graphene, and then we will see what happens. So this will be the target: to optimize the structure in this exercise. So that was the structure. Now let's have a quick look at the inputs. I mean, by now you are almost experts in looking at the input. So now we have the calculation mode relax. So in this one, actually, I should mention that we are still keeping the lattice constants, the lattice vectors, fixed. These should be optimized. I see there's someone who is writing on the screen: so we should relax also the lattice constant, but to keep things simple, we don't do it yet. Normally, in almost all kinds of systems, you want to optimize the lattice constants, or the lattice vectors, let's say. So then, in the system section, we have a hexagonal system. So graphene, you have seen it already, it's a hexagonal one. So here we used the convention of using the celldm variables. So the first one gives the lattice constant. This is, in the hexagonal case, the lateral lattice constant, and this is in Bohr, atomic units. So this is actually something like 2.5 Angstrom, you might believe, if I remember correctly.
And then in the other direction, the perpendicular one, we have, well, hopefully enough vacuum. So this is actually c over a. So the length in the direction perpendicular to this plane is three times the value of the lateral one. It is probably not enough, but this is to keep the calculations fast enough. So we have four atoms, two types of atoms. You see, carbon and hydrogen. And then today we are not going to spend any time on optimizing the parameters. So we have chosen some; if you find various values for the cutoff energies, the k-points and so on, you are now experts in choosing them. I must say that these might be a bit too low for some of the exercises, or certainly too low for some exercises, again, to keep the calculations fast enough. So if you would repeat this for a real standard project, you would probably test all these and, like I said, converge the lattice constant and so on. Then we have two kinds of atoms. We are using the ultrasoft pseudopotentials, and we had the four atoms in the unit cell. And now we specify them not in the crystal coordinates, but in the alat coordinates. So all the coordinates are now in units of this value, celldm(1). So you would see that, in this case, well, this is how you represent the honeycomb lattice of graphene. So this value comes from one over square root of three. And then you would multiply it with the alat, and you saw it already in the inputs when we visualized it. So that gives the honeycomb lattice. And the positions are such that one of the hydrogens is just above the first carbon and the second hydrogen is just beneath the second carbon in the unit cell. So this gives this alternating pattern, and this actually also keeps the very high symmetry of the lattice. And so then we use k-points; like I said, we keep them fixed today. If I may mention some historical remarks, I mentioned my PhD. So actually I was studying the shifts of the k-points.
And now, in this case, in the hexagonal cell, it's, well, it's not obligatory, but it's much, much better to use the so-called no-shift. There was a historical discussion in the literature about why there should be no shift, and that's actually so that we have a more symmetric set of k-points. So this means that there's one k-point which is at gamma, and then with this no-shift you will have a symmetric set of the k-points. Now, there's vacuum in the third direction, so in principle it doesn't matter what you have between zero and one there. But if you would put here one, one, zero, then you would see that the k-points look a bit funny. So they will be symmetrized, but the idea is to have an equidistant grid, and that's achieved without the shift. So that's why we have zero, zero, zero, basically always when you have a hexagonal cell. If you want, you can check my PhD thesis, where I show how funny those look if you do the shift, I think. And then we use the 9x9x1 k-points. It's again maybe not enough for graphene itself; for the so-called graphane, where you have this hydrogen, it is less critical, because, as you will see, the electronic structure is different. So this is the input file, very quickly. So then you can follow the instructions either in the PDF file or, if you want more detailed ones, you can have a look in this file. So this is the README file for this exercise. And the first exercise is the commands to execute. And now, sorry, I forgot to mention: the idea is, now in the first exercise, we will synchronize. So we try to make sure that most of you, or every one of you, can follow the exercises. Then in the second half, after we have done this remote-cluster thing, we leave you more freedom. You can take your time and you can discuss with the tutors. I will of course be available. But at the beginning, we try to keep you synchronized.
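To collect the pieces of the input walked through above into one place, here is a hedged sketch of a relax input for such a hexagonal C/H system. The numbers and the pseudopotential file names are illustrative placeholders, not the exercise's actual file; the keywords are standard pw.x input:

```
&CONTROL
  calculation = 'relax'          ! optimize atomic positions; cell stays fixed
/
&SYSTEM
  ibrav     = 4                  ! hexagonal lattice
  celldm(1) = 4.7                ! a in Bohr (roughly 2.5 Angstrom); illustrative
  celldm(3) = 3.0                ! c/a: vacuum in the third direction
  nat       = 4
  ntyp      = 2
  ecutwfc   = 30.0               ! Ry; deliberately modest for speed
/
&ELECTRONS
/
&IONS
/
ATOMIC_SPECIES
  C  12.011  C.UPF               ! placeholder pseudopotential names
  H   1.008  H.UPF
ATOMIC_POSITIONS alat
  C  0.000000  0.000000   0.00
  C  0.000000  0.577350   0.00   ! 0.577350 = 1/sqrt(3), in units of a
  H  0.000000  0.000000   0.45   ! H above the first carbon (illustrative height)
  H  0.000000  0.577350  -0.45   ! H below the second carbon
K_POINTS automatic
  9 9 1  0 0 0                   ! no shift, as discussed for hexagonal cells
```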
So if you have any problems, please write immediately. And the tutors, well, me and the tutors, we can check if you have any problems. But this should be very simple at the beginning. So I hope we can keep everything synchronized. So here we did the execution of the commands. Sorry, maybe a bit bigger. So now we still run only with one core on the virtual machine, or host machine. You could, if you have enabled multiple cores, run even in parallel. Initially, I ran some of the examples, I think, in parallel on the virtual machine. It depends on which kind of computer you have. So I launched the calculation. It's probably very, very fast. So it's very similar to what you have done before. So here it says running on one processor. It means on one core, so there was no MPI, and it was not compiled with OpenMP multithreading and so on. So again, there are the standard outputs, or the normal outputs you see: the repetition of the most important variables here, and then the lattice vectors and so on, the k-points, the typical stuff. So something that you have seen before. So now, doing the relaxation, the output looks very, very similar to before. You see the system, the electronic structure converges in quite a few iterations. So it goes basically exponentially. So we need only eight iterations to reach the convergence. And then the eigenvalues, you see. And then after the first step, or the first self-consistency, you see these forces. So these are, in principle, something new. You might have seen the forces before, if they were enabled in the output, but now of course the code has to calculate the forces. And it was explained this morning how they are calculated, using these derivatives of the total energy with respect to the atomic positions. And the good thing, or one of the great things, with the plane-wave basis is that the forces come almost for free.
So, as a by-product, once you have the self-consistency, it's very fast to calculate the forces on all the atoms. So if you do a relaxation, those come pretty quickly, all the forces on all the atoms. And here you see already that the forces laterally are zero. In the hexagonal cell, the xy is the lateral plane, and these are zero. You could ask, well, how come they're zero? Exactly zero, to so many digits? Well, of course, the answer is maybe obvious to you: the symmetry is so high that there are no forces laterally. So the atoms in the graphene feel no lateral forces, because they are symmetric in this honeycomb structure. But we have vertical forces. And, in the Quantum ESPRESSO world, you know, these forces, it's also written here, are in Rydberg atomic units. So this would mean, roughly, that if you displace this atom vertically by one Bohr, one atomic unit, then you would gain, or you would lose, one Rydberg. And that's quite a huge force. But of course these initial forces depend on the initial geometry. If you start from something which is very close to the final geometry, the forces would be small. Fine. So here now it's written: we didn't specify anything in the input, and as Pietro explained this morning, this means that we will be using the BFGS optimization algorithm for the geometry. And it says, okay, now we have done one self-consistent cycle, and this is step zero of the algorithm, the displacement of the atoms using the algorithm. So after that, the atoms are displaced according to the forces, according to the BFGS algorithm, and we go to the next self-consistent loop. And now it converges, well, in seven steps. Again, we can look at the forces. You might notice, okay, the forces were 0.2 something in the first iteration, and now they're 0.1 something in the second; this was the second self-consistent cycle.
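Since the forces are printed in Rydberg atomic units (Ry/Bohr), a quick conversion to the perhaps more familiar eV/Angstrom can be sketched like this (constants rounded; 1 Ry/Bohr is roughly 25.7 eV/Angstrom, which is why a force of order 1 Ry/Bohr is indeed huge):

```python
RY_IN_EV = 13.605693          # 1 Rydberg in eV
BOHR_IN_ANGSTROM = 0.529177   # 1 Bohr in Angstrom

def ry_per_bohr_to_ev_per_angstrom(force):
    """Convert a force from Ry/Bohr to eV/Angstrom."""
    return force * RY_IN_EV / BOHR_IN_ANGSTROM

# The initial vertical force of about 0.2 Ry/Bohr mentioned in the output:
print(ry_per_bohr_to_ev_per_angstrom(0.2))  # about 5.1 eV/Angstrom
```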
And we have accomplished the first BFGS cycle, and we have the new atomic positions. Okay, and we would see that, okay, these forces are still too large, we cannot consider the structure converged. So that's the third iteration: the forces, again smaller, 0.0 something, and the new positions. And this continues: next one, next one, next one. And now actually the forces have reached the convergence criteria, which I mentioned here. And I think these were the default values. We can, of course, give these in the input. If you want stricter convergence criteria, you can change those in the input. And then we have this section, which you see now says something about final. These are the final coordinates, and these are the coordinates that are the end result of the relaxation. Now we see the positions of the atoms. We will use, sorry, XCrySDen again. We select this, now we use the output file, of course, and we had to modify the option from i to o. And again, I reduce the dimensionality. This is just for convenience, to see the structure. Now you see we have these four options. We can take the initial coordinates; this would be the same as given in the input file. We can use the latest ones; actually they are, in this case, the optimized coordinates. This is because the calculation has converged; that's because we have this final section with the latest set of coordinates. During the run, one would use this latest option if you want to see the latest one. But actually we can also have a look at all the intermediate ones. So we can choose the last option and we visualize this. And then you get this animation control. And it says six slides. So these are the different geometry relaxation steps, and you can also kind of display them as a movie. But now we just take them one by one. So it always starts from the initial geometry. So you remember, this was the flat one. We started from that one.
So if you click on the arrow, the single arrow to the right, it says next image. So this was the initial geometry, one out of six geometries. You click the next one, or the arrow to the right, so next image, and you get to the second one, the third one, and so on. And with this rightmost button, you come directly to the last current slide, which is the last one, six out of six. So this is the sixth geometry, and this is actually the optimized geometry. And what you see is that now the two carbon atoms are no longer on the same plane. And this is what you could also see from the atomic coordinates, the numbers. So they have been displaced vertically. And of course the hydrogens pull on the carbons. And to see what this looks like with a few more of these cells, it's in the PDF file. So every second of these CH pairs is going up and every second one is going down. Now the question will be, why is this? And I mentioned already something about the chemistry at the beginning, so sp3. If you think of sp3 hybridization, that's of course the tetrahedron. Again, in graphene you don't have anything above or below the plane, and the carbon takes the sp2 hybridization, and it's perfectly fine there. It's a semi-metal. Here we have sp3, so it's like silicon bulk would be. And so we have a somewhat out-of-plane corrugation, so that we get a bit closer to the geometry that would be in a tetrahedron. So around each carbon atom you have three. Let me see if I can zoom a little bit. So if you take, for example, this, just a second. So if you would take this carbon atom, for example, and look at how it's coordinated. So it has one, two, three carbons, which are slightly below. And those are the ones, by the translation symmetry, which are below the plane, where the hydrogen is below. Then, you remember, there were two carbon atoms in the cell.
So one had the hydrogen up, the other one down, and by the periodic boundary conditions, these are all three around the central carbon atom. And then there's one hydrogen above. So this gives the central carbon atom this kind of sp3 hybridization. So it's close to the perfect sp3 tetrahedron. And then, of course, if you would look at the same for the other carbon, except that it's below the plane, it has the same: it has three carbon atoms above the plane and the hydrogen beneath it. So this would also show up in the electronic structure. Now we are not going to look at that; that was not planned for the exercises. But if you remember, one difference to the input of graphene: now we actually have commented out the occupations. And this is because we know that this is a semiconducting system, or is it even an insulating system? The hydrogen atoms kind of saturate these carbon atoms. And instead of graphene, which is a semi-metal, you remember, where the gap just closes at the Dirac points, and you saw the density of states, here actually a gap opens up. But we are not going to analyze the electronic structure; if you want, you can do this later, you can plot the density of states and so on. Now, because we are not using the smearing occupations, in the outputs, just to quickly repeat: when we look at the occupation numbers, these are all occupied states; no unoccupied states are calculated. So we don't automatically get the band gap from this calculation. So you have to do a non-self-consistent calculation, or otherwise you have to include unoccupied states. So this is if you want, later. So now the idea is to do this relaxation, if you haven't launched the job yet. So we wait a while, and we hope that everyone has managed to run this example. So if you have not managed, let's say we give it two, three minutes, you try to launch this calculation if you haven't done it, and then you try to look at it in XCrySDen.
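As a small numeric aside to the sp3 discussion above: the ideal tetrahedral angle that the corrugated structure moves towards is arccos(-1/3), compared with the flat sp2 angle of 120 degrees:

```python
import math

# Ideal sp3 (tetrahedral) bond angle, in degrees: arccos(-1/3).
sp3_angle = math.degrees(math.acos(-1.0 / 3.0))
print(round(sp3_angle, 2))  # 109.47
```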
And please let us know on Slack, or you can also raise your hand in Zoom, if you have any problems or questions. So now let's take a couple of minutes and you can try to complete this calculation. Let me see, I'll try to follow. Does anyone have a question? If someone wants to ask a question, you can raise your hand right now. Yeah, now is a good time. If you have any questions or problems, because this is the first relaxation calculation, if you have any doubts, you can ask now. We'll use a couple of minutes for this. And then we go to the next exercise. Of course you can keep asking questions, but I think now is a good time to launch the calculation if you haven't done it. If you have any problems, please report them so that we remain synchronized at the beginning of the session. Alessandro, I think you can keep track. Yeah, I will check. Okay, nothing in the Zoom chat. Anything on Slack? There's the answer; thanks, Felice. Felice answered about XCrySDen. Like I said, you launch it as usual, and the first question it asks is: do you want to reduce the dimensionality? Here we can do it. You can keep running the example; I'm just explaining in the meantime for those who have already finished. Initially it might be a good idea; maybe I will show it here. We can take the initial geometry. Now I don't reduce the dimensionality, and I show the optimized coordinates. And now we can have a look. As I mentioned, it's not that interesting to look in this direction, but something was mentioned or discussed to some extent. The eternal question, I'll pose it this way:
Even if in the real world you have a two-dimensional system, a one-dimensional system, or a cluster, a zero-dimensional system, you need to give these lattice vectors in Quantum ESPRESSO. And then the question is: how much space do you give? It was discussed; some people take six, seven, eight angstroms maybe. And then the question is: is it enough? It was mentioned that, for example, if you have a dipole in your system, the convergence is much slower, so you might have to go to these algorithms which take away the dipole. Everything depends on your system. If you have some ions, like some people who have studied the adsorption of ions on graphene, then you have a very ionized atom; it creates a strong dipole if it is only above the plane, so you probably need much more space. It depends very much on the system: how delocalized the electrons are and so on. So here initially you can have a look and get an idea of how close the atoms are. I will not give any value; you always have to think about what is enough for your system. So you do the convergence check. Ari, there are some questions on the streaming; maybe we can take some questions now. So: if a geometry does not relax, can we reduce the upscale parameter? Yes, yes. So you can try to make the steps smaller. If your structure keeps taking many, many BFGS steps and it looks like it is not converging, then you can try, for example, to reduce this upscale parameter. He is also asking what reasonable values for this parameter are. Well, 100 is the default; I checked. It would often not even be needed, but most of the relaxation calculations that I have done have converged with this value. Again, it depends on the system; in some kinds of systems you might have better luck.
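As a hedged illustration of where this parameter lives in a pw.x input (upscale is a real &IONS variable; the value shown is just its documented default):

```
&IONS
  ion_dynamics = 'bfgs'
  upscale      = 100.d0  ! default; max reduction factor applied to conv_thr
                         ! as the structural optimization approaches convergence
/
```

A smaller value, as suggested in the answer above, makes this tightening less aggressive.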
Better luck means that you converge faster. Like I said, basically I have always managed to get a converged geometry. Whether it's a global minimum or a local minimum we cannot say, but BFGS has converged in most cases. And then it's more a question of how fast it converges. So you may need 10 steps, 20 steps; there are cases where I have needed hundreds of steps, and the default is 50. In the input there is a variable you can set to increase this. The other option is to make a restart, so you continue from the previous geometry. What I sometimes do is also re-initialize the forces and the whole BFGS history: I just copy the coordinates into a new input file without doing a restart. Because with a restart in the control section, normally it also restarts with the history of the relaxation, and what I do instead is say: I start from zero. This forgets the history. I don't know if that was clear. Okay, so I think some of the tutors can also answer in the streaming chat on YouTube, and some other tutors can answer the questions on Slack while you are doing the exercise, so that we can make the best of the question session. So I hope now everyone has finished this exercise. Again, a few haven't. Okay, anyway, someone is suggesting to slow down a little bit. So, is there someone who has had problems running this example? Please let us know, so that we know we remain synchronized. Just write on Slack immediately, "I still have a problem", and we can have a look together. Is there anyone? One person on the streaming was getting an error: bad option, P-W-U-I. Sorry, that's in the... well, I don't know what the step was. I'm writing here in the Zoom chat. Zoom, okay. No, I just reported it here. Okay. So let me see on Slack, day three.
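The two options just mentioned can be sketched as hedged pw.x input fragments (standard variable names; treat them as illustration, not the exercise's files). A proper restart keeps the BFGS history, while pasting the last coordinates into a fresh input forgets it:

```
&CONTROL
  calculation  = 'relax'
  restart_mode = 'restart'     ! continue the run, restoring the BFGS history

! Alternative: leave restart_mode = 'from_scratch' (the default) and simply
! paste the last ATOMIC_POSITIONS from the previous output into the new
! input file -- the relaxation then starts with a clean BFGS history.
/
```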
Well, maybe I will not start to read all the answers where the answer is already satisfactory. The latest one, from Vika Zulu: is there a good strategy for picking the energy convergence threshold or the force convergence threshold? Again, it depends on your system, I'm sorry to say. In some cases I might say: okay, I don't care about the energy convergence, I only care about the force convergence, or the opposite. So there's no general rule. You can say, okay, for most systems maybe one parameter is ten times the other; I think this is what I usually use. I don't have those values at hand. Usually I try to converge stricter than the defaults. And you will see later during the week, I think in the phonon calculations, that you have to relax very, very accurately, because a small error in the optimized atomic positions might lead to a very large error in the calculated phonon properties, phonon frequencies, phonon modes and so on. And just a general comment; by the way, I hope everyone else is finishing if you haven't done it already. A very general comment about these threshold criteria, which also held on day one and day two: everyone at the beginning asks what the value for convergence is, and I think one milli-Rydberg per atom was given, but the really tough answer is that it always depends. It depends on what you want to look at. If you want to look at phonon frequencies, it might be that the criterion doesn't hold; you have to go to stricter convergence in the cutoff energy, in the number of k-points and so on. The same holds for the forces: there's no general rule. Unfortunately, it takes quite a lot of experience to know how accurate you have to go.
Because, like I said, it's different for different quantities: if you just want to relax the geometry of, for example, an atom or a molecule on a surface, the rather loose default values are probably enough. But if you want to calculate the vibrational frequencies of that molecule, then you might have to converge the forces much better. The convergence of the energy and the forces is of course related, but there's no direct rule that it's one-to-one or one-to-ten or so. Unfortunately, it's mostly experience. So now I hope everyone has finished the first exercise. I see that someone has already been running the next exercise. So here was the short explanation; there's the image of the system, and here was the final geometry. It is a so-called buckled structure: two carbon atoms and two hydrogen atoms per cell, and these are the periodic replicas that you see in this image. Every second carbon atom is above the initial plane and every second one is below it. So this is what we have executed now. The next exercise is then to run a system with a bit more degrees of freedom, and that's studying the adsorption of oxygen. Sorry, of course, this is an input file. So this is the next system we're going to look at, and I reduced the dimensionality here because it's a two-dimensional system. In the previous example we put a hydrogen on each carbon atom; now we are looking at the adsorption of an, in principle, isolated oxygen atom on graphene. In this case we have already chosen this supercell; it's mentioned in the exercise sheets. We have chosen three by three, in both directions three by three unit cells. This is the so-called supercell. So instead of the two atoms that we had in the first case, in the case of pure graphene, now we have 18 atoms.
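As a hedged illustration of the two thresholds being discussed (these are the standard pw.x &CONTROL variables; the values are just the documented defaults, not a recommendation):

```
&CONTROL
  calculation   = 'relax'
  etot_conv_thr = 1.0d-4   ! Ry; energy change between relaxation steps
  forc_conv_thr = 1.0d-3   ! Ry/bohr; threshold on every force component
/
```

Both criteria must be satisfied for the relaxation to stop, which is why the speaker talks about tightening one, the other, or both depending on the target quantity.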
If you count: you have three times three simple unit cells, and there are two atoms per cell, so two times three times three gives 18 atoms. And then we have one oxygen atom, which is now above one of the carbon atoms. Well, sorry, it's bridging above the plane between two carbon atoms. This is the so-called epoxide, or epoxy, group: an oxygen with two bonds to the carbons, creating this carbon-carbon-oxygen triangle. And this is a model for graphene oxide. That would be a long, long story, but this is one of the groups believed to exist in graphene oxide. So as the example for today we study the adsorption of this oxygen and how it creates this so-called epoxy group on graphene. And we chose three by three. Again, it's maybe not enough. If you had to do a convergence check, instead of doing all the calculations in three by three you could start with a two-by-two supercell, then three by three, then four by four, and you would check when the parameters you want, for example for the adsorption, are converged with the size of the supercell. So you would be converging not only with respect to the k-points, but also the lateral dimension of the supercell. This is what you should do before starting, because the oxygen is now above the plane, so it will create a dipole; it will be partially charged, of course, since it is more electronegative than carbon. So it creates a dipole and is even partially charged, so there is this kind of one-over-r dependence on the distance between the periodic replicas and so on. Again, this we don't do today, but you should do it in a real project: you should be sure that these defects or adsorbates are far enough apart that you are converged.
Converged within some criterion, like some 10 meV of interaction between the periodic replicas. Okay, this is the system, these are the coordinates. And if you have a look at how the input looks, it looks pretty similar to before. Again, it's a relaxation calculation. Now we had to multiply the lattice constant of the hexagonal cell by three, because we have three unit cells. And at the same time we divide the third cell dimension, which is c/a, by three, because a was multiplied by three; so we divide this c/a ratio by three and the distance between the planes actually remains the same. And we have, as I mentioned, 18 carbon atoms and one oxygen atom, and two types, because we have carbon and the one oxygen. And I have tried to cover everything in the talk so far, but in case I missed it, as a reminder for the cutoff energy: if you have several different kinds of atoms, and you have checked the convergence for carbon and checked the convergence for oxygen, then you always have to take the highest cutoff among those elements. We know from experience that oxygen converges slower with respect to the cutoff than carbon, so you should choose the cutoff value which makes the oxygen converged. It's just a small side note, but here, for computational efficiency, these values might be a bit too low; you might want to check them in real calculations. Otherwise the input is more or less the same. Now we do use fractional occupation numbers, the smearing, because we are studying graphene. The oxygen disturbs the electronic structure, but as you can imagine, there are 16 carbon atoms which don't have the oxygen close by.
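The celldm bookkeeping just described can be sketched numerically. The numbers below are assumed placeholders (a plausible graphene alat and vacuum), not the exercise's actual values; only the multiply-by-n, divide-by-n pattern is the point:

```shell
# Hedged sketch of the supercell bookkeeping for ibrav=4 inputs.
alat=4.654       # celldm(1) of the 1x1 cell in bohr (assumed value)
c_over_a=4.30    # celldm(3) of the 1x1 cell (assumed value)
n=3              # 3x3 lateral supercell
# celldm(1) is multiplied by n; celldm(3) is divided by n,
# so c = celldm(1)*celldm(3), the interlayer distance, is unchanged.
awk -v a="$alat" -v r="$c_over_a" -v n="$n" 'BEGIN {
  printf "celldm(1): %.3f -> %.3f\n", a, a*n
  printf "celldm(3): %.3f -> %.4f\n", r, r/n
  printf "c stays:   %.3f = %.3f\n", a*r, (a*n)*(r/n)
}'
```
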
So probably the Dirac cone doesn't survive, because we break the symmetry a little bit by adsorbing the oxygen; the system will probably remain with a small gap, or it might even become metallic; it depends also on the initial geometry. Okay, so the coordinates: now we actually give the coordinates in angstroms, not in crystal coordinates, and not, like in the previous example, as a function of alat. Now it would be awkward to use alat, because from the simple unit cell, you remember, we took a three-by-three supercell, and we had to multiply the parameter celldm(1) by three, which changes alat. So we would have to change the coordinates correspondingly if we were using alat coordinates throughout. So now they are given as absolute coordinates in angstroms, and we have the 18 carbon atoms and the one oxygen atom. Now there's this remark: we should actually use three by three by one k-points, but it's a bit too slow on your machine, so we cheat a bit and use the gamma point only. It's certainly not enough. Yesterday you saw that we used nine by nine by one in the case of graphene. Now that we have a three-by-three supercell, you can easily verify, and it's also a good exercise on the folding of k-points when you go to supercells, that the corresponding k-point mesh is three by three by one. And this is what we should be using if we were consistent. But again, to be a bit faster, we use only the gamma point. Now, if you haven't done it already, you can run this example. Sorry, I said that: if you haven't done it, launch the calculation now, and if you have extra time, you can have a look at the output. Of course, the beginning is similar; now it does the self-consistent loop, and you see the system requires more steps for the initial convergence.
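The k-point folding argument can be sketched as follows: an n-by-n supercell has n-times longer lattice vectors, so an equivalent sampling of its smaller Brillouin zone is the original mesh divided by n in each folded direction:

```shell
# Folding of the k-point mesh when going from the 1x1 cell to an n x n supercell.
unit_mesh="9 9 1"   # mesh used for the 1x1 graphene cell
n=3                 # 3x3 lateral supercell
set -- $unit_mesh
folded="$(( $1 / n )) $(( $2 / n )) $3"   # third direction is not folded
echo "equivalent supercell mesh: $folded"
```
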
In the case of graphene, the previous example, we needed eight iterations; now it takes more. And now we actually also have some unoccupied states, because of the smearing, the broadening, and the Fermi energy; one sees that the Fermi energy is between those two values. Again, the gamma point alone is certainly too little, but it is what we can easily reach on these computers. And again, now we have this smearing contribution because we're using fractional occupations. The forces: now you see, and we could also have checked this at the beginning, that because the symmetry is now much lower, there are forces also in the lateral directions. There are no zeros anymore; the symmetry doesn't keep any of the forces zero. And we see that the vertical force on the oxygen is the largest one. These initial forces of course depend on the initial geometry you have given: they might be huge if you started with a very bad geometry, or they could already be very small if you managed to start very close to the final, converged geometry. So it shows the first forces; this was the first self-consistency cycle, and then this was the zeroth BFGS optimization step. And then the calculation continues; now we are at step seven. On my machine I started it some two, three minutes ago. So indeed, like it was mentioned in one comment on Slack, this example takes a longer time. It takes longer already because it has more atoms: for the same kind of atoms and the same cutoff, the computation time scales in principle with the third power of the number of atoms. So if you take a three-by-three supercell: there were four atoms in the first case, now we have 18, so let's say four times more atoms.
So if you had the same k-points, this would be roughly four to the third, so 50 to 100 times slower. Now we only use the gamma point, but because we have more atoms, the calculation is already heavy because of that. And, as you saw at the beginning, we are again only using one processor. So we are following the steps of the self-consistency. We can check the previous one: 12 steps, and it seems that between the old and the new geometry the total energy of the system is already changing very little, so we hope that we will soon be converged. Already 14 steps, but I think the forces are reducing. Usually BFGS needs more steps at the beginning, because it is further away from the minimum, and then it starts to learn about the system. Like Pietro said in the morning, it starts to learn how the potential-energy surface more or less looks, and then it can use this information, and the forces reduce faster towards the end of the calculation. These are the recent values of the forces; they keep going down and down and down, but there are still some forces which are too large for the convergence criterion. So in the meantime, while the calculation is running, just to make sure: has everyone managed to launch this calculation? Whether it has converged or not, do you have this calculation either running or finished? Please give a quick comment on Zoom or on Slack so that we know there are no problems running this example. And, okay, now it just converged; it needed 20 relaxation steps. Again, this depends on your system; it depends on the initial geometry, how far it is from the final one, and so on. So we can have a look at the structure again. Well, I go first to the two-dimensional reduction of the visualization.
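The rough slowdown estimate above can be written out explicitly. This is just a back-of-the-envelope under the assumed cubic cost model, ignoring k-points and prefactors:

```shell
# Assumed ~N^3 scaling of the computation time with the number of atoms N.
n_small=4    # atoms in the CH-graphene cell (2 C + 2 H)
n_big=18     # atoms in the 3x3 supercell with the oxygen
ratio=$(( n_big / n_small ))          # integer ratio: ~4x more atoms
slow=$(( ratio * ratio * ratio ))     # cube it for the cost estimate
echo "rough slowdown at equal k-points: ~${slow}x"
```

This matches the "roughly four to the third, so 50 to 100 times slower" figure quoted in the session.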
We can take an animation again if you want, or I go directly to the optimized one. So this is the latest, the last, 24th step, and this is the final optimized geometry. And as expected, far away from the central oxygen the planarity is more or less kept: you see that the carbons are still more or less in the same plane, but this epoxy group has been lifted somewhat out of the plane, maybe also the second neighbors, but less. So again, it's clear that the carbon doesn't want to keep the planar hybridization; it lifts a little bit to get closer to sp3, and then there are probably dangling bonds on the other side. But it is expected that the two carbons would lift with the oxygen. And now, if you want, you could have a look, for example, at the bond lengths between the carbon and the oxygen. If you choose the carbon and the oxygen, the bond length is 1.5 angstroms. That's a very typical bond length, and in this case of the epoxy group, by symmetry, the other bond length should be more or less the same. And the carbon-carbon distance is 1.45. We can check, we don't have the initial value, but far away it's a bit smaller. Of course, the coordination far away, or in perfect graphene, is sp2, so the bond lengths are a bit shorter: there is that hybridization and the corresponding electron density around the carbon atoms. And here the bond length has increased somewhat because there's now a fourth bond to the oxygen; that's why this carbon-carbon bond length is a bit longer than the carbon-carbon bonds without the oxygen. So that's quickly this example; has everyone finished it by now? I see there are also some questions about the chemistry. Tone has answered. And there's a good note from Josef Panis.
You saw that in the first case... sorry, I have some problem on my machine. Can you still hear me? Yes, absolutely, yes. Yeah, because I cannot even change desktops on my computer anymore; something has frozen on my machine, it's just very, very slow. So I cannot go back to the virtual machine anymore. Okay, there we go, sorry. A good remark. The first example, let's say the zeroth example, was graphene, which you also did yesterday. It is planar; there's no need for the dipole correction, there's no dipole in the vertical direction. As Tone explained yesterday, if you have two different kinds of surfaces, you will have a different asymptotic value of the potential on the two sides, and in principle you need this kind of correction. In the first example this morning, the CH graphene, we put hydrogen on both carbon atoms, and because of the symmetry there is again no dipole across the slab. But in this example today there is indeed a dipole; I mentioned that this oxygen gets polarized and so on. So in a really good calculation one would indeed use the dipole correction, placed between the two layers, so that the discontinuity, the step in the potential which is added by the dipole field, sits in the vacuum and does not directly affect the electronic structure. That's why we also need enough vacuum, so that we can put this step in the potential in the vacuum between the planes. And this again is what should be checked. Tone showed that example yesterday, and we saw that there was enough vacuum: the potential becomes really flat, then there's this step, the discontinuity, and then it goes to the other, the back side of the slab, where you have a different value of the potential. So indeed, to make a very good calculation here, one should use the dipole correction. A good note.
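A hedged sketch of how such a dipole correction is typically switched on in a pw.x input (these are the standard variable names; the placement values are placeholders that would have to be tuned so that the potential step sits in the vacuum region):

```
&CONTROL
  tefield  = .true.     ! add a sawtooth potential...
  dipfield = .true.     ! ...that compensates the slab's dipole
/
&SYSTEM
  edir    = 3           ! dipole along the third lattice vector
  emaxpos = 0.9         ! placeholder: position of the potential step (in the vacuum)
  eopreg  = 0.05        ! placeholder: width of the step region
/
```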
Any other comments or questions about this example? Okay, so we continue. Ari, if you have the chance to read the streaming, you might get some questions there, I don't know. Yeah. It's a bit hard to see which questions have already been answered, since they're not in threads; I see that Matij is answering. So it's a bit scattered there. No worries, no worries. So, any questions now? You can also raise your hand; we can stop here if anyone has an urgent question. If there's nothing, then we go to the next step. Now we go to the case where we also relax the lattice vectors. To be honest, also in the first case, and in almost all cases, we should relax the lattice constant. There is also this Pulay stress and so on, and one can also relax under an external strain. There are only a few cases where one doesn't need to relax the lattice constant, the lattice vectors, at the beginning. So here, as a reminder, sorry, now we go to a case which looks very similar in the lattice. That's the hcp structure of bulk zinc: an elemental crystal, and zinc crystallizes in the hcp structure. And hcp is exactly the Bravais lattice we used before: ibrav=4. You see here the value which we are going to use, ibrav=4, but now it's different in the sense that the c value is not for a 2D material; it's a bulk material. So in the third direction the c is really significant: we don't just choose it to isolate the layers from each other; in the hcp crystal each of the layers interacts also in this third direction. So now c is a real physical value, and you will see that in the structure databases and files there is a value for c as well. Otherwise it's quite similar; the lattice vectors are similar.
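A hedged sketch of the &SYSTEM geometry for such an hcp cell (standard pw.x variables; the numbers are placeholders in the right ballpark for zinc, not the exercise's values):

```
&SYSTEM
  ibrav     = 4        ! hexagonal / hcp Bravais lattice
  celldm(1) = 5.03     ! placeholder: a in bohr
  celldm(3) = 1.86     ! placeholder: the physical c/a ratio of bulk zinc
  nat       = 2        ! two atoms in the hcp cell
  ntyp      = 1
/
```

Note that here celldm(3) is a real structural parameter to be optimized, not a vacuum thickness as in the 2D examples.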
Like I said, we will be using ibrav=4, and there are the two independent variables, a and c. Now we optimize them both. And there are two rough ways to do this optimization. One is the manual way, which is noted here, and that's calculating the potential-energy surface. We will calculate several values of a and several values of c, and then we get a two-dimensional plot, and the optimal values are where the energy is lowest. With this approach, if you just calculate on a grid, we will probably not hit the exact value, but we could, for example, interpolate or find a more intelligent way of evaluating the exact point where the energy is minimal. And those are the values of a and c that we would have given the approximations in our calculation: the cutoff energy, the k-points, and most importantly the exchange-correlation functional. Again, the lattice vectors we get here do not correspond to the experiment. In most cases, even if you had the best-converged cutoff energy, the best-converged k-point set, the best possible pseudopotential, or even an all-electron treatment, you would not get the experimental lattice constant. And often these are not the values you want to get anyway, because there's the exchange-correlation approximation. You should get the values which are best converged with respect to your exchange-correlation functional, and if they are away from the experiment, well, that's because of the approximation. You should not say "oh, I have better agreement with the experimental values if I use, for example, a lower cutoff". No, I would call that cheating. You should just converge everything, and then those are the values. If you can exclude all the other sources of error, then, tough luck, that is simply your correspondence to the experiment.
And usually, well, in most of these cases we are using a GGA, and as has maybe been seen already, we usually end up roughly one percent larger than the experimental lattice constant. And, well, one percent is not too bad. There are also cases where one traditionally uses the experimental lattice constant for the real application calculations. Here we now try to determine the optimal a and c values in two ways. The first one is the manual way, and this is, as I mentioned, just sampling values of a and c, hopefully so that in both directions the final optimal value lies inside this two-dimensional grid, and preferably more or less at its center. There's already a ready-made script for this, which we can launch in a while: the PWTK script. You have already gained some experience with PWTK in the previous days. You can run this calculation on your local machine, it takes a couple of minutes, or we can use remote computing; we will now go to remote computing, and if you haven't tried that yet, we will start with this example. The second way is the variable-cell relaxation. That's almost the same as the calculations before, when we did the relaxation of the atomic coordinates only, but now the calculation method is changed from 'relax' to 'vc-relax'. And now we have to include the &CELL namelist, even if it is empty. And I will just show the first example. So the first way to calculate was this two-dimensional potential-energy surface. There we use SCF. Actually we could even use relax, but there's nothing to relax: these are high-symmetry positions, so the coordinates have zero forces by symmetry. There will be stress, but whether you do a relax or an SCF calculation, it will give you the same result, because the forces will be converged in the first step.
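The manual scan amounts to a double loop over a grid of a and c/a values, one SCF run per grid point. A hedged sketch (the grid values are assumed placeholders; the real exercise does this with a PWTK script, and the pw.x invocation is elided):

```shell
# Loop over an assumed (a, c/a) grid; each grid point would be one SCF run.
npts=0
for a in 4.9 5.0 5.1; do             # celldm(1) values in bohr (placeholders)
  for coa in 1.80 1.86 1.92; do      # celldm(3) values (placeholders)
    echo "grid point: celldm(1)=$a celldm(3)=$coa"
    # ...here one would generate an input with these celldm values and run pw.x...
    npts=$(( npts + 1 ))
  done
done
echo "total grid points: $npts"
```

The collected total energies then give the two-dimensional plot whose minimum (possibly after interpolation) yields the optimal a and c.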
But let's have a look at the input with the relaxation of the lattice vectors. So again, the calculation method has been changed to 'vc-relax', and we have included the &CELL namelist; that's basically all. One more side remark, which has probably been made already, but to make sure you remember it: now we are calculating the stress tensor, since the stress tensor is needed for vc-relax. This was explained this morning: it is the derivative of the energy with respect to changes in the lattice vectors. The forces were already a derivative; this is a higher-order derivative, and it actually converges slower with respect to the cutoff energy than the forces. So the easiest thing to converge as you increase the cutoff is the total energy, the next one is the forces, and the third one, unfortunately, is the stress tensor. So usually you have to go to a higher cutoff energy if you want to use vc-relax, and again, you have to test the convergence. We don't have enough time to explain that fully, but Pietro already mentioned that the last step will be done with a new basis set, re-initialized with the lattice vectors of the otherwise converged geometry. You will see that in a while. So this is the relaxation using the variable cell. And sorry, Tone, shall we launch this now on the remote cluster? Yeah, we can try it. So this will be, if you haven't tried to launch anything remotely yet: here are these two steps, the manual one, which is the PWTK script, and then the step where you directly run pw.x with this input, and both of these can be run on the remote cluster. So let's try our luck. If you don't remember anything from day zero, or if you joined a bit later, let's hope everyone has remote access; we will be checking that on the fly.
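The input change just described can be sketched as a hedged pw.x fragment (standard namelists; only the lines relevant to the variable-cell relaxation are shown):

```
&CONTROL
  calculation = 'vc-relax'   ! was 'relax'; now the cell relaxes too
/
&IONS
/
&CELL
/                            ! must be present for vc-relax, even if empty
```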
At the beginning, after the introduction slides, there were two short introductions to remote execution. The first one is the remote execution of the executables; here it's pw.x, and this is for the input using vc-relax, because that is a ready input for pw.x. And when you want to run PWTK, it's zinc-scan.pwtk, and you can run that remotely using the remote PWTK command. You run those just like before, but you don't provide any output file; that's not necessary, because the output will be created on the remote machine. So for a single input file, you would run remote_mpirun, then the executable, and now you have to use the -in option and then the input file. In a while you will see how this is done in practice. So that is running mpirun remotely. The second one is remote_pwtk and the name of the script. And in the output you will see in a while that the job gets launched on the remote cluster. On the next page there are some more options: if you have another kind of script, you can also use remote_dispatch, but we are not going to use it now. A very important point that Tone mentioned to me: on some of the clusters, I don't give any names here, there's an issue with the file system; it's tuned such that you'd better not touch any of the files, not even look at the output, not even copy them, before the job has finished. So we ask you: once you have launched your job with remote_mpirun or remote_pwtk, you use the remote_sq command, you can run it as many times as needed, and you only go to the next step, copying the output files (that's actually on the third page), after you have seen that your job has finished. These jobs will probably not take very long; half a minute or so at maximum, if I remember correctly. So they will be quite fast.
The next example will take, I think, something like two minutes, and please don't try to copy any of the files or log in on the cluster during that time; this might cause some damage to the files. So just be patient. Once you have launched a job, you can look at the input file with a text viewer, and every now and then you check remote SQ. If you still see this kind of line, it says that your job is there and running, because of the R; once it has disappeared, your job has finished, and then you can copy the data. So please remember this: be patient, execute remote SQ until your job has finished. It could be that, if you make a mistake at the beginning, the job takes only a fraction of a second on the cluster and then finishes immediately. Even then, what you do is copy all the output files, or even better, you just use the command rsync_from_HPC, a space, and then a period, a dot. This will copy all the files from the cluster to the current directory. So in short, you either execute remote MPI run or remote PWTK with the corresponding options, and then you check with remote SQ whether the job is still in the queue. Once it has come out of the queue, you copy all the files. And I think at this moment there is no need to copy the input files over explicitly; that is done automatically. So please don't execute the rsync-to-HPC command today; that is for your convenience for later. And we don't need remote sbatch today, I think. So let's try that. We come to the zinc example; sorry, I went a bit too far. So the zinc example, the manual way first. That's the PWTK script, this zinc file. So let's try: remote PWTK zinc-scan.pwtk, and we hope for the best that it's working now. Okay, there's the output. It says something like it submitted the files to the cluster; you don't need to care about this.
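Putting the workflow just described into one short recipe (these remote commands are the school's own wrapper scripts on the virtual machine, so the exact spellings here are my assumption and may differ on your setup):

```
# run these on the virtual machine, not on the cluster:
remote_pwtk  zinc-scan.pwtk           # launch a PWTK script remotely, or
remote_mpirun pw.x -in <input-file>   # launch pw.x directly; no output redirection

remote_sq        # repeat until your job (status R) has left the queue

rsync_from_HPC . # only then: copy all result files to the current directory
```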
And then it says it is submitting the job, using, well, actually not 20 processes but 20 cores. That's probably too much for this example, but you will see that the next example is heavier, so there it's better to use those 20 cores. Okay, we check: 30 seconds execution time. So it's running, because there's an R under the status, and it has been running 30 seconds. Now it looks like the output is hanging, but just press Enter; that's because the SQ job went into the background. So you just issue this command and then you can press Enter. It seems that everything is fine. It's running, it's one minute, still running. At this point, maybe you can report if you have any problems executing this. So this was the manual way, and I think, well, maybe it's better to wait until this has finished before we launch the other job. So it's one and a half minutes now. But there's this other job we'll be launching in a while. If you have finished this first step and had a look at the output, you can already go ahead and try it out. Ah, I almost forgot: the other way was this relaxation with the variable cell. This is now executed directly with MPI run; the executable is pw.x and the input file is this zinc one with vc-relax. This you can submit. I'd prefer that you submit only this one; I just put this here so that I remember. So let's check the queue: it's still running, taking more time than I remembered. But in the meantime, has everyone managed to launch either the MPI run job or the PWTK job remotely? Please raise your hands in Zoom, or write on the Slack. Following the stream, I see some interesting questions about the van der Waals functionals, and I will come to van der Waals functionals; this will come later during the school.
So for now we just stay with the GGA to keep things simple, but van der Waals is certainly a big issue, in particular with the rise of the 2D materials, between the layers, or with adsorbates on surfaces and so on; there you often have the van der Waals forces. So, anyone in trouble? Okay. I need to quickly check on Argo. And then one important note. I start to see a few jobs on the HPC cluster, so that's cool, good; it means the configuration should be fine. If you have problems, you can either write on the HPC/ICTP channel or even here, and we can open a breakout room if you have any issues. But I think quite soon I will continue with the example. If you're in a breakout room you might miss the following explanations, but your tutor can help you further, or you just follow the PDF file. And one thing I want to note, and I'll tell it to everyone because someone had this problem already this morning while doing the exercises: these remote commands you run on the virtual machine, not on the HPC cluster. The idea here was to simplify your life: you don't have to log in on the cluster. This is of course not the real life of a computational scientist; there you would probably have to fight with how to log in and so on. But we tried to make everything as smooth and simple as possible so that we don't lose time connecting to the cluster. So the idea is that you run all these commands from your virtual machine, or, if you copied all the data, from your laptop or some other machine, but you don't log in to these clusters to execute these commands. No one reports any other problem now, so I suppose, I hope, everyone has done the calculation. So this was for the 2D scan, the manual way, and now I copy all the data: I use this rsync_from_HPC with the period. That copies all the files it should, and indeed we get lots of output files.
So we get, for each combination of a and c, the input and output file. There are many, many output files, and you see the energies have actually been collected into one of the files; that's also done on the cluster automatically at the end of this job, by the submission script. There's also the output from the queueing system, but we are not going to look at that. So now this was all copied; I see it copies a little bit too much, it has also written the references. If I'm not wrong, that was not the idea, but let's see. So here, as you have seen already in the previous days, you have some ready-prepared files. This is a gnuplot file, and it directly plots the values which were obtained on the cluster. So you can just run gnuplot; there are lots of these options, we are not going to go through them, but this will show you the two-dimensional potential. You see, you have the a value of the lattice constant, the lateral lattice constant in the hexagonal cell, and on the other axis you have c. Now in this contour plot, with the lines and the colors, you see the representation, and you see the lowest energy; it has been normalized to zero, so the minimum is the most blue point. It's somewhere here: 2.61 for a and maybe 5.27 for c. So this would be, very roughly, reading off the plot by eye and concluding the optimal lattice constants a and c from this calculation. This you can have a look at with gnuplot; in the meantime we can execute, actually I wrote it already, it's all in the readme files too. Well, now it's very crowded, but there is a README.md somewhere in between, so you can find the exact commands there. So we can also launch the remote MPI run, this other job.
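Reading the minimum off the gnuplot contour by eye can also be done numerically: pick the (a, c) pair with the lowest collected energy. A minimal sketch (the data layout and numbers here are illustrative assumptions, not the actual course files):

```python
# Find the (a, c) grid point with the lowest total energy from a 2D scan.
# Each row: a (angstrom), c (angstrom), E (Ry) -- adapt to the real output file.
rows = [
    (2.56, 5.17, -169.8452),   # illustrative numbers, not the real zinc data
    (2.61, 5.27, -169.8761),
    (2.66, 5.37, -169.8623),
]
a_min, c_min, e_min = min(rows, key=lambda r: r[2])
print(a_min, c_min)  # the roughly optimal lattice constants from the scan
```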
So it was remote MPI run, then the executable, -in, and the input file, and you don't redirect the output; otherwise it is just like running directly on a cluster. The remote command is just this complicated script which launches the job and does all these steps for you. So we try to launch that and see if I'm lucky. Again it says the job was launched on the remote cluster using 20 cores; 20 has been chosen because I think this is the number of cores per node on, well, let's say on all the clusters it's kind of the optimal value. And we can check the status of the job: now it has been running for 24 seconds. While it's running you can, for example, look at the potential energy surface and so on. Does anyone have a problem so far? Everyone is able to run MPI run remotely, PWTK remotely; everyone is happy, or everyone gave up. The only thing I think is that we could now close the streaming session, if you agree. So let's see if there are any questions left; otherwise, we thank the participants on the streaming. I think we can close the streaming and the recordings, and we stay available for the ones that need help in here. Yeah, so the rest of the exercises you will see. Hello? Yeah, Ivan, we don't need half an hour more for the streams, but we stay here, and we close the recordings and the streaming; we don't need to record ourselves chatting with one another. Okay, there are still examples. We leave the Zoom open, but I don't know, what do you think? But I think that Ari still has something or some... There are still some exercises to explain, so... Ah, okay, okay, perfect. No, no, sorry, I thought we were just left with the exercises. No, no, I thought we'd take some time because... Excuse me, please go ahead. No problem.
No problem, no, I mean, we are going a bit slower than we had initially planned, but I think it's good, like you said, that we try to slow down. So we hope that everyone has managed to run remotely. So... someone has a problem with remote access; maybe you want to check. Is that on... Zoom? Zoom. I'll do that. Yeah, I told the two of them to join the breakout room. If you can join the Mondal breakout room, I will be able to help you. Okay, yeah. You can also invite this person. Yes. Is it Amal who has problems? If anyone has problems, please report now; you can also write quickly on the Zoom or Slack chat, and then we can use these breakout rooms: someone will invite you, or you get an invitation so that you can join there if you have any trouble. So in the meantime, well, if you are in a breakout room you won't hear me anymore, but you of course have these slides and you can ask us later how it continues; I think it's all clear. You continue, and if you have any questions afterwards, we will help you on Slack. Also later during this week, if you have questions related to this day, we are there to help you. Okay, so in the meantime the job has finished; I think it took something like two minutes, so this one is a bit easier to relax. I will quickly explain the aspects. Now there's much less output, because there was only one job, there were no loops. Ah, sorry, now I made a mistake using the wrong keyboard shortcut; I hope we are back to my screen. Well, now I see the resolution is much higher. I'll try to go to a lower resolution, so larger fonts and so on; sorry, this is my mistake. Oh, sorry, it's still too high. You can leave the default, but then you set up the resolution in VirtualBox. Yeah, because I managed to do it this morning.
Just now, for some reason, it always wants to go back to the high resolution. Oh, but you are trying to set up the resolution within the virtual machine. Yeah, I did it this morning and I think it worked fine, but sorry, I'll try once again. Then I saw that it was okay for my screen. Okay, yeah, sorry, I took the wrong one. No, it again adapts to high resolution, sorry. I don't know why it goes back to this; when I do the resizing of the screen it goes back, it keeps doing that, so I'll try not to do it, well, I'll try to fix it now. Sorry for that; I'll try to stay at this size. Ari, you are now trying to set the resolution within the virtual machine. Yeah, I know. You should do it from VirtualBox. Yeah, I don't know if now is a good time; this way, actually, I think I'll try to survive, and we'll fix it later when people might be waiting for something. So I hope, I see the width is not okay, but I hope you can still read my font at this moment. So I did the copy and, no, now it's gone; I'll just check. So now we visualize the output that we got from the remote cluster. This was the vc-relax calculation, and this is the output. Here we don't reduce the dimensionality, because this is a genuinely three-dimensional system. I only show the final geometry; it's not very interesting, because effectively only the lattice constants change. This is the HCP crystal; not much to see, not very interesting as such. And as I mentioned, the symmetry is so high that there are no internal degrees of freedom. So let's have a look at the output file, just reading it a little bit. And again, how many steps you need depends on the initial geometry, how far it is from the final lattice constants and geometry. I just mention that at the beginning of the output there are the crystal axes in Cartesian coordinates, in units of alat, because this is what is interesting later in the output.
Otherwise the output is the same; it just repeats those. Then it does the first self-consistent loop and shows the eigenvalues at the different k-points; now we are not only at the gamma point, we have quite many. Then we have the forces, and no surprise: the symmetry is so high that all the forces are zero, so there are no internal degrees of freedom in the coordinates to optimize. But the lattice constants still need to be optimized. And now we see the value of the stress tensor. You can have it printed also if you don't do vc-relax; there's an input variable in the control section, you can check it in the documentation (again, in the browser you can click on the input variable in the description of the inputs for pw.x). Here it's automatically printed because we are doing the vc-relax calculation. And it shows the value of the stress tensor. Actually, here it already shows the isotropic one, which is the trace, and that already gives you an idea if it is very large; like here, this value is in kilobar. So in the stress tensor on the left, by symmetry most of the values, the off-diagonal ones, are zero, but the diagonal ones are there, in Rydberg per bohr cubed. You can imagine this as the change of energy when you change the volume; that's why it's per bohr cubed, or some other unit of length, cubed. On the right you have the same values in kilobar. If you want to think in another unit of pressure, you can do the conversion. But here is the value; like I said, this is the average value, the trace of the diagonal divided by three. And this gives an idea: if this is large, you probably still have a large stress and the calculation will continue. Now again we do BFGS on the coordinates, where there is nothing to optimize, and by default it's also BFGS on the lattice.
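The two unit systems in that printout can be related directly; as a sanity check on the conversion factor (constants are standard CODATA values, rounded):

```python
# Convert the stress unit Ry/bohr^3 (left columns of pw.x output) to kbar
# (right columns).
RY_IN_J   = 2.1798723611e-18    # 1 Rydberg in joules
BOHR_IN_M = 0.529177210903e-10  # 1 bohr in metres

pa_per_ry_bohr3 = RY_IN_J / BOHR_IN_M**3      # Ry/bohr^3 expressed in pascal
kbar_per_ry_bohr3 = pa_per_ry_bohr3 / 1.0e8   # 1 kbar = 1e8 Pa
print(round(kbar_per_ry_bohr3))               # ~147105 kbar per Ry/bohr^3
```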
So the next step looks almost the same; there's just the stress tensor again. But what does change: the code does not only show the new coordinates, it also shows the cell parameters. As I mentioned, initially it shows the values in units of alat; here it is the same, in units of alat, which is this one. And the first one, I think in all cases, starts from one, 1.00000, no matter which system, because these are in units of alat and alat is always this first value. So you have to multiply all these values by alat to get the actual lattice constants. And again, alat here is in atomic bohr units, not in Ångström, no matter what you gave in the input. Now we see that the value here has changed a little bit from one; it's about 3% larger, so the lattice has already expanded by about 3.4%. I don't remember the value in the third direction, but it has also changed. Okay, so this was after the first step, and we saw the pressure is about 185 or 186 kilobar. Now I search for the next one. After the second step we have 60 kilobar; that's again the average of the diagonal. So it has reduced. And again we have new values for the lattice vectors here, and the new atomic positions. The new atomic positions are very boring, because they are given in the crystal lattice coordinates, as in the input, so they are relative to the lattice constants. This is what we want here, because these coordinates will always stay the same. Again, the forces are always zero because of the symmetry, so this won't change. If you wanted the coordinates in Ångström, for example, you would have to multiply them with the lattice vectors, and the lattice vectors have changed in the relaxation. Okay, we go to the next steps. Again the value reduces; now it goes negative, so we might have gone a bit to the other side, but in the next steps it's reducing again. Now it's small, small.
And this is now the final calculation, the last step. So this is what it gives as the final... oh, sorry, I went one step too far: this is already the extra relaxation that Pietro mentioned this morning, so we have to go to the second-to-last one. Okay, here we are, sorry for that. So this is the last iteration during the relaxation, and now you see that the value is, well, slightly negative, it doesn't matter, but it's very close to zero, and this is considered converged. So now we have converged both the coordinates, or rather the forces on the atoms, and the lattice vectors. Well, the forces in this example were very boring, since they were always zero because of the symmetry, but otherwise the calculation has converged also with respect to the stress tensor. And now we have again this final section of the output. Here you see the lattice vector: in the hexagonal case, laterally it has expanded by 3.35%. This is probably due to the GGA; well, it could also be that the cutoff or the number of k-points is not converged, but those you can always increase. At the end, though, you will see that you have some difference with respect to the experimental values, and the main part of that is due to the exchange-correlation functional. It could also be that the experiments are at finite temperature; if you are lucky, you find values which have been extrapolated to zero kelvin. There are also quantum effects, zero-point effects and so on. But I would say that normally you will get lattice constants one to two percent too large if you use the GGA in normal materials, not the van der Waals bonded cases and so on. So these are the final values, and this is the final result of the calculation. And now this one we can use: alat was 4.80, and I only take the lateral value. So now use the calculator: we had 4.80 times 1.033 something. This is the final lattice constant from the calculation.
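That calculator step, together with the bohr-to-Ångström conversion that comes next, written out (4.80 and 1.033 are the numbers read off the output in this session):

```python
# Final lateral lattice constant: alat (bohr) times the relaxed scale factor,
# then converted from bohr to angstrom.
BOHR_IN_ANGSTROM = 0.529177
alat_bohr  = 4.80    # alat from the output header (celldm(1), in bohr)
scale      = 1.033   # relaxed first cell vector in units of alat
a_angstrom = alat_bohr * scale * BOHR_IN_ANGSTROM
print(round(a_angstrom, 2))  # ~2.62 angstrom, close to the ~2.61 of the manual scan
```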
And then if you convert to Ångström, this would be more or less 2.62; this is the lateral lattice constant, the final value in Ångström. And if you remember the manual way, I roughly read 2.61 off the screen. So if one did it more carefully, with more time, you would see that these should be pretty close to each other, and the better you are converged with the cutoff and the k-points, the closer the two ways agree. Normally, I must admit, one is lazy and uses one method or the other; there are also other ways to do it, and at the end you should get the same result. So you can do it the manual way, here the two-dimensional scan of the total energy, or you do the vc-relax. If you have more complicated examples, where you have more than two lattice parameters, usually you also have different angles; here we had the 120 degrees in the hexagonal cell, and 90 degrees, the right angle between lateral and vertical, so there were only the a and c. But if you have more complicated cases, I must admit that most people, me included, would use the vc-relax way. But please be careful: there is this issue of the convergence with the cutoff energy, and the finite basis set that was explained by Pietro; there are tricks, or ways, to try to minimize the errors due to this, but we don't have time to go into that today. If you have any problems, any questions... So those of you who want can at this point go to the next example. It is just a small modification of the previous one, and you would probably want to launch this remotely. You see this is a direct input file for pw.x, so you would use remote MPI run to execute it. It will again take some minutes after launch. The difference here is that now the forces with respect to the coordinates are not all zero by symmetry. In this case, this is a crystal of urea.
So it's actually a molecular crystal. Urea is a small molecule, and it has its bond lengths and so on, and these all depend also on the interaction between the molecules. So in this example it is not the case that you only relax the lattice vectors while the forces on the atomic coordinates are zero; here you will be relaxing both at the same time. But I suggest that we carry on; we are soon running out of time. I will now quickly explain the last exercises, and then you will continue with those independently. We will have the tutors, and me and Tonya will be around, not only until the end, well, it should end in 15 minutes anyway, but we'll be available for some time. Later, if you have any questions, of course we are available on Slack. So I will now use the rest of my time, I think 15 minutes, to explain what is left in the exercises. If you don't want to listen to me, if you are confident, you can do it without the explanations and carry on. Otherwise, it's time that you, as it's called, sit on your hands and don't use the keyboard; that's a way of saying that you don't try to execute all the commands while trying to listen at the same time. But if you do listen, I will try to quickly explain the rest of the exercises. And again, we have the chats and we have the breakout rooms if you have any problems or any questions. So this was example two. Example three is an example on the nudged elastic band, which is what Tone explained this morning. The first one is a very simple example; I will not go into details. It's funny that even with only a few hydrogen atoms, this system can be very difficult for DFT to treat, because of some issues we don't have time to go into.
It sounds like a very simple system, but if you look closely, it is horrible to treat with any of the exchange-correlation functionals. But for now we just trust that everything works: we use DFT to get the forces, and everything is fine. In the first case, example 1A of the NEB, we start the calculation using no intermediate image; I mean, Tone explained that you can use these intermediate images, and here we just start without, and I will show you some slides on this later. So we have the initial state, the first image, where two hydrogens are close to each other and the third hydrogen is a bit further away. In the last image we have exchanged the central hydrogen. It's very easy to imagine: they are all on a line in this case. Actually, one could say it is a one-dimensional system; or if you think of it as two distances, then if you consider the difference of the two distances, you again have only a one-dimensional potential surface. Anyway, the difference between the first image and the final one is that you have shifted the central hydrogen from one side to the other. Now we want to calculate the energy barrier for this reaction. And the default method, if you don't give these intermediate images, just does a linear interpolation. So the central image, depending on how many images you use, you will have some intermediate images between the first image and the central one, and then between the central one and the last. Basically, you just interpolate between the first and the last one, and the image in the middle is just a consequence of this linear interpolation. The more points you have, of course, the more gradually the hydrogen is shifted, because the linear interpolation is done numerically; there is no DFT, nothing. It just takes the initial coordinates and the last coordinates and does a linear interpolation.
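The default image generation just described is component-wise linear interpolation between the first and last geometries. A minimal sketch for the collinear H2 + H case (coordinates purely illustrative, not the exercise's actual values):

```python
# Linearly interpolate NEB images between the first and last geometry.
# Each geometry is a list of (x, y, z) per atom; here 3 H atoms on a line.
first = [(0.0, 0.0, 0.0), (0.8, 0.0, 0.0), (2.5, 0.0, 0.0)]  # H2 + distant H
last  = [(0.0, 0.0, 0.0), (1.7, 0.0, 0.0), (2.5, 0.0, 0.0)]  # central H swapped side

def interpolate(first, last, n_images):
    images = []
    for i in range(n_images):
        t = i / (n_images - 1)          # 0 for the first image, 1 for the last
        images.append([tuple(a + t * (b - a) for a, b in zip(p, q))
                       for p, q in zip(first, last)])
    return images

imgs = interpolate(first, last, 7)  # 7 images, as in the exercise
print(imgs[3][1][0])                # central image: the middle H exactly halfway
```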
And Tonya mentioned: please don't swap the order of the atoms between the images, otherwise you will get horrible, well, I won't call it alchemy, but some very difficult geometries. Here, because in the first and last image you have the same coordinates for the first and last hydrogens, they remain the same under this linear interpolation; only the central hydrogen is shifted. So this is the first example, example 1A. Again, the examples, the inputs and so on are given, and here we use seven images. What you will get, once you run the example and gnuplot, is a plot like this, where you get the reaction barrier: you go from the initial state, zero, to the final state. The reaction coordinate is given in arbitrary units, actually scaled units, so you see it goes from zero to one, and it is symmetric; we saw that already, since having the H2 first on the left or on the right is the same situation, which is also why the potential surface looks symmetric. The second one is a kind of numerical exercise; this is now using the intermediate image. To use some chemical intuition, we can imagine that in the real reaction the first and last hydrogens don't want to stay where they are all the time. The configuration shown is what you would get using just the linear interpolation; probably in the intermediate state you would have something like the symmetric arrangement, where the three hydrogens are close to each other. So the first hydrogen has moved a bit along with the central hydrogen, and the hydrogen initially on the right has come a bit closer, forming this kind of image. This you can insert as an intermediate image, and then you give the code more information than just the linear interpolation. And sometimes the linear interpolation doesn't even work; you might have a reaction...
For example, you have an atom which goes from one site to another, but there is an atom in the middle and the real path is a diffusion above it. Now, if you take the linear interpolation, you shoot straight through: the interpolation draws a direct line and the two atoms come very, very close. So in this case the linear interpolation would not work, and you have to use something like an intermediate image. It depends on your system; you might be forced to use this. In this case you can also calculate using just the linear interpolation, and the idea here is to look at the difference. You should get the same potential energy surface at the end either way, but the computation is more efficient if you give this intermediate image. So you will check how many steps were needed to get to the converged potential energy surface; this is the idea of the first exercise. The second one is again a NEB calculation, and again we will use an intermediate image. The physical system is H2 dissociation on a metallic surface, the so-called (100) orientation of the aluminium surface. Aluminium you have seen in the examples of the earlier days; it's an FCC metal, and we cut the surface along the (100) direction. Now we want to see: what is the energy barrier for the H2 dissociation? Is there a barrier, and how high is it? What is the initial state? Well, that's H2 in the vacuum. Then it slowly approaches, and what happens chemically you will see in the movie, or in the visualization, when you visualize the structures of the intermediates. And the final state of the dissociation is two hydrogen atoms sitting on the surface. Depending on the reaction they might remain quite close; it could also be that they dissociate right away and go further apart.
What usually happens is that they stay quite close, but then, due to diffusion, one atom diffuses further away from the other. The model here is quite simple: it's only two layers, and also laterally the cell is too small; this was just to keep the calculation fast. So again, this would not be sufficient for a real scientific project, but you will get the idea from this example. Then there are the scripts, PWTK, and the commands showing how to run this example remotely as well. And then you plot this potential energy surface again, and you can also visualize the intermediate images, so you will see what happens chemically. Later, if you want to continue, we can also discuss how it looked, and if you have questions, why it looks like this, we can do that later. The idea now is just to give you a quick glimpse of what you are going to do; then you execute it, and if you have questions later, we will be there to explain. Then there is actually a short tutorial on PWTK; thanks a lot to Tone for contributing his wonderful software. We will use it further, also for more complicated quantities. You have already done the convergence tests and so on, using this very nice way of creating loops in the PWTK scripts; we will use that also for other kinds of analysis, which would again be more complicated without PWTK. The first example is just, well, the equation of state: we will be looking at how to calculate the equation of state of a material. The second one will be the adsorption of oxygen on aluminium (111). This is very similar to the previous one: that was the (100) orientation of the surface, the (111) is a different orientation. And we will be looking at the potential energy surface of oxygen, laterally, at different places on the aluminium (111) surface.
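The equation-of-state exercise mentioned above boils down to computing E(V) on a grid and fitting an analytical form to locate the minimum. As a toy sketch with a simple quadratic fit (a stand-in for the Murnaghan/Birch-type forms a real exercise would use; data synthetic):

```python
# Fit a parabola E(V) = a*V^2 + b*V + c to (volume, energy) points and
# locate the equilibrium volume at the minimum, V0 = -b / (2a).
import numpy as np

volumes  = np.array([70.0, 74.0, 78.0, 82.0, 86.0])  # bohr^3, illustrative grid
energies = 0.002 * (volumes - 78.5)**2 - 35.0        # synthetic E(V) in Ry

a, b, c = np.polyfit(volumes, energies, 2)
v0 = -b / (2 * a)
print(round(v0, 1))  # equilibrium volume of the toy data, ~78.5
```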
And then the third one is more elaborate, and as we have already written in the slides, it is left as homework. If you want, you can continue; I think the session will continue, so you can work on it during this session, and if you have questions, there is actually some very nice chemistry going on. CO adsorption on metal surfaces is chemically very interesting; it has been discussed a lot, there are lots of works on it, and it is chemically very intuitive. For those of you who want to look it up: it's the so-called Blyholder model, which was initially proposed to explain the chemistry of CO adsorption on surfaces. If you want to read something more boring, you can read my thesis; this was one of the topics I studied during my PhD, adsorption of CO on, I think it was a ruthenium (001) surface. That's also why I know this, but actually the example doesn't come from me; it's from the previous edition of the school. So these are the three examples in the final section. There is some explanation of PWTK, and then the explanation of how you run the calculation of the bulk lattice constant, of rhodium in this case, because, as you saw, the third example is the rhodium (100) surface. You will be fitting the lattice constant using different analytical forms of the potential energy as a function of volume; this is what you will get in the output. Then the second example was this oxygen on aluminium. Again, here we are going to do a loop using PWTK, where the oxygen moves along the surface. And here, maybe I mention, if it didn't come up already during the previous example, how you can fix atomic coordinates during the relaxation. And also, I think, if it was not mentioned: during the optimization of the lattice vectors you can fix one or two of the lattice vectors; there are different ways.
You can imagine, for example, wanting to optimize the lattice vectors of graphene. It is a single sheet, so not graphite but graphene, and we have vacuum between the periodic images of the layer. Even if you take a large amount of vacuum in the vertical direction to separate the sheets, there is always some residual pressure computed by pw.x, and a variable-cell optimization would then start to modify the amount of vacuum; if I remember correctly, it would try to bring the layers together. So there it is preferable to keep the layers apart, and the relevant option in the relaxation is the so-called cell_dofree option; you can look it up in the &CELL namelist of the pw.x input. I just mention it here because I think it did not come up before.

And the same here: here we insert the oxygen atom, and this is then what you will actually get as output, the potential energy of the oxygen at different lateral positions on this Al(111) surface. But we want the height of the oxygen to be relaxed during this calculation, and the way to achieve this is to fix the lateral coordinates. These are, well, not quite the inputs for pw.x, this is PWTK, but the syntax is the same. You give the three coordinates, in whatever units, and then you can append three values which are either zero or one: zero means this coordinate is fixed, one means this coordinate is relaxed. These are three integers; you can leave them out, which means all coordinates are relaxed, but I think if you specify one of them, you have to specify all three, and they can only be zero or one. Again, zero means fixed and one means relaxed. Here we want to keep the oxygen laterally fixed for each position in the 2D scan of the potential energy surface, but relax its height.
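As a concrete illustration of the two constraints just described, here is a minimal fragment of pw.x input syntax. The atoms, coordinates, and values are made up for illustration, and the cell_dofree option name should be checked against the PW.x input documentation for your Quantum ESPRESSO version:

```
! Fragment of a pw.x 'relax' input: the three trailing integers
! (if_pos) freeze (0) or relax (1) each Cartesian component.
ATOMIC_POSITIONS angstrom
Al  0.000  0.000  0.000  0 0 0   ! substrate atom kept fixed
O   1.430  1.430  2.000  0 0 1   ! x, y fixed for the scan; height relaxed

! For a 2D system in a 'vc-relax' run, cell_dofree restricts which
! cell degrees of freedom may change, e.g. only the in-plane ones:
&CELL
  cell_dofree = '2Dxy'
/
```

Omitting the three integers entirely is equivalent to `1 1 1`, i.e. the atom is fully relaxed.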
I think the substrate is kept fixed just to keep the calculations fast. And this way we come to this: at each lateral position the height has been relaxed, so we have optimized the height, and that is why we have only two independent variables, the two lateral coordinates of the oxygen. Again, these are the commands you can execute. It is better to run this remotely, because it takes quite a while: you have this double loop again, you are doing relaxations, it is a metallic system, and so on. That was the second example.

The third one is CO adsorption on Rh(100). Again, it is left as a home exercise; if you have time and are interested, you can do it at the end of this session, and we are there to help you if you want. You will see that you get some other properties. You can get the so-called density differences: this is used to analyze how the electron density changes when you adsorb, in this case, the CO on the surface. Then there is a different quantity, a kind of energy-resolved electron density. Sorry, that one is not the difference, although you can also look at the difference this way: we take an energy range and we sum all the orbitals that contribute to the density in that energy range, because we have all the eigenvalues at all the k-points. This is another example that you will be calculating, and then you will have the projected density of states, which was calculated already before. And then we have an additional, more complicated way of doing the projection: here we project the wave functions of our system, in this case the adsorbed CO on the metal, onto the orbitals of the free molecule. This again can be used to analyze how the chemistry changes upon adsorption. So this is another quantity which is implemented nowadays, and I would say it is relatively easy to use.
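For the energy-resolved density mentioned above, Quantum ESPRESSO's pp.x post-processing code can integrate the local density of states over an energy window (the ILDOS, plot_num = 10). The fragment below is an illustrative sketch: the prefix, paths, and energy window are made-up values, and the plot_num codes should be checked against the pp.x documentation:

```
&INPUTPP
  prefix   = 'CO_Rh100'   ! hypothetical calculation prefix
  outdir   = './tmp'
  filplot  = 'ildos.dat'
  plot_num = 10           ! integrated local DOS between emin and emax
  emin     = -8.0         ! eV, lower edge of the energy window
  emax     = -6.0         ! eV, upper edge of the energy window
/
```

Density differences, by contrast, are obtained by computing the charge density (plot_num = 0) separately for the combined system, the clean surface, and the free molecule, and subtracting the resulting grids.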
You will see this in the inputs, if you get that far. If you do not manage to do it today, no worries, you can do it later, either later this week or then on your home computing cluster or another machine. But this again is for the analysis. And then these are the molecular orbitals of the carbon monoxide, the adsorbate, and the projections onto the different orbitals: for each orbital you get its own graph as a function of energy. If you have any questions on these quantities, again, this is meant as a kind of advanced-properties session, and it also shows the advanced methods you can use with PWTK.

So now I have gone through all the exercises, and it is time for you to start playing, if you have not started yet. Again, we will be there to guide you and help you. If you have any questions or comments, now is the time: soon we will stop the streaming, so this is your last chance. For those who are registered, we will remain on Zoom and Slack.

Okay, so, Ari, thank you very much for this wonderful presentation with all this material for the practical exercises. I think we can close the session now, and we can all meet on Slack to follow the exercises or for any problems or questions. All the tutors will be there and also, I guess, Ari. Those who are streaming can also ask their questions there and will be tutored as well. So thank you very much and see you on Slack. Thank you. Thank you very much. Goodbye. Goodbye, Ari, again. Bye-bye. Thank you again. Thank you very much. So we will close the Zoom, right? I don't know, I mean... No, no, we leave the Zoom on. Yeah, that's what I thought.