Okay, I was actually told that some of you have difficulties seeing the blackboard from the back. I don't think there is a solution for this problem unless we tear down the pillar at the center, so you'll probably just have to move when I move across the blackboard. Sorry about that. All right. So let me continue the lecture by starting from the question that was asked about phase transitions. That was a very interesting question for a number of reasons. In fact, one of the reasons is that I hope I'll have time to give you a brief description of my own research, and my own research is heavily based on searching for phase transitions, so I'm actually happy that this question has been raised. Another reason is that this is one of the motivations behind the development of a whole category of tools: tools that allow you to go beyond the microcanonical ensemble. I mean, saying that phase transitions don't take place in the microcanonical ensemble would not be correct, obviously, for a number of reasons. So the question, I guess, was: suppose, to use again the example of water, you want to study the freezing of water. Can you do it with an NVE simulation? Well, in NVE you set your conditions. Suppose you start from the fluid, you start with some configuration, and at some point you will equilibrate. Let's assume that after equilibration you reach a given temperature, which is still a temperature at which you are in the liquid. At that point, of course, no phase transition will take place, because you have defined your thermodynamic conditions. So in order to see phase transitions, you have to change your thermodynamic conditions. For example, you may wish to lower the temperature.
Now, lowering the temperature is something that, as we've just seen, cannot be done in a straightforward way directly in the microcanonical ensemble. So you will have to consider using the canonical ensemble, that is, introducing a thermostat, and playing with its target temperature: for example, by setting the target temperature initially to a value at which you know that water is liquid, and then slowly decreasing it, hoping that the system will eventually solidify, freeze, and lead you to ice. Now, there are a number of problems when you do this. One problem is the fact that freezing, like almost every phase transition in condensed matter systems, involves a density change. And you immediately see that, although we never talked about the volume variable, volume enters here. When I say NVE, of course, I mean that the volume is fixed. In fact, if you think about the scheme that we've discussed so far, it implies that you have a given simulation box, a cube, whose boundaries are fixed, because you have to be able to determine interactions with the nearest neighbors. So when I say NVE, I mean the volume is fixed, determined by the box where the simulation takes place. But if there is a phase transition, there is a change of density. In the case of ice, it's not that big, right? It's a slight expansion. But it still means that whatever you obtain, if it freezes, is not ice at zero pressure. Because if you started with the volume of water corresponding to zero pressure and you freeze it, you keep the same density, and you will find that the ice has a positive pressure when it freezes. So is this a correct treatment of a phase transition? Phase transitions take place when the thermodynamic conditions change, when you apply pressure and temperature, not when you keep the volume constrained. So this is one problem.
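As an illustration, here is a minimal Python sketch, in reduced units, of the kind of cooling protocol described above. All names and parameters are my own for the example, and the crude velocity rescaling is only a stand-in for a proper thermostat such as Nosé-Hoover:

```python
import numpy as np

def rescale_to_target(v, masses, T_target):
    """Crudely rescale velocities so the kinetic temperature matches
    T_target (a stand-in for a real thermostat, not a rigorous one)."""
    kB = 1.0                                   # reduced units
    n_dof = v.size                             # ignoring constraints
    T_inst = np.sum(masses[:, None] * v**2) / (n_dof * kB)
    return v * np.sqrt(T_target / T_inst)

# Cooling schedule: step the target temperature down, letting the
# "thermostat" drag the system toward each new target.
rng = np.random.default_rng(0)
velocities = rng.normal(size=(64, 3))
masses = np.ones(64)
for T_target in np.linspace(2.0, 0.5, 4):
    # (in a real simulation, many MD steps would run at each target)
    velocities = rescale_to_target(velocities, masses, T_target)
```

In a real run one would, of course, propagate the dynamics between temperature changes and decrease the target slowly enough for the liquid to stay near equilibrium.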
The other problem, of course, is that phase transitions have the very nasty property that in most cases they take place through nucleation; 90% of the phase transitions that take place in solids do. And nucleation, as you probably know, is a very rare process, right? We can simulate picoseconds, nanoseconds, perhaps in some cases microseconds, and we can simulate systems of a thousand, a million, a billion particles. Well, hoping that the correct fluctuation takes place in your system and that the system will actually freeze depends on the kind of system you have chosen, and in some cases it may simply never happen. That is, you start cooling down water and you will see that water stays fluid, undercooled, down to temperatures which are much, much below zero centigrade. So there are actually two problems when you study phase transitions. One is that working with NVE is not the appropriate way to deal with the problem. The second is that in most cases phase transitions start from rare events. Now, let me briefly describe how to go beyond the microcanonical ensemble. In the case of temperature, we've already seen a simple way to do it: we have seen a method that essentially allows us to work with the canonical ensemble, which is N, V, and T in thermodynamic terms. So the question now is: can we go beyond this and do, perhaps, N, P, T? That would solve at least the problem of dealing with the change in density, which is one of the problems you're facing when you study a phase transition. It turns out that there are indeed methods that allow you to go beyond the fixed-volume technique, and of course they involve changing the volume. Precisely like the thermostat, which changes the energy through a friction term in such a way that the temperature is conserved.
Similarly, in the case of the volume variable, you want to allow the volume to change in such a way that pressure is the thermodynamic variable that your system satisfies. That is, your system is equilibrated at a given pressure, and the volume has to adjust to the corresponding pressure. So the first thing you have to allow, of course, is for your box to change, at least in volume. If you have a box with periodic boundary conditions, you have to allow it to change its cell parameters, the cell parameters of the unit cell, in order to allow some volume fluctuations. In the simplest possible way, you may just wish to allow your box to change isotropically. This is called Andersen dynamics. I'm not going to write down the equations because it would take too much time, but you may immediately see that they are going to be very similar to this one, except that the extra variable is going to be coupled not to the velocities but to the cell parameters, because those are the ones you want to change in order to allow a phase transition. And the other difference is that instead of having here an imbalance between the temperatures, you will have an imbalance between the instantaneous pressure that the system has at the given volume, which you can calculate using the virial theorem or other methods, and the target pressure that you want to achieve. If you do all this properly, you get a methodology that was initially developed by Parrinello and Rahman. Let me mention those names: the NPT techniques in molecular dynamics were really first introduced by Michele Parrinello and Aneesur Rahman. Michele Parrinello is going to give a colloquium next week.
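Schematically, a volume update driven by the pressure imbalance looks like the following Python sketch. This is a weak-coupling caricature with an assumed compressibility and coupling time; the actual Andersen and Parrinello-Rahman schemes instead treat the cell as a genuine dynamical variable with its own equation of motion:

```python
def update_volume(V, P_inst, P_target, dt, tau_p=10.0, kappa=1.0):
    """One schematic barostat step: rescale the volume according to the
    imbalance between instantaneous and target pressure. kappa (an
    assumed compressibility) and tau_p (a coupling time) are made-up
    reduced-unit parameters for illustration."""
    scale = 1.0 - kappa * (dt / tau_p) * (P_target - P_inst)
    return V * scale

# If the instantaneous pressure exceeds the target, the box expands;
# if it falls below, the box shrinks.
V_expanded = update_volume(V=100.0, P_inst=2.0, P_target=1.0, dt=0.1)
V_shrunk = update_volume(V=100.0, P_inst=0.5, P_target=1.0, dt=0.1)
```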
So their method is essentially based on a new variable coupled to the strain degrees of freedom of the unit cell, where the imbalance is between the instantaneously calculated pressure and the target pressure. So if you want to freeze water, you do a simulation in which you change the temperature using a thermostat and keep the pressure constant, say at zero, or at one atmosphere, which is essentially zero on the scale of atomic systems. Then you let the system cool down, and at least the density part is taken care of. That is, if the system wants to freeze and wants to expand, the cell will adapt in such a way that the pressure remains constant at one atmosphere during the simulation. So essentially the same methodologies that we use to keep the temperature fixed can also be extended, and have been extended by Parrinello and Rahman, to deal with the pressure, that is, to go from NVT to NPT. This, of course, doesn't yet solve the second problem I mentioned, namely the fact that nucleation is a rare event. You have to wait until, at some point inside the sample, the crystallization, the transition to ice, starts, and this takes place through nucleation. The time scales for that can be extremely long. Now, this opens a completely new subject within molecular dynamics that I was not planning to cover. Is anybody going to cover it? Okay. It is the topic of advanced sampling techniques. Molecular dynamics, you might imagine, is a rather blind way of exploring the phase space of a statistical system. In fact, it's a very efficient way as long as you are limited to working around a well-defined minimum of the potential energy; but as soon as you have to jump from one basin of the potential energy surface to another, the transition has to take place through an energy barrier, so it's a rare event.
Then molecular dynamics becomes less effective, unless, of course, you devise ways to accelerate these transitions through special paths to different basins of the potential energy surface. I have to mention that in this respect Monte Carlo is sometimes to be preferred, at least if you use molecular dynamics in a blind way, without these advanced statistical sampling methodologies. Whenever you have problems dealing with large energy barriers and rare events, nucleation for example, Monte Carlo is sometimes more efficient, because in Monte Carlo you can tune your moves so as to be more efficient in your exploration of the phase space, while in molecular dynamics you are limited by the real dynamics of Newton's equations in the exploration of your system. Unless, of course, you introduce some additional forces that push you away from the basin of attraction where you are located. This can be done in a variety of different ways. There is, for example, a very interesting technique that was developed recently, by Parrinello again and people here at SISSA, which is called metadynamics. There are thousands of these techniques; I'm just mentioning one. This technique is very interesting because it's based on a dynamics in which, every time you explore your phase space with your molecular dynamics, you leave a barrier behind you: you increase the potential behind you, essentially. So the next time you come back to that point, you will be repelled, because you left this potential energy barrier behind you. Essentially, this encourages you to explore always new regions of the phase space and never go back to places that you've already explored, which is typically what happens when you sample a given basin of attraction.
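The hill-leaving idea can be sketched in a few lines of Python. This is a one-dimensional toy with made-up hill height and width, just to show the history-dependent bias, not an actual metadynamics implementation:

```python
import numpy as np

def bias_energy(s, centers, height=0.1, width=0.2):
    """History-dependent bias: a Gaussian hill is left at every
    previously visited value of the collective variable s."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(height * np.exp(-(s - c)**2 / (2.0 * width**2))))

# The system keeps revisiting s = 0, so hills pile up there and the
# bias grows, eventually pushing the trajectory somewhere new.
centers = [0.0, 0.0, 0.0]
bias_here = bias_energy(0.0, centers)   # three hills stacked at s = 0
bias_far = bias_energy(2.0, centers)    # far away: essentially zero
```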
At some point, after you fill the entire space around this basin of attraction, you'll be forced to jump out of that basin and explore new territories. So that's the spirit of metadynamics. There are, again, thousands of different ways to enhance molecular dynamics, to allow molecular dynamics to explore rare events. I'm not going to discuss them, because it would take me at least a few hours to enter into the details. So anyway, phase transitions: I hope this answer allowed me to introduce different ensembles like NPT, in addition to NVT, and also to mention the issue of advanced statistical sampling in the framework of molecular dynamics. Okay, now let me finally go to the second part of my lecture. If you remember, at the beginning we mentioned that our problem was to integrate Newton's equations of motion, and we assumed that the potential, that is, the force, was provided to us by some black box, a subroutine or something. So now, in the second part of the lecture, I'm going to discuss how to actually calculate this potential of interaction between particles. I will take a sort of historical approach to this, which is probably not pedagogically the best one, but it's useful because it allows me to get to the final part, which is ab initio molecular dynamics, which I hope I'll have the chance to mention at least for a few minutes. Okay, so let me now rewind, and go back to the 60s, actually to the 50s. In fact, you've probably seen some of these papers listed in Antonello's introduction. Some of them actually deal with the beginning of molecular dynamics, when people were using the first computers, and therefore were bound to use extremely simple functional forms for this potential of interaction.
So if your goal is to simulate a liquid, for example a simple liquid, you have some particles, the particles composing your liquid, and you come up with a very simple expression for the force acting on particle i, or, if you wish, for the potential of interaction between particle i and all the other particles of the system. Now, the simplest way to write down an interaction between one particle and another, that is, to write down this V, is to say that it is a sum of pair interactions: one half times the sum over i different from j of some phi_pair describing the interaction between particle i and particle j. For example, when you're studying phonons in solids, that's exactly the approximation you're using: you're assuming that each particle interacts through springs with its nearest neighbors. In a more complex system, in a fluid, this potential of interaction between two particles is going to be a bit more complex. In fact, it's not difficult to come up with at least the shape of this curve, of this phi_2. This curve has a sort of universal form. Now, this is the interaction between two particles, so at long distances it has to go to zero. Unless the particles are charged (let me forget about charged particles and assume that the particles are neutral), at long distances they will interact with something that goes as minus one over r to the sixth: dispersion forces, the longest-ranged contribution to the interaction between two neutral particles, r to the minus six, negative. At short distances they will have to repel each other, clearly, for a number of reasons: the Pauli exclusion principle, the nuclei, and all that. Two atoms don't like to get too close to one another. What remains in between is just analytical interpolation.
So what you're left with is some depth and some typical equilibrium distance, sigma, between the two particles, right? I'm saying something which is probably trivial to everyone, simple chemistry 101: this is the generic interaction between two neutral particles. Now, it turns out that for, I think, 99% of the elements in the periodic table, if you take the interaction between an arbitrary pair, they behave qualitatively in this way. Of course, they will differ in the depth of the interaction: for a covalent bond the depth will be huge; for, say, pairs of rare gases the interaction will be extremely small. And they will differ in the equilibrium distance: for covalent bonds the distance will be extremely short; for dispersion interactions the distance will be huge. Take two helium atoms, for example: the distance is what, 100 angstroms? Where are the QMC people? I don't see them. Anyway, it's about 100 angstroms, the equilibrium distance between two helium particles, while carbon-carbon is one point something angstroms. So the distance and the depth will change, but the shape of the function will be more or less the same, with very few exceptions, between two arbitrary elements in the periodic table. Of course, this consideration led scientists, particularly in the 50s and 60s, when they started to do molecular dynamics on very simple computers, to come up with simple analytical forms for this function. The most popular one is Lennard-Jones; I guess you're all familiar with it. There is an energy scale, epsilon, and then there is (sigma over r) to the 12 minus (sigma over r) to the 6. That's a very simple form in terms of r: it has the correct repulsion at short distances, and it actually even has the correct power law at long distances. This function has a minimum somewhere close to sigma, and a depth of the order of epsilon.
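In code, the Lennard-Jones pair energy and force are almost one-liners. Here is a small Python sketch in reduced units; I include the conventional prefactor of 4 so that the well depth comes out exactly epsilon, which is one of those harmless numerical factors:

```python
import numpy as np

def lj_pair(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy and radial force at separation r."""
    sr6 = (sigma / r) ** 6
    energy = 4.0 * epsilon * (sr6 * sr6 - sr6)
    force = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r   # -dE/dr
    return energy, force

# The minimum sits at r = 2**(1/6) * sigma, close to sigma as stated,
# with depth -epsilon and vanishing force.
r_min = 2.0 ** (1.0 / 6.0)
e_min, f_min = lj_pair(r_min)
```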
Not exactly the same, just numerical factors. So this is a very simple functional form that you can easily use on any computer to do molecular dynamics in a very efficient way: Lennard-Jones. And there are thousands, millions, zillions of different functional forms available nowadays, of different sorts, Born-Mayer, exponentials, and so on. They all give you, more or less, qualitatively this sort of behavior, with some small details that depend on the system, okay? Is this description enough to describe a complex fluid, for example, like this one? Well, you may argue: why not? After all, I'm interacting with you, with you, with you, and I'm interacting independently with all of you, right? So why shouldn't this be enough, or close to enough, to describe the total potential of interaction between particles in these systems? Let me show you an example in which this is clearly not the case. Take two particles interacting with one another, i and j. Now, I'm sure you know chemistry, so I'm sure you know that if two particles are interacting, there will be some electronic charge accumulating somewhere here: this is the covalent bond, right? We all know that when atoms approach each other, the electronic clouds start to do some business, some chemistry. So, for example, there will be some accumulation of electronic charge here due to the fact that they're forming a covalent bond. Let me now imagine a third particle, k, coming closer to i and j, actually coming in between i and j. Obviously, from your chemical intuition, you can easily see that this particle k is going to affect dramatically the way i and j interact. If this particle, for example, likes to interact with i or with j, some of these electrons will immediately have to move and form a bond with k, right?
So the way i and j interact with one another, if you think about it in terms of chemistry, is strongly affected by any other atom present in the vicinity. Whether k is here, or here, or here, it will certainly affect in some way the way i and j bind. Now, historically, there was a time in which people approached this problem by expanding to the next order, that is, by introducing terms of the kind: a sum over i, j, and k of third-order terms, phi_3 of r_i, r_j, r_k. And hopefully these terms would correct the deficiencies of the pair-potential model. Again, in the literature you'll find hundreds, if not thousands, of different ways to describe corrections to pair potentials due to three-body interactions. And you can go further: you can now say, well, if I have a fourth atom here, atom l, this is going to affect the way i, j, and k interact, because, again, from a chemistry point of view, it is going to displace a little bit of electronic charge and affect both this bond and this one. So do I need a fourth-order term? Well, of course. In fact, it is hard to find a theory that makes this series converge. Most theories based on these n-body expansions truncate the expansion in an effective way; that is, if they were allowed to go to an infinite number of terms, the terms would actually diverge. So it is only through some renormalization that people are able to include higher-order terms, in an effective way, into two- and three-body terms, or two-body terms in principle. Now, when people started to realize that this was a never-ending story, they started to develop some cleverer ways to describe interactions between atoms.
And one very clever way to do that, instead of adding third-order terms here, is to say that the two-body part of the interaction depends, in an effective way, on some parameter, which I will call rho_i, that tells me what the environment around myself is. You can think of this, for example, as the total number of bonds that I'm forming at that particular moment. Or you can think of it as a more complex analytical form that essentially looks around myself and determines some average property of my environment. People have come up with power laws, complicated sums over the nearest neighbors; you can just think of it as the number of nearest neighbors, for example, to have a simple picture of what this rho could be. It could be more complex than that, but it could also just be the number of nearest neighbors. That is, the interaction between myself and another particle depends not only on the distance, but also on how many nearest neighbors I have around myself, as well as how many nearest neighbors the other particle has around itself. So this is a way to effectively incorporate these three-, four-, and five-body contributions. In fact, people sometimes call rho a density, even if it's not actually a density: it's a single parameter that incorporates some information about the environment that surrounds me. In general terms, these potentials go under the name of embedded-atom potentials; they take different names, and I'm just using one very popular one. Every atom is embedded in an environment which you can characterize by a single parameter. And if you use your imagination and extend this, there may be more than one parameter that defines my environment, and so on and so forth. So these potentials can actually be quite sophisticated, and you can expand them with more parameters to make them more and more accurate.
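As a toy illustration in Python: the energy is a pair sum plus an embedding term F(rho_i) built from each atom's environment. The pair term, cutoff, and embedding function below are all invented for the example; real embedded-atom potentials use smooth, fitted density functions rather than a bare neighbor count:

```python
import numpy as np

def toy_eam_energy(positions, cutoff=1.5):
    """Toy embedded-atom energy: a pair term plus F(rho_i), where
    rho_i here simply counts the neighbors of atom i within a cutoff."""
    n = len(positions)
    pair = 0.0
    rho = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                pair += (1.0 / r)**12 - 2.0 * (1.0 / r)**6   # toy pair term
                rho[i] += 1.0                                # i's environment
                rho[j] += 1.0                                # j's environment
    embed = -np.sqrt(rho).sum()     # F(rho) = -sqrt(rho), a common shape
    return pair + embed

# A dimer at unit separation: pair = -1, rho = [1, 1], embed = -2.
dimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```

Because F is nonlinear, the energy of a pair genuinely depends on how many other neighbors each atom has, which is exactly the many-body effect being mimicked.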
I think this is the right time to start talking about ab initio molecular dynamics, because this is actually the next step. The next step is to realize that if you were able to solve the entire electronic problem properly, you should be able to obtain this quantity, this function here, properly. So what is this function, seen from a theory that sits above? What are we trying to determine? In principle, we have a set of nuclei at positions i, j, and so on. So I'm now trying to see the problem from an ab initio perspective, without having to derive the potential from some effective approximation. Let's try to construct this potential entirely from first principles, without introducing any empirical parameter. What I should do, in principle, is to say that I have a given number of nuclei at those different positions. Of course, the nuclei will be different; they will have their own charge. If this is silicon, the charge here will be plus 14; if this is hydrogen, it will be plus one; if this is carbon, it will be plus six, and so on. Well, let me digress: do I need to know more about the structure of the nucleus? Of course not, right? The internal dynamics of the nucleus, the energy of the nucleus, belongs to a completely different scale of energies and times. So, as far as condensed matter physics is concerned, I'm only dealing with nuclei as objects with a mass and a charge Z, the atomic number of the nucleus, essentially; I can forget about their internal dynamics. I hope this is clear. I mean, we're all physicists, so we know that there are different energy scales, and nuclear physics does not belong in this room, at the moment at least. Okay, then I have electrons, right?
And for the electrons, well, to start with, you can say I have Z_i electrons surrounding each nucleus: 14 surrounding my silicon, one surrounding my hydrogen, six surrounding my carbon. Of course, in carbon, two of them will belong to the core and four will belong to the valence. Similarly for silicon: out of 14 electrons, 10 will belong to the core and four to the valence. In hydrogen, that electron is pretty free. So I can start by removing at least the electrons in the core. If you know a little bit about chemistry, this is obvious: core electrons essentially stick to the core, and they are not relevant as far as bonding between atoms is concerned. Anyway, this is just a detail. The important thing is that I have a number of electrons in this system. And the decision to place the electrons on top of each specific atom is, of course, no longer a good way to proceed, because the atoms are no longer individual atoms in vacuum; they are now in a condensed system. So I shouldn't be talking about 14 electrons here, one electron here, six electrons here. I should be talking about the total number of electrons, the sum of all the electrons placed somewhere in the system. And in fact, what I should be looking for is the solution of the quantum mechanical problem for the electrons in the presence of Coulomb potentials centered at the positions of the nuclei. So if I'm an electron in this system, I see these potential wells due to the Coulomb attraction with the nuclei, and each electron will see all of them. Electrons are indistinguishable particles: there is no way I can tell whether an electron originally belonged to the silicon or to the hydrogen when the material was formed. So I have a number of electrons here, each one of them seeing this landscape of wells, and then I have to find the ground state of this system.
So this is a complicated problem: it's a many-body problem for the electrons. Fortunately, we have density functional theory, and I hope you'll see something about it in the coming days; I want to cut a long story short here. And you may, depending on the level of approximation, come out with a ground state, a wave function which describes the quantum behavior of your electrons, all right? A wave function which depends parametrically on the positions of the nuclei, because if I change the position of this nucleus, the position of this well will move, and therefore my ground state wave function will have to change, will have to adapt to the new landscape, to the new potential configuration. And of course, if I have a ground state wave function, I will also have a ground state energy, which will also, of course, depend on the positions of my nuclei, okay? Now, clearly, this energy is exactly the quantity I have to use here. There are only a few things missing, such as the interaction between the nuclei, the Coulomb interaction between the nuclei; that's trivial, just a Coulomb potential. But all the rest that is contained here in V is nothing but the ground state energy of this many-body system of electrons, seeing this very complex landscape of Coulomb attractions, of Coulomb wells, located at the positions of the nuclei. Now, in doing this, I have already made a very important approximation. It sounds obvious to many of you, but it is an important approximation: the fact that I've chosen the ground state for the electrons. In principle, when I do molecular dynamics, I start moving my nuclei; and let me now call them nuclei, because what I'm actually moving is the nuclei, not the electrons. The electrons move as a consequence of the nuclear motion.
When I move my nuclei, I'm assuming that the electrons adiabatically adapt to the instantaneous ground state, which I have to determine every time the nuclei move. Because when the nuclei move, these attractive potential wells also move, and therefore the ground state wave function changes. The assumption, and this is the Born-Oppenheimer approximation, or adiabatic approximation, is that the electrons follow the motion of the nuclei adiabatically, staying in their ground state. And of course, the adiabatic approximation is a result of the fact that the electronic mass is much lighter than the mass of the nuclei, so the dynamics of the electrons is extremely fast compared to the scale of the dynamics of the nuclei. By the time the nuclei undergo one vibration in a molecule, for example, the dynamics of the electrons, say, the oscillation of the phase of the wave function, is orders of magnitude faster than the vibration of the nuclei in the system. So electrons are very fast. As a consequence, I can use the Born-Oppenheimer approximation, I can say that this is actually the ground state wave function, and I can use the ground state energy of my quantum many-body electron system to determine the potential of interaction for the nuclei. Now, the consequence of this is that, if I do this, I'm only making one approximation, the Born-Oppenheimer approximation. If I accept this approximation, there is no parameter involved in my calculation; that is, the calculation is exact, the potential is exact. There is no further approximation, except the only one we introduced, the Born-Oppenheimer approximation, and, of course, before that, the assumption that the nuclei behave as classical objects.
But if I accept that the nuclei are classical objects and I accept the Born-Oppenheimer approximation, this potential, apart from the Coulomb interaction between the nuclei, is nothing but the ground state energy of the many-body electron system for that particular choice of the positions of the nuclei. Which is, of course, as you probably all know, an unsolvable problem, because solving the many-body ground state of a system of even 10 or 12 electrons is already a challenge, not to mention a system which typically has at least 1,000 electrons: if you have 100 atoms, you typically have 1,000, 2,000, or more electrons in your system. So there's no way you'll ever be able to determine this ground state exactly for a realistic system, so you'll have to make some approximations. The most important one is density functional theory. Don't ask me to go into the details of that, because it would take me another hour; I just want to flash some concepts that you will probably see later on during the school. Density functional theory is a very effective theory that allows you to recast the many-body problem into a set of independent one-electron problems, which can be solved efficiently on a computer nowadays. So it's a very powerful theory. It is in principle an exact theory, if only we knew the form of a certain functional. Unfortunately, we don't know the form of the functional, so there are several approximations to it, and at the end of the day it is an approximate theory, because we don't know what the exact functional is. Anyway, to cut a long story short: you can calculate the total energy of a system within some approximations, and that total energy enters here; it's the potential that you're going to use for the molecular dynamics of interest.
This, if you're able to do it, means of course you can completely forget about all those kinds of approximations; you just use that, and it's much more accurate than any functional-form expansion of your potential in terms of two- or three- or n-body terms. Let me briefly sketch some challenges in doing this molecular dynamics with an ab initio potential. Now obviously, sorry, I should have kept this drawing. I have these nuclei here, and suppose I'm able to determine the wave function; the wave function will be something like that, right? The electrons will of course accumulate primarily close to the nuclei, because this is where the attractive potential is. So the electrons will accumulate close to the nuclei, and I have some approximate wave function for the ground state which, as I said, is parametric, so it depends parametrically on the values of the positions, and as a consequence so does the energy. Consider this in the framework of molecular dynamics. Remember the integration: every time we integrate a molecular dynamics step, we are extracting positions at time t plus delta t, which means after delta t this particle will be slightly displaced, this one will be slightly displaced, this one will be slightly displaced, and so on and so forth. Each has its own displacement due to the fact that we've integrated the dynamics based on the force, which I calculated using now the ab initio potential. 
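The integration step just described can be sketched as a velocity Verlet update where the forces come from the electronic-structure solve. In the sketch below, `ab_initio_forces` is a hypothetical stand-in for that expensive solve; here it is replaced by a toy harmonic force so the code actually runs.

```python
import numpy as np

# Sketch of Born-Oppenheimer MD stepping (velocity Verlet).  The function
# `ab_initio_forces` stands in for the electronic-structure calculation.

def ab_initio_forces(positions):
    # Placeholder: real AIMD would minimize the electronic energy for these
    # nuclear positions and return -dE0/dR.  Toy harmonic force instead:
    return -positions

def verlet_step(positions, velocities, forces, masses, dt):
    velocities = velocities + 0.5 * dt * forces / masses
    positions = positions + dt * velocities        # small displacement per step
    forces = ab_initio_forces(positions)           # re-solve at the new positions
    velocities = velocities + 0.5 * dt * forces / masses
    return positions, velocities, forces

# A few steps for a 1D toy particle: E = v^2/2 + x^2/2 should stay near 0.5
x = np.array([1.0]); v = np.array([0.0]); m = np.array([1.0])
f = ab_initio_forces(x)
for _ in range(1000):
    x, v, f = verlet_step(x, v, f, m, dt=0.01)
print("energy drift:", abs(0.5 * v[0] ** 2 + 0.5 * x[0] ** 2 - 0.5))
```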
But the bottom line is that now I have particles sitting at different positions, and of course I have to recalculate my ground state, because now the nuclei are at different positions; I have to recalculate my ground state and recalculate my ground state energy. So you immediately see that I have to repeat this 100,000 times or a million times, and this is becoming a very serious problem, particularly if the system of interest contains hundreds of particles: I first have to solve the quantum mechanical problem for several thousand electrons, and not only that, but at every step I have to solve it again, and I have to repeat this exercise a million times. So ab initio molecular dynamics is indeed quite expensive in computational terms. There are some tricks, however, and the most obvious one is that if the atoms have changed their positions by only a little displacement, and typically the displacement has to be small, because otherwise you wouldn't be able to integrate your molecular dynamics well, then the potential seen by the electrons with respect to the previous step is not going to change dramatically; it's just slightly changed. So the previous wave function is a very good guess for the solution of the new problem. That is, if we use standard methods to find the ground state wave function. I mean, there are several methods, but most of them rely on an initial guess of what the wave function could be; for example, if you use a variational method, you need to start from an initial wave function. Now the previous wave function obviously turns out to be extremely close, if of course the displacement is small, okay? So it is difficult, but it is not as difficult as you may imagine. For example, it is less difficult than if you were trying to do Monte Carlo with an ab initio potential, because in Monte Carlo the moves are much larger; typically the system changes quite substantially from one step to the next in a Monte Carlo procedure. 
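A minimal sketch of why warm-starting helps (a toy model of mine: the "electronic energy" is treated as a quadratic bowl in a finite-dimensional "Hilbert space" whose minimum drifts slightly at each MD step, and we count gradient-descent iterations to reconverge from a cold start versus from the previous solution):

```python
import numpy as np

# Toy illustration: reusing the previous "wave function" as the initial guess
# takes fewer minimization steps than starting from scratch.

rng = np.random.default_rng(0)
dim = 50

def minimize(psi, psi_min, lr=0.2, tol=1e-8):
    """Gradient descent on E(psi) = |psi - psi_min|^2; returns solution, steps."""
    steps = 0
    while np.linalg.norm(psi - psi_min) > tol:
        psi = psi - lr * 2.0 * (psi - psi_min)   # gradient of the quadratic bowl
        steps += 1
    return psi, steps

psi_min = rng.normal(size=dim)                   # "ground state" at the current R
psi, cold_steps = minimize(np.zeros(dim), psi_min)     # cold start

psi_min = psi_min + 1e-3 * rng.normal(size=dim)  # nuclei moved slightly
_, warm_steps = minimize(psi, psi_min)           # warm start from old solution

print("cold start:", cold_steps, "iterations; warm start:", warm_steps)
```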
In molecular dynamics you are always assured that the positions are very close, because you are integrating in real time; you are solving a second-order differential equation. So I am just also highlighting some of the differences between molecular dynamics and Monte Carlo when you are using an ab initio potential: molecular dynamics is sometimes better because it allows a quicker estimation of the energy after a time step, as opposed to Monte Carlo, where the step is larger. Which brings me to a very interesting method that was developed more than 30 years ago here in Trieste, by the way, to make this estimate even faster. Let me try to briefly describe this method; it is called the Car-Parrinello method. Let me first draw this curve here. I fix the positions of the nuclei, the positions of the nuclei are fixed, and suppose that I now calculate the energy of my quantum mechanical state as a function of the wave function itself: I can write E[psi] = <psi| H(R) |psi>, okay? If this is not yet the ground state, that is, if this is not yet the ground state wave function, this energy will depend on psi, and I can draw it. Well, this is a Hilbert space; it is not just one coordinate, it is infinitely many coordinates, but let's try to condense everything into a single one, right? Well, this will have typically a parabolic form. Well, it is parabolic if you only have a psi and psi here; unfortunately, in density functional theory the Hamiltonian contains the density, alright, and the density is a function of the wave functions, so in density functional theory this energy as a function of the wave function is not just a quadratic form, it's slightly different from a strictly quadratic form. But let me assume that to first approximation it has a parabolic form, and that you in some way are able to determine the ground state. I'm now moving in the space of wave functions; this is the expectation value on the wave function of the Hamiltonian that I want to work with, of which I want to determine the ground state, so 
this is the Hamiltonian that contains all these wells at the positions of the nuclei, with the nuclei fixed. I determine the ground state wave function in some way, right? I can use a variational principle; I can even in principle use simple methods like conjugate gradients. I mean, some solvers for quantum problems use conjugate gradients: they start from a given guess of the wave function and then they go down in steps until they reach the ground state, okay? So in other words, there are plenty of methods to optimize a quantum problem; I don't want to go into the details of that. I just want to mention that they all typically start from a guess of the wave function, and then they eventually bring you down to the ground state. And the energy as a function of the wave function is the expectation value of the Hamiltonian in a Hilbert space, right? So this is not just one dimension, it's a Hilbert space. Let me now go back to molecular dynamics. I do one step, my positions change, these positions are going to change slightly, the Hamiltonian changes, and this energy as a function of the wave functions will also have a slightly different shape; not this one, but slightly different. Let me now introduce, I hope you can see that, a third axis which points inside the blackboard, which is the axis of the nuclear positions. Again I'm condensing 3N coordinates into one just for the sake of clarity, but they are orthogonal, okay? They go inside the blackboard. Moving by one step means that I'm moving one step in this direction, inside the blackboard. So, I'm now going to use a different color, otherwise you're going to miss it. So now I'm at a different value of R; I moved my particles, my 3N coordinates. Well, at this point I will have a slightly different potential as a function of psi, because the Hamiltonian has slightly changed, so it will be something like this, for example, and there will be another minimum, and that minimum will not be exactly the same as the one 
you found before; there will be a slight difference in Hilbert space. This point would be the one if the wave function were exactly the same, just translating everything by one delta R, but it's not correct that this is the ground state, because the Hamiltonian has slightly changed, so the minimum has also changed a little bit; the wave function has to change, so it will be slightly different in Hilbert space. Fine. But as we just discussed, if I start from the previous wave function, that will give me a very good estimate of the minimum, because the parabola changes only a little bit. So if I use the previous ground state as the initial point to evaluate the new minimum, I'm going to find the new minimum in a very short time, because I'm very close to the new minimum if I moved my R by only a small delta R. The new wave function means the solution of this problem with the new value of R. And of course I can continue this, and I can continue forever: I can evolve in the direction of R, which is my molecular dynamics trajectory, and every time I will be able to identify the minimum. So this curve here will be the curve that gives me the set of instantaneous minima of the wave function when I move my trajectory in time, that is, when I evolve my R. Now, Born-Oppenheimer dynamics is precisely the set of these points: for every value of R I am in the ground state, and I always choose as the wave function the one that corresponds to the minimum of this parabola, the ground state for that specific value of the positions of the particles. Now, the idea of Car and Parrinello was to relax this constraint and let the wave function oscillate about the minimum with a fictitious dynamics. In the Car-Parrinello method, instead of finding every time the minimum of your potential, you allow your electrons, that is, the wave functions, to oscillate and to follow the ground state adiabatically. Now let me try to show this with a classical analog. You have a container, a cup, 
and you have a ball in the middle, and you want to make sure that your ball is always at the bottom of the cup when you move the container. Suppose you have a cup, this container. Now, in principle, when you move the cup, if the ball is very fast, the ball will always be able to adapt to the minimum: this is the Born-Oppenheimer approximation. You have a fast degree of freedom that is coupled to a slow degree of freedom; if the slow degree of freedom moves slowly enough, the fast degree of freedom will always be able to remain at the bottom of this container. But that means that every time you move the cup, the container, you have to wait until your particle manages, with a few oscillations, to get down to the bottom. In fact, if you give the particle a little kick, you can just drag your cup around, and the particle will always be more or less oscillating about the ground state. The difference with this approach is that you don't have to wait for the little particle to reach the minimum every time; you can just drag your cup, and as long as the oscillation of the fast particle is fast, you don't need to let the small particle reach the bottom of the cup every time. Okay, so this is exactly the spirit of the Car-Parrinello approximation: you never let the electrons go to the ground state, but you let them oscillate, while this big container, which is the energy surface as a function of the electronic degrees of freedom, moves when you change the positions of the nuclei. You just let the electrons go around with their fast dynamics, and they will follow adiabatically, this is the point, the ground state. They will never be exactly in the ground state, but on average they will, because if this is oscillating, and it is oscillating fast, then on average the electrons will always be in the ground state; never instantaneously, but on average. Now the advantage of this, of course, is that you don't have to wait for the fast particle to get to the 
minimum every time you move the cup; you just let the particle move dynamically. Okay, which means this method can actually be much faster than standard methods in which you impose the Born-Oppenheimer approximation by letting the wave functions go to the minimum. Alright, this is just to give you an idea; this is not the only method, but it is one of the most powerful ones to speed up an ab initio molecular dynamics calculation, that is, to handle the wave functions in order to be able to follow the adiabatic ground state in a very efficient way, which is what you want, particularly from a computational point of view. Oops, I have about 30 minutes, is that right? Yeah, so I would like to conclude here the theoretical lecture, if you don't mind, and I will just show you a few examples of my own research, exploring planetary interiors. It's going to be very light, so don't worry too much. Can I have the screens? Okay, good, I think it was just a connection problem. Okay, so this is just going to be a brief journey into the center of planets with ab initio molecular dynamics. I'm starting with this slide, sorry, this is a talk I gave recently at the African University of Science and Technology, I'm starting with this slide just to give you a feeling, I mean something you certainly share, about how little we know about even the Earth's interior. We know very little, to the extent that, of course, science fiction has been extremely prolific in coming up with ideas about what could be down there. I actually like this one because it's recommended for adult entertainment, and I don't know why. All right, so this is what we know about the interiors of planets. Let me also mention that there are thousands of new planets that have been discovered recently which do not belong to 
the solar system; in the last 10 years there's been a race in the discovery of new so-called exoplanets. We used to have a statistics of just the 8 or 9 planets of the solar system, which was very limited, and now we are actually able to understand much more about the formation, structure, and properties of planets, because we are discovering almost one new planet per day nowadays belonging to other solar systems, thanks to improved observational techniques. But anyway, these are the three categories of planets that you find in the solar system. The Earth-like planets are solid; they typically have a core made of iron and a mantle made of silicates and oxides. I'm sure you're familiar with this, but I just want to sketch some concepts. And then there are the giant planets, Jupiter and Saturn, the quasi-stars; they are primarily made of hydrogen, and hydrogen of course is molecular at ambient conditions, but these are extreme pressures and temperatures, so the question is what's going to happen to hydrogen at those conditions. And there may or may not be a core; we don't know. In fact, these kinds of simulations are also helpful because they help constrain densities at those conditions and help planetologists determine whether there is a core, for example; we don't know whether there is a core or not. This is to say, we know very little from direct information about the interiors of these planets. The closest approach was the Voyager spacecraft, at least for Saturn and Jupiter; there are new probes being sent, but anyway, they only scratch the surface; they only see the first 10 or 20 kilometers under the atmosphere of these planets. There's no way they can penetrate further. For the Earth, of course, we know much more, although actually the deepest man-made hole is only 15 kilometers. So information about the Earth's interior is not direct, it's indirect; it comes primarily from seismic waves. We know a lot about the Earth thanks to seismic waves. And needless to say, 
there's no seismicity in planets like Neptune and Jupiter, because they are fluid, so we're really left with very little to try and understand what these interiors are made of, other than global densities. For example, we know that Neptune and Uranus are much denser than Jupiter and Saturn, implying that their composition must contain not only hydrogen but in fact large amounts of water, methane, and ammonia, for example, molecules that were there when the solar system formed. What makes this understanding complicated is also the fact that when you go deep inside these planets, the temperature and pressure increase dramatically. For the Earth, we know that the pressure at the center is about 3,600,000 atmospheres, 3.6 megabars. We know that pressure very well, even though we have never been there, because it's determined by gravitational forces: as long as you know the density of the Earth and the mass of the Earth, you can easily determine the pressure at the center, and we know it with an uncertainty of less than 1%; it's 361 gigapascals, so 3.61 megabars. We know much less about the temperature inside the Earth, which I actually found surprising when I learned it the first time: the uncertainty on the temperature at the center of the Earth is about 1,000 Kelvin, so it could be 5,000, it could be 6,000, it could be 7,000; there's no way we can determine that, unfortunately, from the surface with any observation. Even less is known about the other planets: the pressure at the center is reasonably well known, but temperatures are much less constrained. For Jupiter we are talking about 30 megabars, so 30 million atmospheres, and temperatures in the range of 20,000 to 30,000 Kelvin, and compositions are uncertain. But again, we think there is water, methane, and ammonia; we are quite sure there are oxides and silicates; we are quite sure there is iron in the Earth, although it could be mixed with nickel. As a matter of fact, we are quite sure about 
densities: hydrogen being the lightest element, the only density that is compatible with Jupiter and Saturn is 90% hydrogen and 10% helium; there is no other possibility given the... So the question, of course, is: given perhaps the compositions, how can we understand the properties of these materials, hydrogen, water, methane, iron, at those conditions? These are extreme conditions; we are talking about millions of atmospheres and several thousand Kelvin. Are there phase transitions, are there phase changes, are they liquid, are they solid? There is nothing that can tell us in principle, by looking at planets, what the properties of these materials are. Let me just give you a very simple example. You are all familiar with something that changes structure and properties under pressure: this is of course carbon. Carbon goes from graphite, which is a layered substance... and I am sure... how many of you know what the most stable phase of carbon at ambient conditions is, graphite or diamond? Graphite? Anyone for diamond? No? Good, it is actually graphite. In fact, the reason why it is graphite is obvious: it is because graphite is very abundant and diamond is not abundant on the Earth's surface. So thermodynamically, if diamond were the most stable, then it would be much more abundant than it is. In fact, diamond exists on the Earth's surface, and you can find it in mines, because it is produced at the pressures and temperatures where it becomes thermodynamically stable, at about 60 kilometers depth, and then it is brought up to the surface by convection on geological scales. So diamond actually is not stable at ambient conditions, but it becomes stable as soon as you consider conditions that are only 60 or 70 kilometers deep inside the Earth. By the way, just out of curiosity, when this was discovered, when people realized that graphite was stable but diamond could be stabilized by pressure and temperature, there was this huge race across the world to try and synthesize diamonds in the lab by squeezing graphite inside a giant press, and 
General Electric won this race in '51 and got the patent for this, and actually General Electric became a very large company primarily thanks to this patent from '51. Now 99% of the diamonds at the Earth's surface are artificial diamonds; they are produced with high-pressure, high-temperature synthesis. Of course they are very small; the ones that you find in gems, in jewels, are natural diamonds, but they are very rare, and this is why they cost a lot. The other ones are typically small and primarily used in drills, but again, the very large majority of diamonds that we know are actually artificial diamonds nowadays. So there is interest in high-pressure science from the point of view of materials science, if we could synthesize new materials, and there is interest also in fundamental physics. I mean, you are all physicists, and there is an extremely interesting problem which is the determination of the phase diagram of hydrogen. It turns out that hydrogen, as I said before, is molecular at ambient conditions, but eventually it should metallize, and this is a very fundamental process, the simplest element in the periodic table turning into a metal, and that is something that has not yet been achieved experimentally. I will show you some pictures later on about this problem. So how do we achieve high pressures and temperatures in the lab, before we even start discussing computational studies? Well, there are two main methods. One is shock waves: you shoot a bullet at your sample, you put the sample here, and you generate a shock wave, and this shock wave typically reaches pressures and temperatures which are comparable to the ones that you can find in planetary interiors. The problem with this technique, of course, is that every shot essentially blows up the entire apparatus, so it is extremely expensive, and only a few labs in the world can afford this kind of experiment; the US is leading, of course, and Russia as well. A much more affordable technique, at least from the 
financial point of view, in terms of cost, is the diamond anvil cell technique. You take two diamonds; diamonds are expensive, but if you take small ones... a diamond is the hardest known material, so you can manage to place your sample between the two tips of the diamonds, you squeeze the two tips, you put a gasket around it, and this way you can actually reach pressures of the order of one megabar, one million atmospheres, in a controlled way, and you can keep the sample there for a long time, for weeks or months. The difficulty, of course, is that it is very difficult to do experiments, because you actually have to get to the sample; fortunately diamond is transparent, so you can get in with lasers, but it is very difficult to do other kinds of experiments. So with these techniques you can easily get to these pressures and temperatures with shock waves; with the diamond anvil cell we are still limited to this range of pressure and temperature, very far from the center of the Earth. Although, at low temperature, actually, at the conference last week they reported the first observation of room-temperature compression up to one terapascal; it was really a big achievement. So at low temperature you can actually go much higher than that, but you are limited to low temperature; that is the record pressure so far. So computationally, what do you do? I mean, this is what I just explained to you a minute ago: you do molecular dynamics, because the system is hot; at very high temperatures, typically above 1,000 Kelvin, there are no quantum effects for the nuclei. You solve the Schrodinger equation for the electrons with the approximations I discussed before, and the ground state energy you extract is the one you are supposed to use as the potential. This is exactly what I was discussing at the blackboard before. And again, just to repeat what I said before, ab initio molecular dynamics, which is what we do regularly, is classical molecular dynamics, because it's Newton's equations on the potential energy surface generated by 
electrons in their instantaneous ground state, and I don't need to discuss this further because we already discussed it at the blackboard a minute ago. So this is what you see when you open your trajectories and visualize them. By comparison, I mean, you can also do this with classical molecular dynamics, but the advantage of ab initio molecular dynamics is that you can see chemical reactions, because this is now chemically accurate: you are not bound to a user-specified potential of interaction, which typically keeps the molecules intact, and you allow your system to really explore the phase space in a true way by exploiting the power of the ab initio description of your potential. And you see, for example, several chemical reactions here. This is actually a mixture of water and methane, the main components of Uranus and Neptune, and this is a pressure typical of the interior of Neptune and Uranus, and what you... yes, I'm going to describe this in a minute; this was just to give an example. So I'm going to start with water and methane at planetary conditions, again Uranus and Neptune; water is the most abundant component in terms of molar fractions. This is a phase diagram of water which you've probably never seen, because it extends to pressures and temperatures that are not the ones that you are familiar with; I'm sure you're familiar with this tiny corner of the pressure-temperature diagram. Here we are expanding, through simulations, the knowledge of the phase diagram; pressure is here, and this is temperature in thousands of Kelvin. There are solid phases, of course, and interestingly there is a phase, which we found in simulations, that has this property of being superionic, and then it melts, of course. The melt is molecular at mild conditions, then it ionizes, because water starts to become an ionic fluid: it loses protons, which start to move around. And then finally there is even metallization of water. Water is of course an insulator at standard conditions, with a very large gap. This metallization has not been observed 
yet, but it must exist, because otherwise there would be nothing that explains the magnetic field of Uranus and Neptune, which has to be generated by a metal, by a conductor, and the only possibility for a conductor, at least based on simulation, is water metallization. Oh, we're talking about 7,000 Kelvin; you would have to bring it down to ambient temperatures, and there it is definitely not metallic; actually, the gap increases when you compress it at low temperature. Okay, so it's only metallic here; you cool it down and it becomes an insulator with a 10-12 electron volt gap, so no chance of having a superconductor, unfortunately. Sure... no, that's hydrogen sulfide; that was actually announced at the conference last week. Yes, it's hydrogen sulfide, H3S. But I'll come back to that, because that's an interesting point: the superconductivity there is driven by hydrogen, not by water; it's really hydrogen. So let me just show you briefly the superionic phase that we found in simulations. Superionic phases are rare, but they exist in binary systems. In this particular case, the oxygen sublattice is still a crystal structure, and the protons diffuse freely. So the molecule breaks, the oxygen atoms crystallize in a BCC crystal structure, and the protons start to diffuse as if they were in a fluid; I mean, it's completely fluid as far as the protons are concerned. What you see here is the dynamics of a single proton moving around; there would be of course 2n more, where n is the number of blue balls, which we don't show because otherwise it would be impossible to visualize the system. And you see that the proton starts originally here, and then it starts to diffuse around and move around the system. So it's a mixed phase of matter, essentially. Just quickly, to flash some nice pictures: of course, as I said, Uranus and Neptune in addition to water also have methane, and planetologists say 
that there is something in there that originally was methane. We don't know how methane looks when you bring it to those conditions, precisely like nobody knew how water would look when you bring water to those conditions. And there was actually an interesting study with shock waves, where they had seen a kink in the equation of state of methane, and they guessed that methane could dissociate at the conditions of Uranus and Neptune. Now, if methane dissociates, it gives carbon and hydrogen, and carbon at those conditions forms diamond, and if diamonds form, they would actually precipitate, because they are denser than hydrogen and the other substances. So the consequence of this is that Neptune and Uranus could contain a giant mine of diamonds in their core. It was actually not completely unrealistic. Fortunately or unfortunately, we made some more refined calculations with our molecular dynamics, and what we find is that methane dissociates, but instead of dissociating fully into carbon and hydrogen, it actually forms some longer hydrocarbons like ethane, propane, and butane. So the picture is a bit richer than originally thought, although the possibility that there are diamonds in the core of Uranus and Neptune, that is, that the core of these planets might be a giant sphere made of diamond together with other substances, is actually quite high; it is not completely out of the picture. Since we are short of time, let me also briefly discuss mixtures. In fact, so far we have looked at the two systems, methane and water, separately, but of course inside the planet these systems are mixed, a mixed fluid, so we thought we should also look at mixtures, not just at the two systems separately. In fact, there are several reasons why mixtures are interesting, not only from the planetary point of view. If you think about it, methane and water are the prototypical example of hydrophobic interactions: methane does not like to mix with water at ambient conditions, so why should it mix inside the planets? There 
are also other reasons why this is interesting; for example, another reason is that at low temperature these kinds of mixtures form very interesting compounds known as methane hydrates, which are actually believed to be one of the future sources of methane, because they are very abundant in the deep ocean. So we carried out simulations on the mixtures, and here is now again the same movie I was showing you before. Let me just focus on one aspect here: these colors here are the protons; the white and blue balls are the protons, this is carbon, and this is water, oxygen. And the color of a proton is blue if that proton started originally from a methane molecule, and white if it originated from a water molecule at the beginning of the simulation. So what you can see here is that the protons have completely mixed: even though the molecules are still vaguely present, at least as chemical entities, they have been exchanging their protons continuously, to the extent that now the protons are completely mixed; there are some white ones and blue ones in this methane molecule, for example. So the protons have really been going around and jumping from one molecule to the other; on average, however, the molecules are still vaguely there, both water and methane. The interesting thing here is that there is no formation of longer hydrocarbons, so the formation of diamond, which is something that has to start from the formation of carbon-carbon bonds, apparently is not predicted when you put methane into a mixture. So the presence of diamond is actually less likely than we thought, after we did this simulation. Let me just quickly jump to Jupiter and Saturn. They are primarily made, as I said, of hydrogen, so let me now focus briefly on hydrogen. As you probably know, hydrogen at ambient conditions is a very nice molecular system; H2 is a very stable molecule. But it's actually one of the first, if not the first... let me go back to the history here. The first quantum mechanical 
calculation of an extended system... Bloch's theorem had just been introduced, quantum mechanics had just been introduced; we're talking about '35. The first calculation for an extended system in quantum mechanics was done by Eugene Wigner and Huntington, in Princeton at the time, and it involved the possibility that by changing the volume, hydrogen, which of course they chose because it was the simplest possible system, with just one electron, would transform from a molecular phase into a monatomic phase. And they actually came out with a predicted pressure for that transition: it was 35 kilobars, a very low pressure, something you can easily achieve nowadays in the lab, if of course that had been a correct calculation, which it was not, by the way. Sorry for this silly slide, but I mean, this is equivalent to saying that hydrogen, which at ambient conditions actually behaves like a halogen (I don't know why Mendeleev put it on the left side of the periodic table; I would have put it on the right side, which is also a correct position, because it has one electron less than helium, and definitely its properties at ambient conditions are much closer to those of the halogens than to those of the alkalis), as a function of pressure, Wigner and Huntington found, essentially starts behaving like a standard alkali metal. Now, the truth of this statement has very important implications for planetary science, because if you consider now Jupiter and Saturn, of course at the beginning, in the atmosphere, there is a molecular phase, but when you go deep inside the planet, you're definitely going to encounter a transition in which molecular hydrogen breaks down and forms a monatomic fluid, precisely like sodium, potassium, and all the alkalis. Now, there are two questions associated with that. One is at which depth this occurs, because the depth at which it occurs determines where the magnetic field that is observed in Jupiter and Saturn comes from. But even more important is whether 
this transition is or is not a first-order phase transition. In the solid, of course, a transition from a molecular state to a monatomic state has to be first order, but in the fluid, well, you don't know. There are examples of first-order phase transitions in the fluid state, the gas to liquid transition for example, but they are very rare; as far as I know that is the only one, gas to liquid. However, if this transition took place in an abrupt way, as a first-order phase transition, that would correspond to a sharp density change, and that would have enormous implications for our understanding of these planets. Like for the Earth, there would be a density jump, there would be reflection of waves, there would be a discontinuity in thermal conduction; the structure of the planet would be layered instead of being a uniform body, and that of course would change completely our understanding of these giant planets. So the question is whether this takes place in a first-order way or not; and it is a dramatic change, because you are dissociating the molecule.

So we carried out simulations on this some time ago. This is the pair correlation function for increasing pressure. I didn't mention the pair correlation function before; I should have. Anyway, the pair correlation function is a very powerful tool to determine the structure of your system when you do molecular dynamics, particularly in the fluid. It is essentially the probability that, if you sit on an atom, you find another particle at a distance r from yourself. So if you sit on one proton, on one hydrogen atom, what this is telling us is that there is a very high probability that there is another proton at a distance of about 0.75 angstroms from yourself; that is exactly the molecular distance. In fact, in this system here there is probability one that you have another proton at that distance; in other words, the system is in a fully molecular state, because every atom has probability one to have a partner at the distance corresponding to the molecular distance from itself. Beyond that, you will find all the other atoms, all the other molecules, at longer distances. But the interesting thing is that when you compress this molecular fluid, at some point, in a rather abrupt way (in fact there is a question whether this is abrupt enough), in only a few tens of GPa, this probability becomes a rather flat, constant one, implying that there is some probability to find a partner at any given distance starting from about 0.8, but there is no specific peak anymore: the molecular peak has completely disappeared. It is equally probable to find a partner proton at a distance of 0.8, 1, 1.2, 1.4, 2, or whatever. The molecule has completely lost its identity. In fact, if you take a snapshot, you find these kinds of worm-like chains percolating through the system; the molecular entities have completely disappeared. Now, of course, this is first order. The only problem with this simulation is that it was carried out at a temperature much below the temperature of Jupiter and Saturn; there are technical problems in reaching those temperatures. So there is evidence that the transition is first order, but the evidence is only at low temperature, for the time being.

I have 5 minutes, so let me go very quickly, also because this touches on the issue of potentials. Now, the Earth. I mentioned before the temperature problem: we don't know the temperature of the Earth, at least in the core. But if you think about it, the core is divided into two parts: the inner part is solid and the outer part is liquid, and they are both iron, liquid outside and solid inside. I think you are all familiar with this. Now, if you take the interface between the outer core and the inner core, this is something that has been there at thermodynamic equilibrium for millions of years, so the temperature there must be exactly the melting temperature of iron at that pressure: 330 gigapascals, 3.3 megabar. So if we manage to do an experiment here at the surface of the Earth, or if we manage to
simulate it on a computer, and determine the melting temperature of iron at 3.3 megabar, we would know what the temperature is at the center of the Earth. Well, there are corrections due to the fact that the outer core is not pure iron; there are impurities, so there is a little bit of a depression of the melting point. But the statement is correct: if you are able to determine the melting temperature of pure iron at 3.3 megabar, you will be able to tell what the temperature at the center of the Earth is.

So we tried to do molecular dynamics simulations. I just want to mention one thing; let me skip this and go to this one. This is actually another system, but determining a melting temperature turns out to be extremely difficult, because it requires very large unit cells and, as we discussed before, it is a change of phase, so it is not something you can easily simulate with a simple molecular dynamics simulation. It really requires very large sampling of the phase space, something you can typically hardly do with ab initio molecular dynamics. In fact, it is almost impossible to determine the melting temperature of a system with ab initio molecular dynamics. This is a problem that we faced in a lot of other cases, including for example the study of liquid silica, in which we performed this simple computational experiment of determining the equation of state both with a standard classical potential (a pair potential) and with ab initio methods. You can also see the noise in the ab initio calculation, which means the calculation was actually very heavy from a computational point of view; and here are the experimental data. Unfortunately, with the classical calculation, the red curve, you can achieve statistical accuracy, but of course from a chemical point of view the pair potential is not correct; it is not giving the correct interactions. Ab initio molecular dynamics is chemically correct, but there is not enough time in a simulation to sample the phase space efficiently, and therefore to determine a property as simple as, in this case, the volume of the fluid. There is simply not enough time. Silica is particularly nasty because its viscosity is very high, so in a standard molecular dynamics simulation you only see a few jumps of the atoms, not much more.

So a few years ago we came out with a technique that essentially uses the information from ab initio trajectories to refine the parameters of two-, three-, and four-body embedded atom models. Embedded atom models, the ones that I showed you before, contain a large number of parameters; those parameters are typically extracted based on comparison with experiments. Instead of doing that, we use the ab initio molecular dynamics trajectory to generate the best possible potential at a given pressure and temperature, based on information extracted from the ab initio calculation. This is just to show you what we get in terms of agreement with experiments with this new potential, which is a classical one but whose parameters are fitted to the ab initio data. So it merges into a single potential the statistical accuracy of a classical potential with the chemical accuracy of the ab initio trajectory. Anyway, to cut a long story short, this is what we get for iron: 5,400 Kelvin, whatever that means. And with this, I think... I just wanted to thank you. Oh, sorry, I skipped this. I just wanted to thank you, and if there are any questions I would be happy to answer.
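[Editor's note: the pair correlation function described in the lecture is straightforward to compute from simulation snapshots. Below is a minimal NumPy sketch, not the lecturer's actual code; the cubic periodic box, particle count, and bin width are made-up toy values, and random uniform points stand in for a real MD configuration.]

```python
import numpy as np

def pair_correlation(positions, box, nbins=100):
    """Radial distribution function g(r) from one configuration.

    positions: (N, 3) array of coordinates in a cubic periodic box
    box: edge length of the box; g(r) is computed up to box/2,
         the limit of validity of the minimum-image convention.
    """
    n = len(positions)
    r_max = box / 2.0
    # All pair separation vectors, wrapped by the minimum-image convention.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]          # count each pair once
    hist, edges = np.histogram(dist[dist < r_max], bins=nbins, range=(0.0, r_max))
    # Normalize by the expected pair count in each spherical shell
    # for an ideal gas at the same density, so g(r) -> 1 at large r.
    rho = n / box ** 3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell_vol * n / 2.0
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, hist / ideal

# Toy usage: for uncorrelated random points g(r) should fluctuate around 1,
# with no molecular peak such as the 0.75-angstrom one discussed above.
rng = np.random.default_rng(0)
r, g = pair_correlation(rng.uniform(0.0, 10.0, size=(500, 3)), box=10.0)
```

In a real analysis one would average the histogram over many snapshots of the trajectory; the sharp peak at the bond length, and its disappearance under compression, would then show up exactly as described in the lecture.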
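[Editor's note: the potential-refinement technique mentioned in the lecture fits the many parameters of two-, three-, and four-body embedded atom models to ab initio trajectories; the details of that scheme are not given here. The toy sketch below only illustrates the core idea, force matching: choosing classical-potential parameters by least-squares fitting to reference forces. The data are synthesized from a Lennard-Jones form purely to stand in for ab initio forces, and the inverse-power basis is an illustrative choice, not the actual model.]

```python
import numpy as np

# Stand-in "ab initio" reference data: pair distances and forces,
# synthesized here from a Lennard-Jones potential plus small noise.
rng = np.random.default_rng(1)
r = rng.uniform(0.9, 2.5, size=400)
eps, sigma = 1.0, 1.0
f_ref = 24 * eps * (2 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma
f_ref += rng.normal(0.0, 0.01, size=r.shape)

# Classical model: pair force expanded in inverse-power basis functions.
# Because the model is linear in its parameters, force matching reduces
# to an ordinary linear least-squares problem.
powers = [4, 7, 10, 13]
A = np.stack([r ** -p for p in powers], axis=1)
coeffs, *_ = np.linalg.lstsq(A, f_ref, rcond=None)

# Quality of the fit: RMS force error of the refined classical potential
# against the reference forces.
f_fit = A @ coeffs
rms = np.sqrt(np.mean((f_fit - f_ref) ** 2))
```

The payoff is exactly the one stated in the lecture: once the cheap classical form reproduces the expensive reference forces, it can be run long enough to reach the statistical accuracy that ab initio molecular dynamics cannot afford.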