Okay, so let's change the topic a little bit. My name is Lily; I'm a postdoc in the condensed matter group at ICTP, working with Sandro Scandolo. Today I'm going to show you the phase diagram of iron up to Earth's-core conditions. Before I get to the neural network potential I developed for iron, let me give a very brief introduction to the Earth's interior and explain why we need to care about iron. Every year there are many earthquakes all over the world. On the one hand they cause harm to human life and property; on the other hand they bring invaluable information about the Earth's interior. From seismic observations we obtain the seismic velocities as a function of depth, and based on the discontinuities in this profile we can divide the Earth into three layers: the crust, the mantle, and the core. The core can be further divided into the outer core and the inner core. The outer core's shear velocity is zero, indicating that it is in a liquid phase; the inner core's shear velocity is nonzero, suggesting that it is in a solid phase. However, geophysical studies cannot tell us the major composition of the Earth's interior. For that we have to use geochemical studies, and we now know that the inner core is made mostly of iron, together with nickel, silicon, oxygen, and sulfur. However, neither kind of study can tell us which phase of iron is the most thermodynamically favorable. There are several candidates, such as the bcc, hcp, and fcc phases. For this we use density-functional-theory-based methods, because it is clearly very hard to perform diamond anvil cell experiments combined with X-ray diffraction at inner-core conditions, and the available experimental data are not very consistent with each other.
The last two decades have seen great advances in what we call computational mineral physics, a field that focuses on the physical properties of the minerals relevant to the Earth's interior. This is exemplified by two studies published in the early 2000s on the melting curve of iron at inner-core conditions: they used only 200 atoms, and 200 atoms were actually enough to yield an accurate estimate of the melting temperature. So why do we need a machine learning potential? This study was really inspired by previous work by Belonoshko and coworkers. They performed ab initio molecular dynamics simulations in the NPT ensemble at 360 GPa and 7000 K, and they found that if the simulation has fewer than 500 atoms, the bcc phase is no longer stable: it spontaneously transforms into the hcp phase. However, with more than 1000 atoms, the bcc phase is stable. So clearly the stability of the bcc phase depends on how many atoms are used in the simulation, and you might wonder whether the results are converged with respect to the number of atoms. We cannot answer that with ab initio molecular dynamics alone, because it is too expensive, so we use machine learning potentials. We chose the DeePMD-kit package to develop ours. I just want to mention one point about what makes DeePMD-kit different from other packages: its structural descriptor, which you can see in figure (b). If you want more details, you can read the original paper by Linfeng Zhang and the more recent paper that gives an overall review.
To generate a neural network potential, the first question is how to get the training data set. In this study we follow an active-learning scheme. Starting from several initial configurations, we use DeePMD-kit to train four neural network potentials that share the same hyperparameters but have different random seeds. We use these four potentials to run MD simulations, and based on an error indicator, the maximum deviation among the forces predicted by the four potentials, we choose several configurations. These configurations are then labeled with DFT calculations, and the DFT-labeled structures are incorporated to retrain the four potentials. We repeat this whole process until the fraction of selected structures falls below around 10%. In this study we want to cover the different iron phases, including bcc, hcp, fcc, and the liquid phase, and, considering the extreme conditions, pressure and temperature conditions ranging from about 4000 K and 75 GPa to around 7000 K and 450 GPa. One point that needs special attention is that, because we work on a metallic system at high temperature, we really have to consider thermal electronic excitations. That means we use the Mermin extension of density functional theory, also called finite-temperature DFT, in which the free-energy functional is extended to include the electronic entropy. After the training data set is generated, we retrain a single neural network potential; the final training root-mean-square errors are about 0.5 GPa for the pressure, about 5 meV/atom for the energy, and about 0.27 eV/Å for the forces.
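The committee-based selection step above can be sketched in a few lines. This is a minimal illustration, not the actual DeePMD-kit implementation; the function names, trust-window thresholds, and array shapes are assumptions for the example.

```python
import numpy as np

def max_force_deviation(forces):
    """Committee error indicator: for each atom, the deviation of each
    model's force prediction from the committee mean; return the maximum.

    forces: array of shape (n_models, n_atoms, 3) -- predictions from the
    four potentials trained with identical hyperparameters but different
    random seeds.
    """
    mean = forces.mean(axis=0)                    # (n_atoms, 3)
    dev = np.linalg.norm(forces - mean, axis=-1)  # (n_models, n_atoms)
    return dev.max()

def select_for_labeling(snapshots, lo=0.05, hi=0.30):
    """Keep snapshots whose deviation falls inside a trust window:
    below `lo` the committee already agrees (nothing to learn), above
    `hi` the structure is likely unphysical. The thresholds here are
    illustrative, not the values used in the talk."""
    return [tag for tag, f in snapshots if lo < max_force_deviation(f) < hi]
```

Selected snapshots would then be labeled with DFT and fed back into retraining, closing the loop.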
To characterize the performance of the DP model, we use the trained potential to run simulations for the different phases at different conditions. We select several configurations, which are then labeled with DFT calculations, and by comparing the DP predictions with the DFT results we compute the test error. For systems of fewer than 200 atoms, the test error is around 5 meV/atom for the energy, about 0.3 eV/Å for the forces, and about 0.6 GPa for the pressure. We also consider finite-size effects: we run a bigger simulation cell, for example 216 atoms for hcp iron, and the performance is still consistent with the smaller cell and with the error on the training data set. This overall consistency suggests there is no underfitting or overfitting problem in the DP model. In the figure on the right I show the normalized force error, because some people are very interested in this quantity: it measures the force difference between the DP model and the DFT simulations divided by the absolute value of the force. For all conditions and all phases, the relative force error is below 10%. To determine the phase diagram at high pressure and temperature, it is not enough that the pressure, forces, and internal energy are very accurate; we also need to know the free-energy difference between the DP model and the DFT simulations. For this we run molecular dynamics simulations. As an example, for hcp iron we run simulations at around 7 g/cm³ and 1000 K with only 100 atoms. We also sample different c/a ratios, then select several configurations, which are labeled with DFT calculations.
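The normalized force error can be defined in more than one way, and the talk does not spell out the exact convention, so the sketch below uses one common choice: the RMS force difference divided by the RMS of the DFT forces. Both arrays have shape (n_atoms, 3); the function name is hypothetical.

```python
import numpy as np

def relative_force_error(f_dp, f_dft):
    """RMS of the per-atom force difference between the DP model and DFT,
    normalized by the RMS magnitude of the DFT forces."""
    diff = np.sqrt(np.mean(np.sum((f_dp - f_dft) ** 2, axis=1)))
    ref = np.sqrt(np.mean(np.sum(f_dft ** 2, axis=1)))
    return diff / ref
```

With this convention, a value below 0.1 corresponds to the "below 10%" relative force error quoted above.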
In the figure on the top, I show the energy difference between the DFT calculations and the DP model, and we can clearly see that it falls within around 5 meV/atom. Now think about free energy perturbation theory: to first order, the free-energy difference equals the average internal-energy difference. So the figure on the top already shows that our free energy is very close to the DFT one. We can also incorporate the second-order correction, which is related to the standard deviation of the internal-energy difference. Plugging all the numbers into this formula, we find that the free-energy accuracy of the DP model is below 1 meV/atom, which is very accurate. We also compared the relative force error, which is very consistent between the DFT calculations and the DP model. Finally, the last property is very interesting: we compare the difference σ33 − σ11 between the DP model and the DFT simulations. This difference is important because for an anisotropic solid phase such as hcp you need very accurate cell parameters; otherwise, when you run simulations of the solid phase, there is extra elastic energy in the cell, and that will spoil your phase-diagram calculations. In our study this difference lies within around 2 GPa, and its mean is around zero. The next part reflects a personal interest: I wanted to know whether some of the discrepancies in the literature are caused by using different simulation packages, different pseudopotentials, or different exchange-correlation functionals. For this I reused the previous snapshots and redid the DFT calculations. But different simulation packages have different energy zeros, so the absolute energies are not meaningful.
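Written out, the free-energy perturbation estimate used above is the cumulant expansion of the exponential average, with configurations sampled from the DP ensemble:

```latex
F_{\mathrm{DFT}} - F_{\mathrm{DP}}
  = -k_B T \ln \left\langle e^{-\beta \Delta U} \right\rangle_{\mathrm{DP}}
  \approx \langle \Delta U \rangle_{\mathrm{DP}}
  - \frac{\beta}{2}\left( \langle \Delta U^2 \rangle_{\mathrm{DP}}
      - \langle \Delta U \rangle_{\mathrm{DP}}^2 \right),
\qquad
\Delta U = U_{\mathrm{DFT}} - U_{\mathrm{DP}},\quad \beta = \frac{1}{k_B T}.
```

The first term is the first-order estimate (the mean energy difference), and the variance term is the second-order correction mentioned above, which brings the quoted free-energy accuracy below 1 meV/atom.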
What we can compare instead is the energy difference between the first configuration and the configurations that follow. From the figure on the top, you can see that different simulation packages, exchange-correlation functionals, and pseudopotentials give very similar, or exactly the same, energy differences. That indicates that if you do your simulations with one code, you will get essentially the same phase diagram as with another, for example Quantum ESPRESSO. So if you find large discrepancies between different studies, they cannot be caused by the DFT simulations themselves; they must be caused by something else. We also compared the relative force errors, and, as a last point, the difference σ33 − σ11; again, this quantity is very consistent between the different setups. That means that if you run the simulation with a different code, you will get essentially the same c/a ratio as with VASP, even though the c/a values reported in the literature are scattered. Before applying our potential to a very big system, we want to know whether we can, so we again test the finite-size effect by running a very big simulation cell with more than 400 atoms. Here we do not do this for all phases at all conditions; we only select several conditions to test. Again, the performance is very consistent with the smaller cell, and based on free energy perturbation theory we know that the accuracy of our free energy is better than 3 meV/atom, which is very accurate. We also make a comparison with previous EAM potentials, because the embedded atom method has been frequently used in the literature to study metallic systems.
In this comparison we use the parameters from the Belonoshko 2001 paper. For the relative energy error, which measures the energy error divided by the fluctuation of the energy, the EAM potential gives more than 10%, and for the relative force error it gives more than 30%. That is a little surprising, because in the past people thought the EAM potential was very accurate, and actually it is not that accurate. I don't think I have enough time, so I will just give you a few cases to show that we can use the DP model to solve some long-standing problems in Earth science. The first example is the c/a ratio of hcp iron at 360 GPa. In hcp iron the coordination number is 12, but the 12 neighboring atoms are not distributed evenly: the atomic distance represented by the blue arrows is different from the atomic distance represented by the red arrows. That has an impact on sound-wave propagation: a sound wave propagating in one direction has a different velocity from one propagating in the other direction, which matters greatly for how we interpret seismic observations. To calculate the c/a ratio we can use different methods. For example, we can run NVT simulations at a fixed volume and adjust the c/a ratio to find the point where the difference between σ33 and σ11 is zero. Alternatively, we can use the thermodynamic integration method: we vary the c/a ratio, use thermodynamic integration to calculate the Helmholtz free energy, and find the point where the free energy is at its minimum. Our results are consistent with the previous thermodynamic integration study, but show a very large discrepancy compared to the other method.
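The first method above, tuning c/a until the stress anisotropy vanishes, is a one-dimensional root-finding problem. The sketch below shows it as a bisection; `stress_anisotropy` is a hypothetical callback that would wrap an NVT MD run at fixed volume and return the time-averaged σ33 − σ11 for a given c/a.

```python
def solve_ca(stress_anisotropy, lo=1.55, hi=1.70, tol=1e-4):
    """Bisection for the equilibrium c/a ratio: find the point where
    sigma_33 - sigma_11 vanishes, assuming it changes sign once in
    [lo, hi]. The bracket and tolerance here are illustrative."""
    f_lo = stress_anisotropy(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = stress_anisotropy(mid)
        if f_lo * f_mid <= 0.0:
            hi = mid            # sign change in [lo, mid]
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

In practice each evaluation of `stress_anisotropy` is an MD run, so the statistical noise on σ33 − σ11 sets a floor on the usable tolerance; the thermodynamic integration route (minimizing the Helmholtz free energy over c/a) serves as an independent cross-check.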
We could not figure out what causes this discrepancy, but what we can say is that our simulations are converged. Compared with the experiments, our c/a ratio is larger than the experimental values. However, in such very difficult experiments, sometimes only around two diffraction peaks are available, and with two diffraction peaks and two parameters it is very hard to evaluate the error on the c/a ratio. In the end, we calculated the elastic constants C11 and C33 as a function of the c/a ratio, because in the past people thought that if the c/a ratio is close to the ideal value of around 1.63, C11 and C33 would be equal. From our simulations we found they are not equal, which suggests that the elastic constants are related not only to short-range correlations but also to long-range structural correlations. We calculated the elastic constants at 360 GPa up to 7000 K, and the results agree very well with previous studies. From these elastic constants we can estimate the sound velocities at inner-core conditions. Our calculations suggest that the shear velocity of hcp iron is actually 20% larger than the seismic observations. That is a first-order open problem: how do we explain the very low shear velocity at inner-core conditions? We are still working on finding a plausible cause. It is probably related to the rheological properties of iron: at high temperature and high pressure, and especially over the Earth's very long time scales, the solid is no longer purely elastic; it has some rheological behavior. So we ran MD simulations of hcp iron with 447 atoms.
We also put one vacancy in the cell. We then calculated the mean-square displacement, from which we can estimate an effective self-diffusivity. But to obtain the real diffusivity we need to know the equilibrium vacancy concentration, and for this we use thermodynamic integration to calculate the free-energy difference between the perfect crystal and the crystal with one vacancy. Our results are shown in the inset: compared with previous work, our value is about 1.5 eV lower than previous estimates, and this discrepancy is explained by the anharmonicity that was not included in the previous study. From the vacancy formation free energy and the effective diffusivity, we can estimate the self-diffusivity of the real system, and from that we can also calculate the viscosity, which is not shown here. As take-home messages: the active-learning approach is very efficient at generating the training data set, and with it we have built a very accurate iron potential. We have well constrained the c/a ratio for the first time, and, also for the first time, the elastic constants of hcp iron; the self-diffusivity has also been determined. This is an ongoing project whose goal is to determine the whole phase diagram covering the Earth's interior conditions. Thank you for your attention. Any questions?

Q: Thank you for the nice talk; it was very wonderful work. In the beginning you mentioned that ab initio MD is limited to hundreds of atoms, but that the result is size dependent: you have to go to thousands of atoms to calculate the property you want. But in your model you didn't go to thousands of atoms to validate it, right?

A: Yes. Although I didn't go to 1000 atoms, I did a test at 500 atoms, and it doesn't show any finite-size effect.
So I think the potential is converged. Probably this phenomenon is not related to the long-range part of the potential; it is more related to the collective motion of the atoms. Those are different things.

Q: I have a question related to this. I know this is not your work, but the interpretation of this data seems a little dubious to me, because if this conversion from bcc to fcc, or whatever it is, is a rare event, then this is a single observation for the 432-atom cell. And then you have the longer trajectory on top, maybe twice as long, where you don't observe the event. Is that really proof that this was a size effect at all? It could just be that if you ran the top one for longer, at some point it would switch, right?

A: That's also a point we would like to test. But for ab initio molecular dynamics this is about the largest cell you can do; I think this simulation already ran for around one year.

Okay, thank you again.