minutes to come back, and then you can continue with the next part. Let me start sharing my screen. I think almost all people are there, so whenever you want. So I'll slowly start. I hope people had a chance to grab a nice coffee.

Just a brief recap: we talked about atomistic simulations, and we talked about translating materials and molecules into design matrices using representations. Now it's the machine learning potential part. The machine learning potential is rumored to be this device that has accuracy on par with ab initio methods, but at a cost that is just a little bit higher than force fields. To compare it with density functional theory: DFT can handle hundreds of atoms on a time scale of picoseconds, and with machine learning potentials we can do much, much more. This is mostly because of the favorable linear scaling, compared with the cubic scaling of DFT, and I often run it just on a laptop.

Now, how does it work? First of all, I would like to use a black box view. We have certain configurations of atoms in our training set. We label them, meaning we compute the energy and forces using DFT, although it can be other electronic structure methods. Then we feed this information to a neural network, although it could also be a Gaussian process or something else. When new configurations come in, the machine learning model can give us speedy predictions of the energies and forces associated with these structures.

Of course, the black box view may not be very satisfying to you, so here is an alternative view that starts from the atomic environments. If I invite you to look at these two configurations here, what do you see? Your answer might be that on the right we have a solid configuration, while the one on the left looks amorphous. The reason why you think this one looks like a solid is that if we look at individual atomic environments, if we sit on an atom and look at our neighborhood, we see very similar atomic environments over and over again. In this case, it's FCC. Now, actually, even within the liquid we have these similar atomic environments. It's very hard to see, but it's there. We can even find solid-like environments in the liquid. I'll elaborate on this point later when we move on to the examples. Of course, this is very hard to identify with the naked eye, and therefore we rely on the popular representations, including the SOAP representation that we talked about in the previous lecture, to characterize the atomic environments so we can compare them.

So what does it mean in practice that we see similar atomic environments over and over again? It means that computing all the configurations with quantum mechanical methods is quite wasteful. The reason is that if we take the locality, the near-sightedness approximation, and assume the energy associated with each environment is almost completely determined by the environment itself, by the nearest neighbors, then we encounter these environments over and over again. But if each time we have to recompute them by solving quantum mechanics, that doesn't seem wise. What we can do instead is keep in our memory a collection of atomic environments, together with the energies and forces associated with these environments.
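To make this memory-plus-interpolation picture concrete, here is a minimal sketch of what such a predictor could look like. It is written in the spirit of a simple kernel model, and all the names and the Gaussian kernel choice are illustrative, not the actual code behind any potential discussed in this talk:

```python
import numpy as np

# Each row of X_train is a descriptor vector (e.g. a SOAP vector) for
# one stored atomic environment; y_train holds the associated atomic
# energies obtained from the reference DFT calculations.

def gaussian_kernel(x, X, sigma=0.5):
    """Similarity between one new environment and all stored ones."""
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma**2))

def predict_atomic_energy(x_new, X_train, y_train, sigma=0.5):
    """Kernel-weighted average over the environments in memory."""
    w = gaussian_kernel(x_new, X_train, sigma)
    return np.dot(w, y_train) / np.sum(w)

def predict_total_energy(X_config, X_train, y_train):
    """Total energy as a sum of per-environment (atomic) energies."""
    return sum(predict_atomic_energy(x, X_train, y_train) for x in X_config)
```

A real machine learning potential replaces this kernel average with a trained neural network or Gaussian process regression, but the logic is the same: compare a new environment to the stored ones and interpolate.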
So we have these in our memory, and when a new configuration comes in, when a new environment comes in, we can just compare this new environment to the existing ones in our memory and give a prediction. So to summarize, to construct a machine learning potential we basically follow a two-step process, regardless of what kind of machine learning algorithm or what kind of representation you actually use: we first collect a bunch of environments, and then we do interpolation. That basically sums up the machine learning potential.

With that, I would like to move on to the applications. So let me share my screen again. Okay. Cool. Applications: the system of water. This is a ubiquitous system, but water has many mysterious properties that we often take for granted. For example, ice floats on water, and that's quite unusual, because typically we expect the solid to be denser than the liquid. Liquid water is densest at four degrees Celsius. There is also a significant difference between heavy water and light water. We have many ice phases, at least 18 of them. And another mystery is that we have two ambient-pressure polymorphs: the hexagonal ice (ice Ih) and the cubic ice (ice Ic). Energetically speaking, their enthalpies are basically degenerate. However, in nature we only see hexagonal ice; that's why all snowflakes are hexagonal. Why is that?

So we trained a machine learning potential. It is trained on hybrid DFT, revPBE0 plus the D3 dispersion correction, using the Behler-Parrinello neural network, and there are about 1,500 configurations in the training set, labeled with both energies and forces. The training set is publicly available, and you are more than welcome to look at it and play with it. This is the standard 45-degree line that all machine learning works show.

Then we can use the machine learning potential to do actual simulations. Here I am showing the density isobar for three phases of water: the liquid, shown in red, from the simulation, as well as the cubic ice and hexagonal ice. Cubic ice and hexagonal ice have the same volume. Now, we have two lines here; what are they? The dashed line is from classical simulations, treating the nuclei as classical particles, and the solid line accounts for the nuclear quantum effects that we explained before, using the path integral molecular dynamics formalism. You see that the nuclear quantum effects actually make liquid water a little bit denser, by about 1%, and they also make ice a little bit denser. That is quite counterintuitive. We also capture the density maximum of liquid water very nicely, at about four degrees Celsius. The experimental results are marked here using the stars; we are within a couple of percent of the experimental observations. And here I show the radial distribution functions, oxygen-oxygen, oxygen-hydrogen, and hydrogen-hydrogen, again from classical molecular dynamics simulations as well as path integral molecular dynamics simulations. For oxygen-oxygen, the nuclear quantum effects do not play an important role, but for oxygen-hydrogen and hydrogen-hydrogen we really have to turn on the nuclear quantum effects to match the experimental observations nicely.

Now, just a brief recap of thermodynamic integration. And again, I got the sign wrong here; this should be flipped. In practice, what we do is a thermodynamic integration from a harmonic system to the classical system; this is the first leg of the integration. And then we integrate from the classical system to the system with quantum mechanical nuclei, to account for nuclear quantum effects. As a reminder, for this last leg we are just integrating the quantum mechanical kinetic energy.
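Written out schematically, the two legs look like this. This is a generic textbook form (with the sign convention as corrected above), not necessarily the exact expressions used in the paper; in particular, the scaling of Planck's constant and the centroid-virial kinetic-energy estimator K(g) are one common choice for the second leg:

```latex
% Leg 1: harmonic reference -> classical system, with U_lambda
% interpolating between the two potential energy surfaces.
\Delta F_{\mathrm{har}\to\mathrm{cl}}
  = \int_0^1 d\lambda \, \langle U - U_{\mathrm{har}} \rangle_{\lambda},
\qquad
U_\lambda = (1-\lambda)\, U_{\mathrm{har}} + \lambda\, U .

% Leg 2: classical -> quantum nuclei, integrating the quantum kinetic
% energy along a scaling hbar -> g*hbar, with K(g) the centroid-virial
% kinetic-energy estimator from a path integral simulation.
\Delta F_{\mathrm{cl}\to\mathrm{qm}}
  = \int_0^1 \frac{dg}{g} \,
    2 \left[ \langle K(g) \rangle_g - \tfrac{3N}{2} k_B T \right].
```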
Now I am going to show something that often makes people feel uncomfortable. I computed the classical chemical potential difference between the aforementioned ices, the cubic ice and the hexagonal ice, and I did that using two different fits of the neural network potential. You can see the results are different, and not just quantitatively different: the sign is different. Arguably the energy scale here is very small; we are looking at mere milli-electronvolts per molecule. But still, this tells us that just using the machine learning potential may not capture the very fine differences between free energies.

So what do we do? To present this schematically, this is the problem we have. We have a potential energy surface, which is our ground truth, DFT in this case, and then we have the machine-learned potential energy surface. The two are very similar, but there will inevitably be some small differences, and the differences may come from different reasons. For example, the machine learning potential does not incorporate long-range interactions, but obviously they are there. The differences may also come from the training set being a little bit sparse at certain points. And then there is also a residual difference because of the fit. Now, how do we promote the machine learning potential results to the DFT level? How do we do this correction? And we want to do this correction not just for a particular configuration, but for all the relevant configurations.

To write this down mathematically, we can write the Gibbs free energy of the system described by DFT; this is the log of the partition function. We can do the same for the machine learning potential. The difference between the actual DFT Gibbs free energy and the machine learning one can then be written in this free energy perturbation form: we take the ensemble average, over configurations sampled with the machine learning potential, of the exponential of the energy difference for each configuration. Typically, free energy perturbation converges rather horribly. In this case, however, because the two potential energy surfaces are very similar, we can converge this estimator very rapidly, typically with fewer than 100 configurations.

So we computed this term for different phases of water under different thermodynamic conditions. Here I divide the Gibbs free energy by the number of molecules, so we can plot the chemical potential. The difference is small, on the order of one milli-electronvolt per molecule, but it makes a difference. After I put this correction term on top of the graph that I showed before, the chemical potential difference between cubic ice and hexagonal ice, I get converged results: after the correction, the predictions from the two fits of the neural network are consistent.
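In code, the correction is essentially a one-liner once the two sets of energies are in hand. Here is a minimal sketch of the free energy perturbation estimator just described, with illustrative names (as explained in the Q&A later, the configurations are sampled with the ML potential and then re-evaluated with DFT):

```python
import numpy as np

def fep_correction(u_dft, u_ml, beta):
    """
    Free energy perturbation estimate of G_DFT - G_ML from
    configurations sampled with the ML potential:
        Delta G = -(1/beta) * ln < exp(-beta * (U_DFT - U_ML)) >_ML
    u_dft, u_ml: energies of the same uncorrelated configurations,
    evaluated with DFT and with the ML potential; beta = 1 / (kB * T).
    """
    du = np.asarray(u_dft) - np.asarray(u_ml)
    shift = du.mean()  # subtract the mean before exponentiating, for stability
    return shift - np.log(np.mean(np.exp(-beta * (du - shift)))) / beta
```

Because the difference between U_DFT and U_ML is small and slowly varying, this exponential average converges with the fewer-than-100 configurations mentioned above.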
So to summarize, here is the workflow of the ab initio thermodynamics. The first part is what we have talked about before: we do a thermodynamic integration to compute the classical and quantum mechanical free energies. And then, at the end, we always add a correction term on top, to promote the neural network results to the ab initio level of theory.

Here are the results. We have the cubic ice and we have the hexagonal ice. We compute the neural network results, we add the correction, and then we add nuclear quantum effects. Here we can see that nuclear quantum effects actually have a major effect: they significantly stabilize hexagonal ice, making it ever so slightly more stable than the cubic one. So without nuclear quantum effects, maybe the snowflakes that we see in nature would not have this nice hexagonal shape.

Another one is the chemical potential difference between ice and liquid water, which we computed using umbrella sampling on coexistence systems. We first compute the neural network results, and it is the same story: we correct them to the DFT level and we add nuclear quantum effects. We can even consider not just H2O but also D2O, heavy water. Comparing with experiments, we are really within a hair. And not just that: even the difference between the melting points of D2O and H2O we can predict very accurately. Notice that classical water, the red line, and D2O, the green line, almost overlap here: classical water and D2O have nearly the same chemical potential. Why is that? This is because when we do the thermodynamic integration and look at the integrand, the integrand actually changes sign along the path, so there is a partial cancellation of nuclear quantum effects.

So that was water. The next example is hydrogen, and then we will also dig a little deeper into this locality argument, this near-sightedness. Hydrogen is the dominant component of giant planets such as Jupiter. What happens is that at the surface the pressure is low and hydrogen takes the familiar diatomic molecular form, but as we approach the center, the pressure goes up and the hydrogen starts to dissociate: it becomes atomic as well as metallic. Experimentally this is very difficult to probe, so it is a deeply controversial topic at what pressure and temperature this transition from molecular to metallic hydrogen happens, as well as what the nature of this transition is, whether it is first order or a smooth crossover.

With DFT molecular dynamics, because we are restricted to small systems and relatively short simulation times, the transition can mostly only be probed from a kink in the equation of state. With the neural network potential, however, we are able to scan the whole phase diagram. Here I am showing, on the color scale, the average order parameter, defined here as the fraction of bonded hydrogen atoms. At low pressure and low temperature we have mostly molecular hydrogen, and at higher pressure and higher temperature we have atomic hydrogen. This black line here is the melting line that we computed. The purple line and the orange line are the locations of the density maxima and the heat capacity maxima: that is, we plot the density and the heat capacity of the system along isobars, trace the locations of the maxima, and put them on the phase diagram. That gives the purple and the orange lines here.
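As an aside, an order parameter of this kind, the fraction of hydrogen atoms bound in molecules, can be estimated from a snapshot with a simple nearest-neighbor distance criterion. This is a minimal sketch with an illustrative bond cutoff; the actual definition used in the study may well differ:

```python
import numpy as np

def molecular_fraction(positions, box, r_bond=1.0):
    """
    Fraction of H atoms whose nearest neighbor lies within r_bond
    (angstrom), i.e. atoms counted as paired up in H2 molecules.
    positions: (N, 3) array of H coordinates; box: (3,) cell lengths
    of an orthorhombic periodic cell.
    """
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)            # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)             # exclude self-distances
    return np.mean(r.min(axis=1) < r_bond)  # fraction of bonded atoms
```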
Now, from this graph the transition looks smooth, but we want to characterize it a little bit more, and we explain the system in terms of a regular solution model. In this picture, we are saying the system can be understood as a mixture of two liquids, the atomic liquid and the molecular liquid. In the regular solution model, which some of us might have studied as undergraduates, the total Gibbs free energy of the system as a function of the fraction of one of the components can be written as the sum of the chemical potentials of the components, a mixing entropy, and an enthalpic penalty of mixing. Under this regular solution model, when the temperature is high our system mixes perfectly, but when the temperature is below the critical point, the two liquids phase separate.

So now the game is that we want to compute this free energy profile for our system, so we can fit it to the regular solution model and understand it. We did just that: we computed the free energy profile as a function of the molecular fraction using metadynamics simulations, and then we fit this free energy profile to the regular solution model to get the parameters as well as the critical point (a sketch of such a fit follows at the end of this part). Here is the critical point that we located: it sits just on the melting line. So above the melting line, the system is supercritical, according to us.

And not just that: the machine learning potential also correctly captured the ground-state crystal structure at different pressures. Solid hydrogen is known to be very complicated, and it can form many, many polymorphs at low temperature and different pressures. The melting line also looks okay compared with previous experimental measurements.
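To make the regular-solution fit concrete, here is a minimal sketch of how such a fit could be set up. The names, the placeholder data standing in for the metadynamics free energy profile, and the fixed temperature are all illustrative, not the actual analysis from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5  # Boltzmann constant in eV/K

def regular_solution(x, mu_mol, mu_at, omega, T=1500.0):
    """G(x) per particle: chemical potentials of the two liquids,
    ideal mixing entropy, and an enthalpic penalty omega*x*(1-x)."""
    s_mix = x * np.log(x) + (1 - x) * np.log(1 - x)
    return x * mu_mol + (1 - x) * mu_at + kB * T * s_mix + omega * x * (1 - x)

# x_frac, g_profile: molecular fraction and free energy per particle
# (eV) from metadynamics; here replaced by synthetic placeholder data.
x_frac = np.linspace(0.05, 0.95, 19)
g_profile = regular_solution(x_frac, -0.10, -0.08, 0.30) + 1e-3 * np.random.randn(19)

(mu_mol, mu_at, omega), _ = curve_fit(regular_solution, x_frac, g_profile,
                                      p0=(-0.1, -0.1, 0.2))
print(f"estimated critical temperature: {omega / (2 * kB):.0f} K")
```

In this model the critical point sits at T_c = omega / (2 kB): above it the mixing free energy has a single minimum, below it a double well, which is exactly the mixed-versus-phase-separated behavior described above.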
Okay, so that is the extent of this part. Are there any related questions? Yes. Could you relay the questions? I think now is a good moment. Yes. Okay, so there is a question from Andre. Maybe Andre wants to speak up?

Hi. Yes. I was just wondering: you were saying at the beginning of the second section that for the neural networks we essentially collect the different environments and then use them to estimate the energy, instead of recalculating the environments again and again. So I was wondering, in that data set that you collect, do you collect specifically the environments of single molecules, or are you storing snapshots of a system, say at a particular temperature, or whatnot?

So the systems in the training set are all bulk structures. In this particular case, they are all configurations from liquid water. And the reason behind that I will actually explain in a bit. Okay. Thank you.

And then there is a question from Yu Shi. Do you want to speak up?

Yeah, thanks. I want to ask: is the correction to the machine learning model, the term U minus U_ML that you mentioned, obtained by training a residual neural network?

So in principle, I think one can do that. I have also seen people who train the difference not between the neural network and the DFT, but between two different levels of electronic structure calculation; say, you can train the difference between a hybrid DFT and PBE, for example. I have seen that. So in principle it is possible. But are you thinking about training the difference in the potential energy surface, or the difference in the free energy? Because they are different, right? One is a high-dimensional object and the other is a number, a scalar as a function of pressure and temperature.

Okay. So here you might require the DFT calculation for this correction term, is that correct? Or maybe my understanding is wrong.

So I can explain how this is done in practice. At a certain thermodynamic condition, let's say I am interested in this correction term at 300 Kelvin and one gigapascal, I run an MD simulation using the machine learning potential at that condition and collect uncorrelated configurations. Then I put these selected configurations, generated with the machine learning Hamiltonian, back into DFT, so I can compute this difference, and from that I compute the delta. I see. Thanks. Thank you.

Maybe I will take one more question, from Christian. I want to ask: between, say, the PBE and B3LYP functionals, how do we discover the best parameters for the machine learning potential?

So this just depends on the system. There are two things here. One is the underlying electronic structure calculation: for all machine learning fits, it is garbage in, garbage out. If the underlying theory is not great, then obviously we will not have a good machine learning potential. So the first step is always to benchmark the DFT, so that we select a good reference. That said, the selection is not always possible: for water, it is clear which functional is better; for high-pressure hydrogen, it is a little bit of guesswork. And then, once we select the underlying theory, the question is about the quality of the fit. As of now, that is still a little bit of an art, and one needs to validate and refine the potential, and so on.

Okay, so for now I will move back to the talk. There is a last part about this argument of locality, because we were explaining everything in terms of atomic environments, right? We stressed this concept over and over again, but how good is it as an approximation? How local are things? So we are going to explore this problem. Again, just a brief recap: a machine learning potential starts from atomic environments, each atomic environment gives us an atomic energy, and we sum these up to get the total energy of our system.

So let's look at the atomic energies. Here, on the x and y axes, I am plotting the atomic energies from two different fits of the machine learning potential. This is for water, and they are not correlated at all. At first I thought, okay, this is maybe due to how the energy is partitioned between oxygen and hydrogen. So then I compared the molecular energies, that is, the sum of the atomic energies of the oxygen and hydrogens in each water molecule, from the two fits of the machine learning potential, and still they are not correlated. This basically tells us that the atomic energy, which we rely on very heavily in machine learning potentials, is really a mathematical device. It does not carry a deep physical meaning.

Now, the reason why I started to look into this is that back then I was thinking about the problem of heat conductivity. Heat conductivity is a very important transport parameter, and it is an input parameter for fluid dynamics and other types of continuum modeling. The typical way of computing the heat conductivity is to use the Green-Kubo relation, which is basically the time integral of the autocorrelation function of the heat flux.
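For reference, here is a minimal sketch of the Green-Kubo estimate, with illustrative names, assuming a heat flux time series is available at all (which, as discussed next, is exactly the sticking point):

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, temperature, t_cut):
    """
    Green-Kubo heat conductivity, in one common convention:
        kappa = V / (3 kB T^2) * integral_0^t_cut <J(0) . J(t)> dt
    J: (n_steps, 3) heat-flux time series from an MD run (SI units).
    The choice of t_cut is the bias/variance dilemma discussed below:
    integrating too long accumulates noise, truncating too early cuts
    off a slowly decaying tail and biases the estimate.
    """
    kB = 1.380649e-23  # J/K
    n_cut = int(t_cut / dt)
    acf = np.empty(n_cut)
    for lag in range(n_cut):  # average over all available time origins
        acf[lag] = np.mean(np.sum(J[: len(J) - lag] * J[lag:], axis=1))
    return volume / (3.0 * kB * temperature**2) * np.sum(acf) * dt
```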
So what is the problem here? The integral is supposed to extend to infinite time, but when we take the autocorrelation of actual data there is always noise, Gaussian noise, and if you integrate that to infinite time, the integral of the noise diverges. But if you cut off prematurely, and the signal has a slowly decaying tail, then you put a bias on the estimate. Moreover, the conventional expression for the heat flux requires the atomic energies that we just talked about, as well as pairwise forces between pairs of atoms, and none of these are well defined in the machine learning potential setting, nor in many other settings.

Okay, so luckily we found a formulation that allows us to compute the heat conductivity independently of the Green-Kubo relation, independently of the heat flux. How it works is that we start from the particle density field, which is a well-defined quantity, and then we do a Fourier expansion of this field in space, which gives us the density field rho_k at each wave vector k. Now, if you work through the hydrodynamic equations (my math was not good enough to do this myself, but these things were solved in the 1960s by the fluid dynamics people), it turns out the autocorrelation function of this rho field has two modes. One mode is an exponentially decaying mode, the heat mode, which carries the information about the heat conductivity. The second mode is an auxiliary mode related to sound propagation. Here are the hydrodynamic equations, which we will skip; the schematic form of the resulting autocorrelation is written out at the end of this part.

Then we did just that. First of all, we wanted a benchmark, so we benchmarked on the Lennard-Jones system, because for Lennard-Jones there is a pairwise potential and we can compute the heat conductivity very easily using the Green-Kubo relation. We computed the autocorrelation function and fit it to the hydrodynamic expression; you can see that the fit and the actual simulation basically overlap perfectly. We can also look at the power spectrum. In the power spectrum we see two peaks: the first peak is the exponential heat mode that we talked about, and the second peak is the sound propagation. Then we compute the heat conductivity, a different kappa at each wave vector, and extrapolate to k equal to zero, which gives the macroscopic heat conductivity, the one that can also be computed from the Green-Kubo relation. And they agree. We did this at different thermodynamic conditions, and basically this method, what we call the wave method, gives estimates consistent with Green-Kubo for Lennard-Jones at many different conditions. With such validation, we can use this method for other systems. For example, we computed the heat conductivity of high-pressure hydrogen: again, we compute the autocorrelation function, and from there we extract the heat conductivity.
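Schematically, the form being fitted is the textbook hydrodynamic result for the density autocorrelation at small wave vector k. This is a simplified version quoted from memory, without the exact mode amplitudes used in the actual analysis:

```latex
% Two modes in the autocorrelation of the density field rho_k(t):
% a non-propagating heat mode plus a damped, propagating sound mode.
\langle \rho_{\mathbf{k}}(t)\, \rho_{-\mathbf{k}}(0) \rangle \;\propto\;
A_h \, e^{-D_T k^2 t} \;+\; A_s \, e^{-\Gamma k^2 t} \cos(c_s k t)

% D_T = kappa / (rho c_p) is the thermal diffusivity, so fitting the
% heat mode at several wave vectors and extrapolating k -> 0 yields
% the heat conductivity kappa; c_s is the sound speed and Gamma the
% sound attenuation coefficient.
```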
Okay, so for the last part. This is a little bit of a bittersweet story, but the next example may build more of your confidence in the locality of the machine learning potential. This is related to the question that was asked earlier: what is in the training set? We have bulk liquid water in the training set of the machine learning potential, but remember that we actually used the model to compute cubic ice and hexagonal ice, and it worked fine. So I was wondering: how far can we extrapolate with this machine learning potential? Is it applicable to other ice phases as well?

So we took, from an earlier study that collected many ice phases, all the experimentally confirmed ones as well as many hypothetical ones. They made a map of the ice phases using sketch-map; you can also do a PCA map. We took 54 representative ice phases and, using the same framework that we have talked about, compared them with the liquid water configurations in our training set. You can see that ice and water appear at different places on our PCA map, which is understandable; they should be different. However, the interesting thing is that if we do not compare the global structures, but instead just project down the atomic environments, we find that the local environments in liquid water almost completely cover the environments that we encounter in the 54 ice phases (a sketch of this kind of coverage check follows at the end of this part). What does this mean? It means that we had collected all the relevant atomic environments for these ice phases, even though our training set was built entirely on liquid water. Because of that, this machine learning potential trained on liquid water is able to predict various properties, such as the density, the lattice energy, as well as the full phonon density of states. These are the 54 phases, and we can zoom in to look at individual ones; for each one the agreement is magnificent.

And because of this, we are also able to use this machine learning potential to compute the phase diagram of water. Again, we have the machine learning prediction, but we always add the correction terms on top, and we can choose to correct it not only to revPBE0-D3, the theory that we used to fit the machine learning potential, but also to different DFT levels of theory, such as PBE0-D3 and B3LYP-D3. These gave slightly different phase diagrams, and overall the agreement with experiment is very good, better than the existing empirical water potentials. And again, nuclear quantum effects play a very important role here in shifting the phase boundaries around.

And that is basically it. The take-home message would be that the machine learning potential is a very powerful tool: we can now compute ab initio phase diagrams. There is still a lot that we do not fully understand about machine learning potentials, and I think there will be a lot going on in that direction, particularly on the long-range interactions. It is probably also a good time to revise the typical simulation tools that we use, to better utilize state-of-the-art machine learning potentials. And with that, that is the end of my talk, and I would like to answer more questions.
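A minimal sketch of the environment-coverage check mentioned above: project the descriptors of liquid and ice environments into a common low-dimensional space and ask whether the ice environments fall inside the cloud of liquid ones. The arrays here are random placeholders standing in for real SOAP vectors:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder descriptor vectors for atomic environments from the
# liquid-water training set and from the candidate ice phases.
rng = np.random.default_rng(0)
X_liquid = rng.normal(size=(2000, 64))
X_ice = rng.normal(size=(500, 64))

pca = PCA(n_components=2).fit(X_liquid)  # axes defined by the training set
z_liq = pca.transform(X_liquid)
z_ice = pca.transform(X_ice)

# Crude coverage check: distance from each ice environment to its
# nearest liquid environment, in the projected space.
d = np.linalg.norm(z_ice[:, None, :] - z_liq[None, :, :], axis=-1)
print("worst-covered ice environment:", d.min(axis=1).max())
```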
Okay, so back to the Q&A. There is a question from Muhammad.

Hi, I want to know: is this correction to the MLP just for light elements, because of the nuclear motion? Thank you.

So these are actually two separate things. The nuclear quantum effects are needed for light elements: even if you run ab initio MD simulations, you still need to consider nuclear quantum effects. The correction term, on the other hand, is needed if you want to correct the residual error in your machine learning potential, and that error is there because your potential energy surface is slightly different from your ground truth. That is a fact regardless of whether you run classical MD or path integral molecular dynamics. Thank you.

And then there is a question from William. William, could you please turn on your audio?

Yes, thank you, sorry. If I understood correctly, when you train the neural network on the data, you need to define some local environments for the particles, which seem to be very similar across the sample. What happens when you have a phase transition, where the local environments can be really large? How can you define that kind of local environment?

Right, thanks for the question. So in practice, how do we decide on the local environment? It is a little bit of trial and error. There is a trade-off: if you select a smaller environment, the neural network is much cheaper to train and to use, while if you select a larger environment, it captures more of the long-range interactions, but it is also more expensive to train and to use. So in practice, we select different local environments, train networks separately using them, see what happens, and pick an optimal combination. Now, related to your question about the phase transition: personally, I am not sure that a phase transition would dramatically change the choice of the size of the atomic environment. For example, in this case of liquid water we always used six angstroms for our cutoff throughout, and as I showed earlier in the talk, the machine learning potential describes both liquid water and the ice phases very well.

Thank you, and thank you for the very interesting talk. Thank you.

I think we are a minute or so over time, so maybe I will take two more questions. There is a question from Juan, if that is how the name is pronounced. Yes. Juan, we cannot hear you. I think maybe you can read the question. Juan is unmuted, but okay, I will do that. Juan asks: when our results do not fully coincide with experiments, can we reverse engineer the neural network potential to reconstruct the neighboring environment? I am thinking about a distribution function, a g(r), or something like that.

So, from what I understood of this question: there is a residual difference between the machine learning predictions and experiments, which comes from different reasons, the most important one probably being that the DFT functional that we use involves approximations. I do think there will be a lot of opportunity to add another correction term on top of the machine learning potential to make it match experiments a little bit better. I do not think this has been done before, although in principle people routinely do this kind of thing when they build force fields for proteins, RNA, and DNA. So I think it seems to be possible, although I have not seen anything in that direction yet.

Okay, so let's take one last question, from Robson.

Hello, hello, thank you for the talk. I am just wondering: since we can map out the phase diagrams, can we also determine the nature of the phase boundaries from the MLPs?
Yeah, so first of all, I am personally a little bit on the cautious side here. When you say the nature of the phase boundaries: in the case of ice and liquid water, when we go from one phase to the other, this typically happens through nucleation, so there will be an interface between ice and liquid. Intuitively, I think that if you have an interface, long-range interactions are going to be more important than if you just have the bulk phases. And because the machine learning potential is short-ranged, I feel a little bit uneasy using it to characterize an interfacial phenomenon, although maybe it is not a problem. So that is my sense. Thank you. Thank you.

Bingqing, as you prefer, you can go ahead; we have time on Zoom, but if you cannot continue, we can stop here. Okay, so maybe another two questions then.

Okay, so there is a question from, I am sorry if I mispronounce your name, Mahani? Yes, hi. You actually answered my question already; I had some Zoom problems, but you went over it later. So thank you. Okay.

And then there is a question from Mauricio. I hope nobody is keeping a scoreboard of how many names I have mispronounced today. Let's see. Mauricio, could you turn on your microphone and unmute yourself? He is not replying, so let me read it. So, Mauricio, if that is the name, asks: I understand machine learning potentials are very hard to generalize, i.e., there cannot be a machine learning equivalent of CHARMM or OPLS, which work well for whole families of materials. Can you elaborate on this?

So, the machine learning potentials that I have trained, also because I am on the lazy side, are each for a single system. But I have seen machine learning potentials for a whole class of molecules. The one that comes to my mind is ANI; I think it is from Olexandr, Olexandr, what is his last name? It is the ANI-ccx. I think they were trained on QM9, on a small-molecule dataset, and as a result it is applicable to a very large collection of small molecules as well. And I believe SchNet, from Klaus-Robert Müller and Alexandre Tkatchenko and their co-workers, should also be applicable; I think it can also be trained on a collection of small molecules. Now, back to ANI: they also use the Behler-Parrinello neural network architecture, meaning it is actually the same architecture as the one that I have used here. So it is really by choice that I did not train a neural network that generalizes to other systems. It is possible.

Okay, so let's go to the very last question, from Leonardo. Can you hear me? Yes. Okay: since there are many experimental phase diagrams, can you use those as targets for your training dataset? For example, you could take many studies, artificially create a data file with those data, and use that as a target to train on, as a shortcut for your training dataset. You could, for example, take the structures already in your database, compare them with the published results, and use that as a way to, how can I say it, smooth out the difference between your predictions and the DFT. Or am I mistaken? So, which type of experimental observation are you referring to? For example, specific heat: you can pinpoint a phase transition
from a peak in your specific heat, for example. I work with other kinds of models, and generally a peak in the specific heat indicates a phase transition. Could you use the experimental specific heat as a way to create phase diagrams for a training dataset? Or, with the experimental phase diagrams, you could look at different papers and try to build that. Or am I making a mistake?

Right. So I think this brings up the issue we already touched on. My thinking on that problem is: let's say we have a machine learning potential energy surface that we trained from DFT, and we also have the experimental observations. How do we build a framework that utilizes both types of data? This has not been done, and my hunch is that one way of doing it is basically to put your experimental observables into the loss function when you train. But this is not obvious, because when we train the machine learning potential against DFT, we are basically matching the energies and forces, whereas the experimental observables, particularly the heat capacity and the heat transport that you mentioned, are not a simple function of the atomic configurations. They are not directly related to the atomic environments; they are related to the atomic configurations in a very complex way. So it is not completely obvious how to build a loss function that also incorporates experimental observables, although in principle this can be done. Okay, thank you, that answers my question. Thank you.

Okay, so I would say that is all for the day. Do the organizers have something else to say? Thank you very much. Just remember, the next session is earlier, right? Yes, exactly: the next session is at 12:30 European time, so check what that is in your time zone; it is one and a half hours earlier. Okay, thank you very much. I think we can stop here. Thank you very much for organizing. Thanks to you, Bingqing. Thank you very much again, it was a nice talk. I think the participants enjoyed it, because we are getting really a lot of messages in the chat. Okay, bye bye. See you next week.