So, welcome everybody to the BioExcel webinar number 74. Today with us is Giovanni Bussi from the Scuola Internazionale Superiore di Studi Avanzati, also known as SISSA, in Trieste, and he will speak about temperature and pressure control, in particular first-order stochastic algorithms. I am Alessandra Villa and I host this webinar together with Otto Andersson; we are from the Centre of Excellence for Computational Biomolecular Research. The webinar is being recorded, so please be aware of that. After the webinar, if you still have questions, you can interact with Giovanni using the bioexcel.eu forum. There you will find a category dedicated to the BioExcel webinars, and there webinar number 74, the one that Giovanni Bussi is giving now. And now something about Giovanni. Giovanni got his PhD from the University of Modena and Reggio Emilia, where he worked on ab initio computation. After that, he joined Parrinello's group in Zurich and started to work on molecular systems and molecular simulation. Then he moved to SISSA, where since 2020 he holds a position as full professor, and where he started a group on the simulation of nucleic acids. He has received several grants, among others an ERC Starting Grant and an important Italian national grant, and he is one of the developers of the open-source package PLUMED. In addition to this, he is also interested in the development of fundamental algorithms for molecular dynamics simulation, and that is the reason why he is here today to speak about thermostats and barostats. So, now I stop sharing and give the word to Giovanni, please. Okay, can you hear me? I should be sharing now. Can you hear me? Yep. Okay, so thank you Alessandra for the very nice introduction, and thank you to both of you for arranging this very interesting and I hope useful series of webinars.
So today I will talk about temperature and pressure control using first-order stochastic algorithms. Let me start with some motivation. This is mostly a GROMACS community, so I imagine most of the people here are simulating solvated biomolecules, even though that might not be true for all of you. When you simulate solvated biomolecules, you would typically like to have the molecules at very low concentration, so in a very large volume. But you cannot do that because it is too expensive, so what you do is simulate a much smaller volume with periodic boundary conditions. But when you do so, the size of the system that you are simulating is very small, so fluctuations start to matter. And typically you want this very small system that you are able to simulate to communicate with the outer world: to exchange heat, which is what you do using thermostats, and to exchange mechanical work, which is what you do using barostats. In order to do so, you have to run your simulation in a well-defined and controlled ensemble. The case of an isolated system, where the number of particles is fixed, the volume is fixed, and the energy is fixed, so there is no interaction with the outer world, is that of the microcanonical or NVE ensemble, and you can simulate it using the Hamilton equations. But if you want to model in some way energy exchange with the rest of the world, you have to add a thermostat, and you end up in the NVT or canonical ensemble. Similarly, if you want to model exchange of mechanical work, you have to add a barostat, so that your volume will be allowed to change. This is usually done in a setting where both energy and volume are allowed to change, which is the NPT ensemble.
In addition, you might also have the number of particles changing; and remember, if you have a non-homogeneous system, you should specify which type of particles are changing, so you could control the chemical potential instead of the number of particles for one or more of the species in your simulation. Often this is done by combining molecular dynamics with Monte Carlo, but this is something more advanced and more complicated to do. So today I will focus on basically these two cases: the NVT or isothermal ensemble and the NPT or isothermal-isobaric ensemble. This is my agenda for today; I will have two sections. First I will discuss thermostats: I will give some introduction and then focus on one specific algorithm, which is stochastic velocity rescaling. Then we will discuss barostats: again some introduction on the topic, and then the stochastic cell rescaling barostat, both in its isotropic and anisotropic versions. I am happy to take questions also in the middle of the presentation, if that makes sense, or you can just wait until the end, as you prefer. So, when you simulate a system which is isolated, you typically solve the Hamilton equations, and you typically use integrators like leap-frog or velocity Verlet. The idea here is that if the integration time step is chosen small enough, you will sample exactly from the NVE or microcanonical ensemble, which has this expression for the probability of extracting a given set of configurations and velocities. A thermostat is basically a modification of these equations of motion: you do not solve the Hamilton equations anymore, but you add something to the evolution of positions and/or velocities, in order to sample from this ensemble instead, where basically the probability is equal to an exponential of minus the energy of the configuration divided by kBT.
I said that you could modify the equation of motion for the positions; this is possible, and actually this is done with so-called configurational thermostats. In a configurational thermostat you modify the rate at which the positions are changed. I wanted to mention this because it is a special case that is often ignored. These algorithms are not very popular, but still I think it is worth knowing that it is possible. On the other hand, most of the algorithms used to control temperature act directly on the evolution of the velocities, so they are kinetic thermostats; I would say all the other thermostats, except for this very special selection. And again, I think this is a very interesting topic, but today I will focus on kinetic thermostats. Then there is another distinction you should make, which is between local and global thermostats. Examples of local thermostats are listed here: the Langevin equation, Andersen, massive Nosé-Hoover, dissipative particle dynamics. In all these cases, you add a force to each particle which is, to some extent, independent of the forces added to the other particles; every particle is thermalized independently of the others. On the other hand, in global thermostats you basically control all the particles simultaneously. One way of seeing a global thermostat is to say that you have a single friction, you see it does not depend on the index i, which acts the same on every atom of your system, and this friction can be time dependent. Another way of seeing these algorithms is that basically at every step you scale all the velocities by the same multiplicative factor. So what is the important difference between these two types of thermostats? Local thermostats thermalize the system also in the absence of collisions.
So imagine that you have a simulation box like this with two subsets of particles: some of them are hot, in the sense that they are moving very quickly, and some of them are moving more slowly. A global thermostat would only sense the total kinetic energy, which is perhaps fine. A local thermostat would find that these particles are too slow and would thermalize them separately from those particles. So if you have a system that can be decomposed into subsystems with very weak interactions, typically a local thermostat is a much more robust choice. Clearly it could happen that you have some degrees of freedom that are not very well coupled with other degrees of freedom. With a global thermostat, instead, you have to assume that collisions between particles are sufficient to basically lead to equipartition. However, there are also other differences. Importantly, the effect on dynamics of global thermostats is much smaller than the effect on dynamics of local thermostats. This can be seen in this figure, for a specific choice of global and local thermostats: as a function of the control parameter tau of the thermostat, you see the diffusion constant. For the global thermostat, the diffusion coefficient is basically independent of the choice of the parameter, whereas for a local thermostat, if you choose a too-high friction, as in this case, you end up with a very low diffusion constant. So, dynamics is severely affected by local thermostats. In addition, and this is a more qualitative and probably system-dependent statement, it has been seen that global thermostats are much more efficient at equilibrating condensed phases. So, unless you are in a system where you have many particles which are really decoupled, that is, if everyone is interacting with everyone in some way, usually global thermostats are much more efficient.
So, one way to understand this smaller impact on dynamics is by looking at this graph. You can imagine your momentum as a high-dimensional vector: if you have 100 particles, it is a 300-dimensional vector. Let's say that at some point the momentum vector lies here. Applying a local thermostat could move the momentum vector anywhere, whereas a global thermostat only moves it in this direction, so the change in the momentum is proportional to the momentum itself. The thermostat acts as a scaling operation on the momentum vector, and remember it is a scaling done in very high dimensionality, so imagine this in dimension 300: you move in one direction, and you do not move in 299 directions. You basically remove all the changes in the velocities that lead to transfer of energy between particles, and only keep the change in the global kinetic energy. That is what you do with a global thermostat. So how are they implemented? Basically all global thermostats can be implemented in this way: you evolve the Hamilton equations without a thermostat, you compute the kinetic energy, you propagate the kinetic energy, and this is done in a different way depending on the thermostat you are using, and then you rescale the momenta with a factor that depends on the new and the old kinetic energies, so that you achieve the new value of the kinetic energy. Then here you find a table with some examples. Simple velocity rescaling and Berendsen are deterministic schemes; they are here in red because they are well known for producing an ensemble which is not canonical, so you basically do not obtain the correct fluctuations with these schemes. The other schemes return the correct fluctuations; some of them are stochastic, some are deterministic, and I will particularly focus on this one, which is stochastic and first order.
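To make that recipe concrete, here is a minimal Python sketch of a generic global-thermostat step (the function name and the `propagate_kinetic` callback are illustrative, not from any specific code): after an unthermostatted step, the kinetic energy is computed, a scheme-specific rule produces a new kinetic energy, and all momenta are rescaled by one common factor.

```python
import numpy as np

def global_thermostat_step(p, masses, propagate_kinetic):
    """Generic global-thermostat step: compute the kinetic energy,
    let a scheme-specific rule propagate it, then rescale all momenta
    by a single factor so that the new kinetic energy is attained."""
    kin = 0.5 * np.sum(p**2 / masses[:, None])  # current kinetic energy
    new_kin = propagate_kinetic(kin)            # Berendsen, CSVR, ... plug in here
    alpha = np.sqrt(new_kin / kin)              # one global scaling factor
    return alpha * p                            # directions preserved, magnitudes scaled
```

With `propagate_kinetic = lambda k: target` this reduces to brute-force velocity rescaling; the schemes in the table differ only in how that rule is defined.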
So what do I mean by first order? Let's have a look at how the kinetic energy is propagated in this stochastic velocity rescaling scheme. This is the change that the thermostat applies to the kinetic energy. You see there is a restoring term that brings the kinetic energy towards its target value; this K-bar will typically just be three halves times the number of atoms times kBT. This term is actually identical to the one that you have in the Berendsen thermostat, which is also first order but is missing the stochastic term. The first-order character comes from this tau_t, which controls how quickly the thermostat will force your system to relax towards the correct average kinetic energy. But importantly, you have an additional term here which is stochastic, and that is what enforces the right fluctuations; here you just have to plug in the number of degrees of freedom of the system, and everything else is defined. So this equation is nice because this part is equivalent to the well-known and widely used Berendsen thermostat, and this part reintroduces the correct fluctuations. Another nice thing is that this equation, even if it looks a bit complicated, can be solved analytically, which means that you can exactly draw a new random kinetic energy corresponding to having propagated this equation; you do not need a complicated time-stepping algorithm to integrate the thermostat itself. So, why is it interesting and useful to have a first-order thermostat which gives the correct fluctuations? The traditional pipeline, the suggested protocol for many years, has been to do equilibration using the Berendsen thermostat, which does not give correct fluctuations but, on the other hand, equilibrates very well.
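The analytic solvability mentioned above can be sketched in code. In the 2007 paper the kinetic energy obeys $dK = (\bar K - K)\,dt/\tau_t + 2\sqrt{K\bar K/N_f}\,dW/\sqrt{\tau_t}$, and the propagator can be sampled exactly. Below is a hedged Python transcription of that exact resampling (function and argument names are mine, and the sum of $N_f-1$ squared Gaussians is drawn as a single chi-squared variate); treat it as a sketch of the published formula, not as any code's reference implementation.

```python
import numpy as np

def resample_kinetic(kin, kin_target, ndeg, dt_over_tau, rng):
    """Draw a new kinetic energy from the exact solution of the
    stochastic velocity rescaling equation (after the 2007 paper).
    kin: current kinetic energy; kin_target: Nf*kBT/2;
    ndeg: number of degrees of freedom Nf; dt_over_tau: dt/tau_t."""
    c = np.exp(-dt_over_tau)
    r1 = rng.normal()
    # sum of (ndeg - 1) squared standard Gaussians as one chi-squared draw
    s = rng.chisquare(ndeg - 1)
    return (kin
            + (1.0 - c) * (kin_target * (r1**2 + s) / ndeg - kin)
            + 2.0 * r1 * np.sqrt(c * (1.0 - c) * kin_target * kin / ndeg))
```

Iterating this map reproduces both the correct mean and the correct canonical fluctuations of the kinetic energy, for any size of `dt_over_tau`.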
And then you do production using something like the Nosé-Hoover thermostat, which is typically second order and gives you the correct fluctuations. Having a first-order algorithm that can be used both for equilibration and for production is very convenient; it makes this little detail that nobody wants to care about, the choice of the thermostat, a little easier. So how can we see this thermostat in action, and how does it compare with other possible choices? This comes from a quite old paper, the one where we introduced the scheme. Here is a calculation I did for a Lennard-Jones solid and for a Lennard-Jones liquid, and what I report, as a function of the tuning parameter tau, is the value of the fluctuations of the kinetic energy or of the potential energy for different schemes. You see basically that with the Berendsen scheme the fluctuations depend on the choice of tau: depending on how you choose the tau parameter, you will get a different result. That is not good, and it is actually well known; the fluctuations are too small in both cases. The Nosé-Hoover scheme is supposed to give correct fluctuations. It does so in a wide range of values of tau; for instance, you see here it does reasonably well when tau is very small. However, it has troubles. I guess this is actually due to the way it was technically implemented in the code, but the fact is that the equations of motion for Nosé-Hoover cannot be integrated exactly, so you have to find out how to integrate them, and for some values of tau you have to be very careful not to introduce integration errors. For a solid, there is something interesting that happens even for very large tau.
You have artifacts with the Nosé-Hoover thermostat, and this is also widely known: the Nosé-Hoover thermostat does not work properly for a harmonic oscillator, and solids, which can to some extent be described as harmonic, give problematic results in some regime of the parameters. This is a similar test done for ice and water, a system that is maybe a bit more relevant, and you see a very similar behavior; also here, for large values of tau, the Nosé-Hoover thermostat can end up with incorrect values of the fluctuations. We can also look at dynamics, which I already mentioned quickly. In the same analysis we took trajectories for ice and for water. For ice we computed the vibrational density of states, simply as the Fourier transform of the velocity autocorrelation function. You see that if you run without a thermostat, that is the solid line, or with stochastic velocity rescaling with very different settings, you basically get the same properties. And the same is true for the diffusion coefficient: you get values which are very close to those obtained in a simulation without a thermostat. This is not a specific property of stochastic velocity rescaling; the fact that the dynamics is largely unaffected is a property of all global thermostats. As I explained, it is because they are designed to minimize the impact on the trajectory. Actually, the reason why you see this is that there are many atoms: if you imagine applying a global thermostat to a system with five atoms, you will see a lot of impact on them. The other important thing is that we are monitoring properties that are very different from what the thermostat is acting on; the thermostat acts directly on the kinetic energy. If you look at the dynamics of the kinetic energy, it will be very different with the thermostat or without.
But if you look at local properties, they will typically not be much affected by global thermostats in general. Okay, a common question, at least in the past, has been: what about the flying ice cube? This is an effect that was detected many years ago; it is an artifact that comes out with simple velocity rescaling and with Berendsen. Basically, you can visualize it as your system completely freezing, with all the energy going into the center of mass. But in a more subtle way, you can see that equipartition is not achieved, so different degrees of freedom have different amounts of energy. In this work you can see a comparison of different schemes, and you can see that Langevin, Nosé-Hoover chains, and also stochastic velocity rescaling all provide correct equipartition. So the problem is not related to the scaling itself, but to the way the scaling is done: it only appears for Berendsen and for simple velocity rescaling. Another, I think, interesting question is: who cares about the fluctuations? Of course, if you are computing the fluctuations it is important, but let's say we are computing something else; should we be careful about the fluctuations? There is another paper, by the group of Gerhard Hummer, from 2008. They compared no thermostat, the Berendsen thermostat, and the Langevin thermostat; you get different fluctuations of the potential energy, and that is known and expected. But what was kind of unexpected is that if you apply the Berendsen thermostat in a replica-exchange simulation, you will get incorrect results when looking at the fraction of folded protein in your system as a function of the temperature. This I think is very interesting, because it means that there is an indirect effect on properties that you might believe to be independent of these fluctuations. And actually, this paper was published just one year after our paper.
And the reason why I started to work on this topic is exactly this: I was playing with replica exchange and I found out that the results with velocity rescaling were wrong. And Michele, my supervisor, told me: let's try to fix the thermostat. The code I was using had this simple velocity rescaling; I fixed the thermostat and the artifacts went away, and then we realized, okay, there is a very nice way of fixing the velocity rescaling thermostat to make it produce correct results. So, actually, all these problems are related to the violation of detailed balance, and we can quantify how much our trajectory is violating detailed balance. This is the flow chart that we have: we first evolve our trajectory using the Hamilton equations. If this is done with a small enough time step, the energy will change very little. This energy change is actually violating detailed balance; it is incorrect, it should be zero, it is not, and we have to accept this at finite time step. On the other hand, when we propagate the kinetic energy, if we can solve our differential equation exactly, and we can for our thermostat, then the energy can change a lot, because possibly you will have a large scaling factor, but this will not violate detailed balance. So, and this is explained in detail in the paper, you can basically sum all the energy increments that come out of the evolution of the Hamilton equations, or, in other words, subtract all the energy changes given by the thermostat, and obtain an effective energy, which is like the energy of the system plus the energy of the bath, that allows you to monitor how much you are violating detailed balance. Let me show this in practice; again this is the Lennard-Jones liquid. The plots are just shifted for clarity. This is energy conservation in an NVE simulation, no thermostat.
And this is effective energy conservation using the thermostat, and you see it behaves very similarly: you have no systematic drift, or very little drift, and maybe some small fluctuations. If you go to a very large, and definitely incorrect for this system, time step, the energy will drift up. You see at the beginning of this plot the two simulations behave similarly; then of course, when the energy becomes too large, the system does something strange, and at some point the unthermostatted simulation explodes, whereas with the thermostat we just have a constant drift that you can measure and use to assess your integration errors. So that is why it is "effective": you can define a sort of effective energy that includes the energy of the thermal bath. These formulas can be used to define the effective energy also for the Langevin thermostat, and you can do it for Nosé-Hoover; if you do so, you end up with what has traditionally been used as the conserved energy for those thermostats. For Berendsen you cannot do it, because the algorithm per se is not time reversible. This was published in 2007 and it is now widely available in many codes; the names are different, so you have CSVR in LAMMPS and CP2K, sometimes it is called Bussi-Donadio-Parrinello, or BDP, and it is available in GROMACS as tcoupl = V-rescale, where you can also use it to couple separate groups independently. If you want to implement it yourself, there are some functions here that could be useful. Then, to summarize this part: I tried to explain the difference between local and global thermostats and why global thermostats are interesting, and I introduced stochastic velocity rescaling, which is global and first order, so it acts fast and in a predictable way.
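The effective-energy bookkeeping just described can be sketched in a few lines (class and method names are illustrative): subtract from the running total energy every increment the thermostat has injected, and whatever drift remains in the result measures the detailed-balance violation due to the finite time step.

```python
class EffectiveEnergy:
    """Track H_eff = E(t) - sum of thermostat energy increments.
    With an exact integrator H_eff would be constant; its drift
    quantifies the detailed-balance violation of the finite time step."""

    def __init__(self, initial_energy):
        self.bath = 0.0              # accumulated energy injected by the thermostat
        self.initial = initial_energy

    def record_thermostat(self, kin_before, kin_after):
        """Call once per thermostat application."""
        self.bath += kin_after - kin_before

    def value(self, current_energy):
        return current_energy - self.bath

    def drift(self, current_energy):
        return self.value(current_energy) - self.initial
```

If the total energy rises by exactly what the thermostat injected, the drift stays zero; any residual slope comes from integration errors in the Hamilton part of the step.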
It gives correct fluctuations, allows you to monitor energy conservation, and it is easy to tune and to implement. If there are questions on this part, I am happy to take them now. So if someone has any question, just put it in the Q&A and we will unmute you, so you can at this moment ask questions on the thermostat, please. We just give a couple of minutes; maybe people have to think about it. Okay, so we have a question; I will unmute him. Yes, please, you can speak if you want. No? Okay, so I will read the question: does a stochastic thermostat, relative to a strictly deterministic one, generate more entropy in a fluid or in a gas-phase system, by biasing the system toward more fluctuations? Not in absolute terms, but in proportion to the system size. Yeah, so for a liquid I would say no: it should not depend on whether the thermostat is deterministic or stochastic. For a gas-phase system, the real answer is: I don't know. Basically, I am very suspicious that any global thermostat could work very poorly in a gas-phase system. Okay, thank you. So now we have William; I allowed you to speak, William, if you want you can speak. This is about NPT simulation, so maybe it comes later, I don't know. Yeah. Sorry, I arrived a little bit late and I missed the first part, but I have a question about the settings to use with a low-pressure simulation; we have been fighting with this for some time and we haven't got anything to work, even for NPT. We would really like to do thermodynamic integration, but even NPT doesn't give the correct densities. Could we take this, sorry William, could we take it after the barostat part? Of course, of course. Yeah, so I will unmute you later. Thank you. Okay. So now we have another question; I just unmuted you. Maybe I can read it, or you can speak now if you want. No? Okay, I will read it: is the drift only because of a high delta t?
Yes, in the sense that as delta t tends to zero the drift will disappear. That is guaranteed, unless there is a bug in your implementation; this is also a very useful test. Then there is someone asking if we can have the GitHub link; maybe we can put it in the chat so everybody can see it. Yeah, actually I will share the PDF afterwards. There is another question; just give me a moment and I will try to unmute Eric. So, yes, it is your chance to speak. Hi, can you hear me? Yeah, please. Yeah, so thank you for the talk and thank you for the thermostat; we have used this a lot in our work. My question is: everything here has been about solvated systems with explicit solvent. How about global rescaling for implicit solvents? Is that still recommended, or do you need local thermostats to try to capture some of the solvent effects when doing production runs in implicit solvent? That is a very good point. In that case, typically people would use Langevin dynamics, not as a thermostat but to model collisions with water, as you said; so if you want to capture correct dynamics, you definitely need to do so. But if you think about it, you should do so in a smart way: I don't know, maybe surface atoms should feel different collisions than the inner atoms of your large protein. It is tricky to reproduce this property. Then maybe another related point: for a system which is not solvated, would a global thermostat be efficient? I don't know, I really have no direct experience, so the best thing is to try and check; there are several recent papers checking also the equilibration of individual degrees of freedom with global thermostats. It is possible that you are in corner cases where some degrees of freedom are not properly equilibrated with a global thermostat. Thank you.
Thank you Giovanni, and I suggest that we go on; there is one more question on the thermostat. Could you repeat what you said about using a thermostat for a gas-phase system? We have no experience with simulations of gas-phase systems, and I would say that global thermostats are probably not a good idea in this case. Okay, thank you very much Giovanni, please. Now we go to the barostats. Okay, let's go to barostats now. So, here we start from our equations with thermostats, but now we also want to allow exchange of work with the outer world. To do so, we need to add the volume as a dynamic variable, and the distribution that we will sample from will look like this: the isothermal-isobaric distribution. Then you have to add some rule to propagate the volume and, in addition, modify the rules that you use to propagate positions and velocities, including barostat-dependent terms here. So, what is the barostat expected to do to the coordinates? Clearly the barostat has to change the volume. It could do nothing to the coordinates, but this would be very ineffective. Why? Because distances that go across the boundaries of your box with periodic boundary conditions would feel a very large change when you change the volume, and other distances would not be affected at all. So the most common thing is to basically scale all the coordinates by the same factor, which is the cubic root of the change in the volume. Then, should you do something to the velocities as well? This is kind of optional. I would say in most cases velocities are scaled with the inverse factor, but this is not strictly required, and we will see a bit more detail. Actually, this detail is related to what you want to define as the internal pressure of the system; imagine what happens when you operate this scaling.
When you do this scaling operation, the energy of the system changes, and the definition of the internal pressure is minus the derivative of the energy change with respect to the volume. So what you do when you change the volume affects what you obtain as a definition for the internal pressure. For instance, if you scale coordinates and then velocities with the inverse factor, as I said, which is what was done originally in the Andersen barostat and also in most other algorithms, then your definition of the internal pressure will look like this. If instead you only scale the coordinates, which is not very common, I have seen it in one paper only, or which is what is usually done in Monte Carlo barostats, then instead of using the kinetic energy for the ideal-gas contribution, you use its average value. It is a subtle detail, but remember we are trying to reproduce the correct fluctuations, so these subtle details are very important. You have other choices: you could decide, for instance, to take the derivative of the energy when scaling only the centers of your molecules, without changing the internal distances within each molecule. This is convenient, for instance, if you have a system with internal constraints, and if I remember correctly this is what is implemented by default in the Amber code. In that case, you should include only the kinetic energy of the centers of mass of your molecules, or the number of molecules, in this ideal-gas factor, and also the virial term will be computed differently. How do you then implement barostats in practice? You evolve your Hamilton equations, you apply a thermostat, and then you have to compute the internal pressure and use it to propagate the volume. Once you have decided the value of the new volume, you scale the positions and, optionally, as we said, scale the velocities.
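The pressure computation and the scaling step above can be sketched as follows. This is a simplified toy assuming the "scale coordinates, inverse-scale velocities" convention, and, for the virial, a system without periodic boundaries, where the sum over r·f can be used directly; with periodic boundary conditions the virial must instead be accumulated over minimum-image pair terms.

```python
import numpy as np

def internal_pressure(kin, positions, forces, volume):
    """Instantaneous pressure: ideal-gas term from the kinetic energy
    plus the virial term.  Valid as written only without periodic
    boundaries; with PBC, use minimum-image pair contributions."""
    virial = np.sum(positions * forces)
    return (2.0 * kin + virial) / (3.0 * volume)

def apply_volume_change(positions, velocities, volume, new_volume):
    """Scale coordinates by the cubic root of the volume ratio and
    velocities by the inverse factor (the optional choice discussed above)."""
    s = (new_volume / volume) ** (1.0 / 3.0)
    return s * positions, velocities / s
```

Doubling the box side (an eight-fold volume change) doubles all coordinates and halves all velocities; for an ideal gas with zero forces, the pressure reduces to 2K/3V.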
And again, you have many options; this is the list of the most popular ones. I already mentioned that you can use Monte Carlo to propagate the volume. You have the Berendsen scheme, which again does not produce correct fluctuations but is very efficient because it is first order, so it relaxes directly towards the target. Then you have several other choices, among which the other first-order one is stochastic cell rescaling, and now I will go into its details. This is the equation that you will use to propagate your volume. It looks very similar to what you have in the Berendsen barostat: there is a restoring term which depends on the difference between the external pressure, which is a control parameter you set in the input file, and the internal pressure, which is computed from coordinates and velocities. And you have a prefactor in front which, you see, depends on this tau_P, your tuning parameter that tells you how quickly you want to relax your volume, and on a term where you should plug in an estimate of the isothermal compressibility of the system, which basically describes how the volume changes when you change the pressure. You typically do not know it a priori, but you can estimate it. And you see that in this equation you only have beta_T divided by tau_P, and also here beta_T divided by tau_P, so basically only the ratio matters; it is a single parameter, but the typical way of choosing it is to fix beta_T with some estimate and then choose tau_P depending on how quickly you want your system to relax. The very important thing to notice is that whereas the equation for the thermostat could be integrated analytically, this one cannot, because P_int depends on the volume in a very non-trivial way: you change the volume a bit, the energy changes because particles get to different distances, and you cannot predict this in any way.
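Since P_int depends on the volume in this non-trivial way, the volume equation has to be integrated numerically. As an illustration only, here is a toy Euler-Maruyama step for epsilon = ln V with a constant mobility D; this is not the actual stochastic cell rescaling discretization (which ties a volume-dependent mobility to beta_T/tau_P), just an overdamped Langevin sketch whose drift is chosen so that it targets the isothermal-isobaric distribution.

```python
import numpy as np

def propagate_log_volume(volume, p_int, p_ext, kbt, mobility, dt, rng):
    """Toy Euler-Maruyama step for eps = ln V: an overdamped Langevin
    equation whose stationary distribution is the isobaric one.
    The '+ mobility' term is the Jacobian correction from working
    in log-volume rather than volume."""
    drift = (mobility / kbt) * (p_int - p_ext) * volume + mobility
    eps = np.log(volume) + drift * dt + np.sqrt(2.0 * mobility * dt) * rng.normal()
    return np.exp(eps)
```

A convenient sanity check: for an ideal gas, where P_int = N kBT / V, the stationary volume distribution is a Gamma distribution with mean (N + 1) kBT / P_ext.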
So this equation cannot be integrated exactly; this is an important distinction. Still, you can implement it, and here I will show you some tests. These were done much more recently, in 2020, with GROMACS, on Lennard-Jones and on liquid water, roughly 3000 molecules. There you see Parrinello-Rahman, which is supposed to give the correct fluctuations, Berendsen, and stochastic cell rescaling; you have the average value of the volume as a function of the control parameter tau_P, and then the volume fluctuations as a function of tau_P. You basically see that the Berendsen barostat gives you the proper average but underestimates the fluctuations; stochastic cell rescaling gives correct fluctuations over a wide range; Parrinello-Rahman here fails at very small tau_P, but again I suspect this could also depend on the details of how its integration is implemented. Now, there is one important comment about this beta_T, the estimate of the isothermal compressibility. You can actually compute the isothermal compressibility by running a short simulation in the NPT ensemble and looking at the fluctuations of the volume, but you don't know it in advance. In the case of Lennard-Jones we could estimate it; it is around 0.3 in its units. And then if you look at the autocorrelation function of the volume for different choices of tau_P, it decays basically as an exponential with the proper time scale tau_P: at large tau_P it decays slowly, at small tau_P it decays more quickly. Now, what happens with water? This is interesting, because I guess everyone using GROMACS finds the comment saying that the isothermal compressibility of water is 4.5 times 10 to the minus 5 per bar and uses that in the input file. That is actually the compressibility of real water, not of TIP3P; the compressibility of TIP3P is 6.3, it is larger.
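The fluctuation route to the compressibility mentioned here is the standard relation beta_T = ⟨δV²⟩ / (k_B T ⟨V⟩), which is easy to script. Below is a minimal sketch on a synthetic volume series; the numbers (mean, standard deviation, k_B T value) are purely illustrative, not from the talk.

```python
import numpy as np

def isothermal_compressibility(volumes, kBT):
    """Estimate beta_T from NPT volume fluctuations:
    beta_T = <dV^2> / (kB * T * <V>)."""
    volumes = np.asarray(volumes, dtype=float)
    return volumes.var() / (kBT * volumes.mean())

# Synthetic volume trajectory: mean 90 nm^3, std 0.6 nm^3 (illustrative numbers)
rng = np.random.default_rng(1)
traj = rng.normal(loc=90.0, scale=0.6, size=200_000)
beta_T = isothermal_compressibility(traj, kBT=2.494)  # kB*T at ~300 K in kJ/mol
```

In practice you would feed in the volume column extracted from a short NPT run instead of the synthetic series.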
So that is the isothermal compressibility, and as a result, if you run a simulation with the Berendsen barostat or with the stochastic cell rescaling barostat, you will find that the volume relaxes at a different speed; actually you can make a change of time scale like this, it is roughly 50% slower decay, and after you make this change of time scale and overlap the autocorrelation function with the exponential, you see very good agreement. So basically, if you make a mistake in estimating the compressibility of the system in advance, you will make a mistake in predicting how the volume relaxes. Another interesting thing you can notice here is that if you make tau_P very small, the volume will not have time to relax as quickly as the exponential would predict. This is just the dynamics of the volume, but you have to imagine that when you change the volume, the particles have to rearrange; if you do it too quickly, the particles will not have the time to rearrange and the volume will not be able to relax that quickly. What about the integration of the equations of motion? This is more complicated, exactly because there is no exact way to integrate the equation of motion for V. So in this work we tested multiple different implementations; in particular, all of them are tested in an educational code that you can find at this link. And here you can see again the average value of the volume for different values of tau_P and for different integrators. All of these integrators will fail at some point if tau_P is chosen too small. And these are the fluctuations; again you see they are compatible with each other, except that they all fail if tau_P is chosen too small. And this is the effective energy; if you remember what I told you before, you can use it to quantify the detailed balance violation. And this is the drift per step as a function of tau_P; you see that the smaller tau_P, the larger this drift.
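Checking the exponential decay discussed above amounts to computing the normalized autocorrelation function of the volume time series. Here is a simple sketch, assuming the volumes come in as a 1D numpy array; the AR(1) test series is mine, chosen because its autocorrelation is exactly exponential, playing the role of the volume in an idealized run.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation C(lag) of a time series; for the volume
    under stochastic cell rescaling this is expected to decay roughly as
    exp(-t / tau_P)."""
    x = np.asarray(x, dtype=float)
    dx = x - x.mean()
    var = dx.var()
    return np.array([np.mean(dx[: len(dx) - lag] * dx[lag:]) / var
                     for lag in range(max_lag)])

# Illustrative series: an AR(1) process, whose ACF is exponential with
# decay time -1/log(phi) in units of the sampling step
rng = np.random.default_rng(2)
phi = 0.9
x = np.empty(50_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = phi * x[i - 1] + rng.normal()
C = autocorrelation(x, max_lag=20)
```

Fitting log C(lag) against lag then gives the effective relaxation time, which is how a mismatch between the assumed and true compressibility shows up as a rescaled tau_P.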
Notice that one curve here is missing for one of the integrators. That integrator is the one that is easy to implement, and it is actually the only one that we managed to implement in GROMACS, but formally it is not reversible, so formally you cannot define an effective energy for it. On the other hand, the properties of the system, like averages and fluctuations, are compatible between these integrators, so we thought, okay, it's fine; it would be difficult to implement the other integrators properly within GROMACS, so we only implemented this one. Similarly to the case of the thermostat, one could ask: okay, obviously if I am interested in volume fluctuations they should be computed properly, but let's say that I am only interested in computing a free energy difference. There is an interesting paper on this, a very extensive benchmark from the SAMPL6 series of papers, where they compared free energy calculations using alchemical methods, and in particular they compared results obtained with GROMACS using the Berendsen barostat and with OpenMM using a Monte Carlo barostat, and they saw a fairly significant difference, around 2.5 kJ per mole. So we tried to reproduce this with GROMACS, and we actually saw basically no difference between Berendsen, stochastic cell rescaling, and Parrinello-Rahman: a difference in the volume fluctuations, yes, but no difference in the free energies. So we were not able to reproduce the discrepancy, and I suspect that the difference in that report could be due to some other difference between OpenMM and GROMACS, and not to the barostat. My take-home message here is that if you compute free energies with Berendsen, or even in NVT, they will possibly be correct, or at least I did not find any compelling example showing that you need the proper barostat to compute free energies. Of course the answer might be system-dependent, so you have to be careful.
And again, as a side story: it was actually when this preprint came out, in 2019, that I thought, okay, if there really is this problem with the Berendsen barostat, we should find a way to fix it, and I had actually had the equations for the barostat in my drawer for five years; that is how the whole thing restarted in 2019. This barostat can also be implemented in a semi-isotropic formulation; this was in the first paper. Here you can separately control the height of the box and the area of its base. This can be very useful, especially for simulating membranes or interfaces in general. In particular, you can also control the surface tension, so you can sample from the so-called NP-gamma-T ensemble. We did some tests to make sure things are reasonable: we obtain fluctuations which are compatible with the Parrinello-Rahman implementation, and, as a negative control, Berendsen gives significantly wrong fluctuations, so we were happy. But I am not really an expert on these types of systems, so I would be happy to receive feedback from people who have tried this on other interfaces. You can make your life even more complicated if you want to study periodic solids like crystals, because there you need to take into account the full box shape, so you have nine degrees of freedom to change, the three vectors that generate the periodic lattice, not just one. The pioneering work in this field was indeed by Parrinello and Rahman, who proposed a way to simulate these cell-shape fluctuations at the proper temperature. There are other algorithms in the literature, but GROMACS only implements Parrinello-Rahman. So at some point we said, okay, can we generalize our equations to the fully flexible anisotropic case? And the answer is yes.
You obtain similar equations, where you again have a Berendsen-like restoring term; here the internal pressure becomes a stress tensor, and you are able to apply either an external hydrostatic pressure or an external stress. And again you need the proper noise to be added in order to get the right fluctuations. So here we ran some comparisons. This was done using GROMACS; as I said, GROMACS only implements Parrinello-Rahman for the fully anisotropic case. And here you see the trajectories, shifted for clarity; the averages are actually equal to each other, but you see that the fluctuations with Parrinello-Rahman equilibrate really slowly. And I don't know if this is due to the Parrinello-Rahman equations or to the way they are implemented in GROMACS; I really have no idea. In the anisotropic case we were also able to compare with MTTK, the Martyna-Tuckerman-Tobias-Klein barostat, which is available only in the anisotropic case, and it gives results similar to our stochastic cell rescaling approach. So here we have autocorrelation functions for different values of tau, also comparing results obtained with GROMACS and LAMMPS. The message here is that stochastic cell rescaling always has this nice exponential-like decay; it is really just relaxing in an exponential way, whereas other algorithms can have fluctuations that are damped on a different time scale, and they can also have a different period, depending on how you set them up. You can also apply an external stress. This is an example with gold, where we applied an external stress to see the deformation of the unit cell, until at some point the system breaks and you see a slip here. And we were able to test these different implementations, different barostats, also against static calculations, verifying that we were obtaining correct results. So, what about availability?
For the isotropic and semi-isotropic versions: they are available in GROMACS since version 2021, and the input keyword is C-rescale. For the anisotropic version, we did this work using a custom modified version of GROMACS 2021, and we distribute a patch if you want to use it. There is also an ongoing review of a merge request on the master branch, so the implementation of the fully anisotropic case could appear in a later GROMACS version. Then, to summarize this second part: I introduced barostats, and in particular stochastic cell rescaling. It is first order. It gives the right fluctuations. It has one parameter, actually apparently two, but you have seen that only their ratio matters. You can define an effective energy conservation, even though this requires implementing a non-trivial integrator, and it can be generalized to crystalline solids. And finally, let me thank my collaborators for this work. This has been work done across decades. The work on thermostats was done when I was still a postdoc in the group of Michele Parrinello, and I also want to acknowledge Davide Donadio, who helped me a lot, for instance teaching me how to simulate water. The work on stochastic cell rescaling was done by a former postdoc of my group, Mattia Bernetti, whereas the anisotropic version was developed by a master's student who was visiting my group, Vittorio Del Tatto, in collaboration with Paolo Raiteri from Curtin. And with that, thank you all for your attention.

Thank you very much, Giovanni, that was a very nice and clear presentation. Now we go to William, who had a question; I will unmute William so he can ask it. Please, William, go ahead. And in the meantime, I ask everybody who has a question to write it in the Q&A.

Okay, I just unmuted myself. Can you hear me now? Yes. Good. Actually, I think someone almost asked the same question earlier on.
We are trying to do simulations in the gas phase. We want to do NPT, and we also want to calculate chemical potentials, basically by thermodynamic integration. And nothing seems to work, even in cases where we know the right answer. This is at very low pressures, say 100 pascals or something like that. Any suggestions?

My suggestion, though I am really guessing here, so take it with a grain of salt: if you want to compute thermodynamic quantities like the chemical potential, I would suggest using a Langevin thermostat, together with this stochastic cell rescaling barostat, which should be stable and should work well even for a gas-phase system, because in the end the volume is just one variable. For your particles, instead, you cannot assume that they will be coupled strongly enough for a global thermostat to work. That is what I would guess.

I see, so a Langevin thermostat, and what barostat? I would try the C-rescale barostat. What about a Monte Carlo barostat? We thought about that. Yeah, that should give a very similar result and would be an equally good option, I would say. The main difference is how you tune the parameters: for a Monte Carlo barostat you typically set the step size, for C-rescale you set a relaxation time scale. But I would bet they would give very similar results, very similar efficiency, very similar properties. Okay, thank you very much. Thank you.

Now we have another question from Khan, whom I could not unmute before, so could you read the question aloud, Giovanni? Yes: could the cell rescaling barostat steer the system away from the Maxwell-Boltzmann distribution, perhaps by creating cases which radically violate the Boltzmann distribution at the set temperature? I don't think so. All these moves are done at equilibrium.
So, provided that tau_P is not too small, in which case the integration will basically fail, I do not expect the barostat to induce violations of the Boltzmann distribution. Okay, thank you very much. Please, if anybody else has a question, type it in the Q&A. Okay, I don't see any questions appearing, so we will just wait a moment in case someone has another one. No? Then just a note: if other questions pop up later, you can always ask Giovanni via the webinar link on the forum. I will post it now, so you can see where to go to ask more questions. I will also put there all the questions that were asked, so Giovanni can answer them again in written form. I thank everybody for attending, and I close this webinar session now. The BioExcel webinars are going on holiday for the winter and we will be back around February. Looking forward to seeing all of you again. Thank you very much for joining. Bye-bye. Bye-bye.