Welcome everybody to the BioExcel webinar. Today we have a student webinar, featuring the students who won the poster prizes at the 2023 edition of the summer school. The three winners are Daniel Sucerquia, Eleonora Serra and Ricardo Scarin. I'm hosting the webinar; I'm Alessandra Villa from the KTH Royal Institute of Technology, and with me is Otto Andersson from CSC, the Finnish IT Center for Science. As you can see, the speakers come from different parts of Europe: Daniel comes from the Heidelberg Institute for Theoretical Studies; Eleonora comes from IIT, the Istituto Italiano di Tecnologia in Genova, and she is also working with a company [name unclear in the recording]; and Ricardo works both for a pharma company, Atlas Molecular Pharma SL, and for a research center, CIC bioGUNE in Bilbao. The webinar is recorded, just so you are informed. In this webinar you can ask questions to the speakers, so please use @1, @2 or @3 to address Daniel, Eleonora or Ricardo specifically; if you have a question for everybody, don't add anything. Use the Q&A function of Zoom, which you find at the bottom of the Zoom application; depending on your operating system you might see one symbol or another. Just click it and type your question, whenever you want. At the end of the three presentations I will unmute you so that, if you have a microphone, you can ask your question directly, or I will read your question to the speaker. Something about today's speakers: Daniel will speak about how a stretching force destabilizes the chemical bonds of a protein backbone; Eleonora will speak about bidirectional path-based non-equilibrium simulations for binding free energy estimation; and finally Ricardo will speak about the development of pharmacological chaperones for the treatment of tyrosinemia type 1. So now I give the word to Daniel.
So, can you see my slides? — Perfect, we can hear you and we can see your presentation, please go ahead. — Okay, so today I'm going to talk about my PhD project. In this project we try to describe the effect of an external force that pulls a molecule and can thereby destabilize the bonds in the backbone of a peptide. This is important to study because stretched molecules are everywhere: in materials science, essentially all materials that bear an external load have their molecules stretched. And in particular in biology, when we move any muscle our tissues are stretched, which basically means that the molecules inside are also stretched; understanding what happens inside a stretched molecule could give us clues to solve health-related problems. Now, that is the broad problem, but the more specific question we want to address is: which are the most unstable bonds in a molecule that is stretched by an external force? To answer that, we start by assuming that the total change of energy when we stretch the molecule can be decomposed into the change of energy in each one of the degrees of freedom. We have here the expression we use for this decomposition, where the index i runs over the different degrees of freedom, and by degrees of freedom I mean distances, angles and dihedrals; our model then predicts how the energy is distributed among these degrees of freedom. The idea is to solve this problem at the quantum mechanical level, with the accuracy of DFT. But the key problem is to solve this integral, where F_i is the force along the direction of degree of freedom i, and q_i is the value corresponding to that degree of freedom.
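A minimal numerical sketch of the decomposition Daniel describes: the energy stored in each degree of freedom is the integral of the force along that degree of freedom, accumulated over the intermediate stretched configurations (here a simple trapezoidal rule). All names and the toy data are illustrative, not taken from his actual code.

```python
# Delta E_i = integral of F_i dq_i, approximated numerically over the
# intermediate stretched configurations. Hypothetical sketch, not the
# speaker's implementation.

def energy_per_dof(q_steps, f_steps):
    """q_steps[k][i]: value of DOF i at stretching step k.
    f_steps[k][i]: force along DOF i at step k.
    Returns the energy stored in each DOF, trapezoid-integrated over steps."""
    n_dof = len(q_steps[0])
    energies = [0.0] * n_dof
    for k in range(1, len(q_steps)):
        for i in range(n_dof):
            dq = q_steps[k][i] - q_steps[k - 1][i]
            f_avg = 0.5 * (f_steps[k][i] + f_steps[k - 1][i])
            energies[i] += f_avg * dq
    return energies

# Toy check: a single harmonic DOF with force F = k*q (k = 2), stretched
# from q = 0 to q = 1, should store k/2 = 1.0 of energy.
qs = [[x / 100.0] for x in range(101)]
fs = [[2.0 * q[0]] for q in qs]
print(energy_per_dof(qs, fs))   # ≈ [1.0]
```

Summing the returned energies over all degrees of freedom is exactly the consistency check against the total DFT energy change that Daniel mentions next.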
And then we have this equation, and for solving it we have two options. One of them is called JEDI, which uses a harmonic approximation: basically we assume that around the minimum we can make a harmonic approximation and obtain a prediction of the distribution of energies. The other solution, the one we propose, is a method that we call SIFT. In this one we do smaller stretchings, such that we have many intermediate stretched configurations, and for the final distribution of energies we do a numerical integration. We are going to use the SIFT method because, if we compare the sum of the distributed energies, that sum should equal the total change of energy, so we can compare the result from each decomposition method with the total change of energy from DFT. And here, the SIFT result is much closer to the reference value, so we use this method to describe the distribution of energies. As an example of the kind of result we obtain, I will show trialanine, a small peptide of three alanines. The normal procedure is that we start from the optimized configuration, optimizing the structure with DFT. We then increase the distance between the end atoms by a value Δd, constrain that distance, and re-optimize with DFT. We repeat the process a number of times until we have different stretchings, and then we can do the numerical integration that I showed in the previous slide. The result is what I show here.
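The stretch-constrain-reoptimize loop just described can be sketched on a toy system. Here a 1-D chain of three beads with harmonic bonds stands in for the DFT-optimized peptide, and plain gradient descent stands in for the DFT re-optimization; everything is a hypothetical illustration of the protocol, not the speaker's setup.

```python
# Toy version of the protocol: fix the end-to-end distance, relax the free
# coordinate, increase the distance by delta_d, and repeat.

def relax_middle(x0, x2, x1, k=1.0, r0=1.0, lr=0.1, iters=500):
    """Gradient-descent relaxation of the middle bead with the ends fixed
    (stand-in for the constrained DFT re-optimization)."""
    for _ in range(iters):
        # net force on the middle bead from the two harmonic bonds
        g = k * ((x1 - x0) - r0) - k * ((x2 - x1) - r0)
        x1 -= lr * g
    return x1

def stretch(n_steps=5, delta_d=0.2, r0=1.0):
    configs = []
    for step in range(n_steps + 1):
        d = 2 * r0 + step * delta_d       # constrained end-to-end distance
        x0, x2 = 0.0, d                   # fixed end beads
        x1 = relax_middle(x0, x2, d / 2)  # re-optimize the free coordinate
        configs.append((x0, x1, x2))
    return configs

for c in stretch():
    # the middle bead relaxes to the midpoint: both bonds share the strain
    print(c)
```

On the real system each constrained configuration feeds the per-degree-of-freedom integration shown on the previous slide.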
So here I show the change of energy, that is, the energy stored in each one of the degrees of freedom, plotted over all the degrees of freedom. This part corresponds to the distance degrees of freedom, this part to the angles, and this one to the dihedrals. As you can see, the color gradient corresponds to the stretching step. At the zeroth stretching step the stored energy is completely zero, which is what we expect: if we don't stretch the molecule there is no distribution of energy. But when we start to increase the stretching of the molecule, we start to see how this external energy being introduced into the system is distributed. And here we can see perhaps the main result: the distances store most of the energy, while the dihedral degrees of freedom do not store much energy at all. We also continue these stretching steps until we get a rupture. A rupture means that the distance between one pair of atoms increases a lot and the rest of the molecule relaxes. And we found that the rupture always happens in the distance that stores the most energy. So basically what we are saying is that the degree of freedom that stores most of the energy is the most likely to suffer the rupture at some point. We do this kind of analysis not only for trialanine; as I said before, that is just an example of the procedure. Instead, we consider different combinations of amino acids, specifically ordered triplets of amino acids, and to remove the effect of the capping atoms we only consider the distribution of energies in the middle amino acid.
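The rupture prediction described above amounts to picking, among the distance degrees of freedom, the one storing the most energy at the last stretching step. A sketch with made-up numbers:

```python
def predicted_rupture(energy_matrix, distance_dofs):
    """energy_matrix[step][i]: energy stored in DOF i at a stretching step.
    distance_dofs: indices of the bond-distance DOFs.
    Returns the distance DOF storing the most energy at the final step,
    i.e. the bond predicted to break first."""
    final = energy_matrix[-1]
    return max(distance_dofs, key=lambda i: final[i])

# Toy data: three bond DOFs and one dihedral; bond 1 accumulates the most
# energy as stretching proceeds, so it is the predicted rupture site.
steps = [
    [0.0, 0.0, 0.0, 0.00],
    [0.2, 0.5, 0.1, 0.01],
    [0.4, 1.2, 0.2, 0.02],
]
print(predicted_rupture(steps, distance_dofs=[0, 1, 2]))  # → 1
```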
So in this case, if we take a combination such as LKD, we will only analyze the distribution of energies in the K amino acid; the reason, again, is to remove the effect of the capping atoms. With this set of different combinations of amino acids we want to build a database and create a machine learning algorithm that predicts the distribution of energy for different stretching steps and different configurations. Then we can apply this analysis method to larger systems, because of course in nature we will rarely find such a small peptide; we usually find much larger systems, like proteins. With the machine learning algorithm we expect to predict the distribution of energies for larger systems, at the scale of proteins. The database I mentioned is still under construction, but we already have some preliminary results. What I am showing here is the energy stored specifically in two backbone distances: between the alpha carbon atom and the nitrogen atom of the backbone, and between the alpha carbon and the C atom of the backbone. As we can see, the first result from comparing one to the other is that the Cα–C bond stores much more energy, which would mean that this bond is more likely to be broken than the Cα–N bond. Another result, one of the highlights, is that different amino acids, each line being a different configuration, a different set of amino acids, can store different amounts of energy in the same bond.
That would mean that in different configurations, with different amino acids, specific bonds will be more likely to break. So, summarizing: we implemented a method to obtain the distribution of energies in a stretched peptide. In particular, with this distribution of energies we can predict which bond will be broken: for every stretching we have a different distribution of energy, and the bond storing most of the energy is the first, or the most likely, to be broken. With this method we are creating a data set to train a machine learning algorithm, and in the near future we expect to have some results with this engine to show to the world. So far, with the data set we have, we can conclude that different configurations, different orders of amino acids, give different distributions of energies. Finally, I want to say thanks a lot to the MBM group, led by Professor Frauke Gräter, the group where I'm doing my PhD, and special thanks to the BioExcel organizers of the summer school. So, that's it, thank you. — Thank you very much. And now we go on with Eleonora; please, Eleonora. Daniel, could you stop sharing? Thank you. — Okay, can you see the screen? — Perfect, Eleonora, we can hear you and we can see everything, please go ahead. — So, good afternoon everyone. Today I will present my PhD project, which is done in collaboration between IIT and a second center, and it is titled "Bidirectional path-based non-equilibrium simulations for binding free energy estimation". Here is a little introduction to the central topic of my research: binding free energy is a central quantity in the drug discovery field, because it quantifies the strength of the interaction between a ligand and its receptor.
In the protein–ligand binding process, the complex between the ligand and the protein is in equilibrium with the free forms of the protein and the ligand, and starting from this equilibrium the Gibbs binding free energy can be computed. This quantity is related to the KD, the dissociation constant of the complex, which can be evaluated experimentally but can also be evaluated computationally. Among all the computational tools available, all-atom molecular dynamics is the most suitable. In this way, the binding affinities and free energies evaluated from simulations can be used to optimize promising compounds along the drug discovery pipeline. However, evaluating binding free energies computationally poses many challenges, starting from the complexity and size of the systems we have to simulate, but also because of the wide range of time scales of the phenomena involved, from femtoseconds for bond vibrations up to seconds for the protein–ligand binding process. For this reason we cannot use standard molecular dynamics; we have to use enhanced sampling methods, because they are able to accelerate rare events and expand the reachable time scales. Enhanced sampling methods can be divided into different families: methods based on non-physical (alchemical) paths and methods based on physical paths. The first family relies on the fact that the free energy is a state function, and follows alchemical transformations. These methods are often used to estimate relative binding free energies; they are in some ways simple to implement, but they don't offer any kinetic or binding-mechanism information. On the other side, the methods based on physical paths require the identification of collective variables; they are used to define the potential of mean force, and they reveal kinetic and mechanistic information. However, the definition of the collective variables can be very difficult.
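The relation between the binding free energy and the dissociation constant mentioned here is ΔG = RT ln(KD/c0), with c0 the standard 1 M concentration. A small sketch (the numbers are generic, not from the talk):

```python
import math

R = 0.0019872  # gas constant in kcal/(mol*K)

def dg_from_kd(kd_molar, temperature=298.15):
    """Binding free energy from a dissociation constant:
    dG = RT * ln(KD / c0), with standard concentration c0 = 1 M."""
    return R * temperature * math.log(kd_molar)

# A nanomolar binder corresponds to roughly -12 kcal/mol at room temperature:
print(round(dg_from_kd(1e-9), 1))   # → -12.3
```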
Between these two families of methods, we decided to use methods based on physical paths. Then, after defining the method used to sample the rare event, we have to define the estimator. Binding free energy estimators can be divided into equilibrium free energy estimators and non-equilibrium free energy estimators. In the first family we can find the free energy perturbation approach or the Bennett acceptance ratio method, while in the second family we can find the Jarzynski estimator and the Crooks fluctuation theorem, and this second kind of estimator is what we decided to use. So, the Jarzynski equality is a simple equality relating the work done during a transformation to the free energy of the transformation itself; in our case, the work done during the unbinding transformation to the binding free energy. The Jarzynski estimator is fairly simple to implement, but it can be biased given a limited number of observed work values, and the exponential average is sensitive to rare events. For this reason we wanted to use a two-sided estimator, based on the Crooks fluctuation theorem, because it is known to be more accurate. And here I present the Crooks fluctuation theorem: as we can see, it is simply a relation between the forward and backward distributions of the work values, and the intersection point between the two distributions, in our case the work values for the unbinding and binding events, gives us the binding free energy. After defining the sampling approach and the estimator, we have to define a computational pipeline for computing the binding free energy. What we decided to do was to take a pre-existing pipeline, published some years ago by our lab, and to improve and modify it. In the first step of the pipeline, we use enhanced sampling methods to generate a preliminary path describing the rare event of interest.
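The Jarzynski estimator mentioned here can be sketched in a few lines. The Gaussian toy data below exploit the known closed form for Gaussian work distributions, ΔF = μ − βσ²/2, to show that the exponential average recovers the right answer (and it does so slowly, with the small-sample bias the speaker mentions). Everything is illustrative.

```python
import math, random

def jarzynski_dg(works, beta=1.0):
    """Jarzynski estimator: dF = -(1/beta) * ln< exp(-beta*W) >.
    Biased for a finite number of work values and dominated by rare
    low-work trajectories, which is why a two-sided (Crooks/BAR-style)
    estimator is preferred in the talk."""
    n = len(works)
    return -math.log(sum(math.exp(-beta * w) for w in works) / n) / beta

# For a Gaussian work distribution with beta = 1 the exact free energy is
# mu - sigma^2 / 2 = 5 - 0.5 = 4.5.
random.seed(0)
mu, sigma = 5.0, 1.0
works = [random.gauss(mu, sigma) for _ in range(200000)]
print(jarzynski_dg(works))   # approaches 4.5 for many samples
```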
So what we decided to use was adiabatic bias molecular dynamics with an electrostatics-like collective variable to describe our unbinding and binding events. Then, among all these preliminary paths, we select the one trajectory that has the smallest simulation time and is mechanistically sound, and we call it our initial guess path. The initial guess path is then used in conjunction with two algorithms, the principal path algorithm and the distant waypoint algorithm. The first one cleans up the putative minimum free energy path, composed of consecutive conformations capturing the rare event, and the second one defines a minimum free energy path with uniform spacing, in terms of distance, between consecutive frames. In this way, after this step, we obtain an optimized minimum free energy path describing our unbinding event. In the original pipeline, they decided to use well-tempered metadynamics in conjunction with path collective variables, which is an inherently serial algorithm. What we decided to do instead was to use steered molecular dynamics to perform the simulations following the path for the unbinding and binding directions, because we know that steered molecular dynamics is trivially parallel. Then, after running steered molecular dynamics with multiple replicas, we computed the standard binding free energy as a sum of two terms: the binding free energy from simulation and the standard-volume correction term, where the binding free energy from simulation is simply computed from a ratio of partition functions. Okay, so here I try to illustrate what it means to use path collective variables. Path collective variables are a specific type of collective variable published some years ago by Branduardi and coworkers. They are called S and Z, where S describes the progress of the system along the path, while Z describes the distance from the path itself.
So here I try to show it better: after applying our algorithms we define a minimum free energy path describing the unbinding and binding events, and then, by using the path collective variables, we can perform steered molecular dynamics for the unbinding and binding events. Okay, so which systems did we decide to use? Three different systems, increasing in complexity and size. A simple host–guest system as a toy model for fine-tuning and validating our pipeline; trypsin–benzamidine as a benchmark system for free energy strategies; and finally the Abl tyrosine kinase–Gleevec complex, because it is an interesting therapeutic system. For the CB8-G8 host–guest system we followed the stages of our pipeline: we tuned the adiabatic bias molecular dynamics, then we optimized the minimum free energy path, and then, using the path collective variables, we performed 50 replicas of binding and unbinding events at five different simulation times, from 10 up to 100 nanoseconds. After the steered molecular dynamics, we reconstructed the work profiles of our transformation, applied our estimator, and reconstructed the potential of mean force. To reconstruct the potential of mean force we applied the estimator at each value of the S variable; in this way we obtain one point of the free energy surface along our collective variable, and so one point of the potential of mean force. I should also specify that instead of using the simple Crooks fluctuation theorem, we realized that it has the same mathematical formalism as the Bennett acceptance ratio method, and so what we really implemented was the Bennett acceptance ratio method, replacing the internal energies with the work values. Okay, so we applied our estimator, as I said, to reconstruct the potential of mean force.
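A sketch of the estimator she describes: the Bennett acceptance ratio applied to forward and backward work values. ΔF is the point where the two Fermi-weighted sums agree, found here by bisection; the Gaussian toy distributions satisfy the Crooks relation by construction, and all names and numbers are illustrative.

```python
import math, random

def bar_dg(w_forward, w_reverse, beta=1.0, lo=-50.0, hi=50.0, tol=1e-8):
    """Bennett acceptance ratio applied to nonequilibrium work values
    (the Crooks-based estimator in the talk): find dF such that
    sum_F fermi(beta*(W_F - dF)) == sum_R fermi(beta*(W_R + dF)).
    Equal numbers of forward and reverse work values are assumed."""
    fermi = lambda x: 1.0 / (1.0 + math.exp(min(x, 500.0)))
    def h(df):
        return (sum(fermi(beta * (w - df)) for w in w_forward)
                - sum(fermi(beta * (w + df)) for w in w_reverse))
    while hi - lo > tol:        # h is increasing in dF, so bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Gaussian toy model obeying Crooks with beta = 1: forward works
# N(dF + s^2/2, s), reverse works N(-dF + s^2/2, s), true dF = 4.5.
random.seed(1)
dF, s = 4.5, 1.0
wf = [random.gauss(dF + 0.5, s) for _ in range(5000)]
wr = [random.gauss(-dF + 0.5, s) for _ in range(5000)]
print(bar_dg(wf, wr))   # close to 4.5
```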
And then from the potential of mean force we had to define a discriminating frame, able to discriminate between the bound and unbound states, and we did that through visual inspection, analyzing our potential of mean force. After the definition of this frame we could integrate our potential of mean force, and finally we compute the binding free energy as a simple ratio between the partition functions. Okay, so the first results that we have, for CB8-G8, are reported here. What we can see is that, considering an experimental binding free energy of −13.5 kcal/mol, we had good agreement between our computed results and the experimental data, and with increasing simulation time the Crooks-based estimator was able to converge towards the experimental value. Considering our second system, trypsin–benzamidine, we followed the same stages of the pipeline, so we performed the 50 binding and unbinding replicas. Then, considering an experimental value of −6.2 kcal/mol, we could see that the free energies computed by the Jarzynski estimator gave upper and lower limits of the experimental value, while the Crooks-based estimator had good agreement with the experimental value, and we could see good convergence behavior for it. Moreover, we also tried to quantify the sensitivity of the binding free energy with respect to the discriminating frame, and we found a low sensitivity with respect to the frame used to define the bound and unbound states. Finally, for the Abl–Gleevec system, the most complex one, we completed the initial stages of our pipeline, while the 50 replicas of binding and unbinding events are still ongoing.
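The last step above, the free energy as a ratio of partition functions on either side of the discriminating frame, might look like this (toy PMF and a crude rectangle-rule integral; this is not the actual pipeline code):

```python
import math

def dg_from_pmf(s_values, pmf, s_star, beta=1.0):
    """Free energy difference from a PMF along the path variable S:
    dG = -(1/beta) * ln(Z_bound / Z_unbound), with the two partition
    functions obtained by summing exp(-beta*G(s)) on either side of the
    discriminating frame s_star."""
    zb = sum(math.exp(-beta * g) for s, g in zip(s_values, pmf) if s <= s_star)
    zu = sum(math.exp(-beta * g) for s, g in zip(s_values, pmf) if s > s_star)
    return -math.log(zb / zu) / beta

# Toy PMF: a bound well 5 kT deep below s_star = 4.5, flat unbound region.
s_vals = list(range(10))
pmf = [-5.0 if s < 5 else 0.0 for s in s_vals]
print(round(dg_from_pmf(s_vals, pmf, 4.5), 2))   # → -5.0
```

The standard-volume correction discussed later in the Q&A is then added on top of this simulation value.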
So, as a conclusion, we can say that for small and moderately sized protein–ligand complexes we found that the Bennett acceptance ratio estimator, coupled with the path collective variables, is able to converge, and our approach gives accurate binding free energy values together with mechanistic information from the path. The integration of path collective variables and bidirectional steered molecular dynamics enables both much faster convergence rates and trivial parallelization of the simulations, significantly reducing the overall time required to obtain accurate binding free energy estimates. Finally, for bigger systems of pharmaceutical interest, such as our Abl–Gleevec complex, we are now in the process of challenging our approach. So I want to thank my supervisor and my collaborators, and thank you for your attention. — Thank you very much, Eleonora. And now we go further; if you could stop sharing, thank you. And now we have our last speaker: Ricardo, please. — Okay, can you hear me? — We hear you, but a little soft. — Okay, is it better now? — Yeah, perfect. And if you can go to full screen, we'll see your presentation. — Yes, one second... I'll try to share again then. — Yeah, sure. — Okay, I want to speak about the development of a pharmacological chaperone for the treatment of tyrosinemia type 1. To understand this project we have to look at the groundwork that was laid in the lab: the pathological phenotype of tyrosinemia type 1, the characterization of the target, and the fragment screening that was done. This concluded in rational design and evolution, but to understand it, let's look at the background. So, what is tyrosinemia type 1? If we look at the catabolism of tyrosine, we see that several enzymes catabolize tyrosine through its final steps down to fumarate and acetoacetate.
And if one of these enzymes fails to express its function, we will have different types of metabolite accumulation, which can lead to tyrosinemia type 2 or tyrosinemia type 3. The worst one is tyrosinemia type 1, when the last enzyme fails its function, because this leads to the accumulation of fumarylacetoacetate and of a byproduct, succinylacetone. This molecule is highly toxic, it induces hepatocarcinoma, and the people carrying these mutations can die by two years of age. There is a drug that blocks the production of succinylacetone by blocking the catabolism of tyrosine at the very beginning. So, in our lab, my colleagues characterized the target, which is FAH, fumarylacetoacetate hydrolase, a dimeric protein of 96 kilodaltons. My colleagues found that the mutations do not impair the catalytic site; rather, most of the mutations destabilize the dimeric form toward the monomeric one, increasing aggregation and thus taking the protein out of the equation in the cells. In fact, a common patient mutation is glycine 337 to serine. With a CRISPR/Cas model in HEK cells, we see by Western blot that with this mutation we don't have any protein. This is valid also for mice: we have a mouse model in which we see that the mice die after 20 days, in agreement with the pathological phenotype in humans. So, let's go to the more computational part: we did the fragment screening. Colleagues identified the catalytic site and the dimer interface, and screened several fragments with docking. The best hits were then screened, mainly with the contribution of a colleague [name unclear in the recording], and this gave an idea of where the binding site is and which were the best hits. In fact, one of these fragments could be co-crystallized, and that's where we start the fragment evolution. For this fragment evolution we used a symbiosis between docking, ITC, STD NMR and crystallography.
So, we started with molecular modeling: looking at the catalytic site, we see that the natural substrate is a polyketo acid that coordinates a calcium ion. Our compound is quite different in nature: it is hydrophobic, and it binds to a hydrophobic patch, where it makes π-stacking with a phenylalanine. So we have a crystal structure with part of the substrate and our fragment, and we characterized the binding by ITC, which actually confirmed that the binding is mainly entropic: looking at the ΔG contributions, the red part, the entropy of binding, is quite high. We wanted to create more specific binding, and we did so by copying nature: we tried to enrich the scaffold by adding carbonyls and moieties that the protein may have developed a taste for over its years of evolution. So what did we do? We selected some compounds: from the same database we screened for compounds with around 20 atoms and a partition coefficient between 2 and 3, selected a batch of these compounds, docked them, and looked at the poses to see whether they could have a rational meaning. Then we selected only 10 compounds, so it's a low-budget screening, because we looked for an interaction with the π-stacking region in the hydrophobic patch and a distance in between of 9 to 13 ångström, like the one between the substrate and our hit fragment. We selected plausible poses, bought these 10 compounds, and screened them by STD NMR. Luckily, four out of the 10 compounds showed an STD effect; but still, STD can be displayed even at a 5 millimolar KD, so we were not happy: we wanted to verify that these compounds actually improved the binding. We did epitope mapping to see which part of the compound goes inside the pocket: by STD you can see that the carboxylic acid was the part going deepest into the pocket. And we had confirmation by ITC that the enthalpy contribution is increasing, the green column.
And also the binding is better: the KD is lower, 120 micromolar, so we were happy about that, and we kept following the same rational design path. We reproduced the same exact selection of fragments and docking: the best pose from the docking is used as the input fragment to search for other similar compounds, with a Tanimoto coefficient of 0.7, and at the end we selected, because we wanted a low-budget screening, another seven compounds that maintain the π-stacking and display the carbonyls in a way that makes sense in the binding pose. Five out of the seven compounds displayed STD, and one of them displayed a very strong STD effect in NMR, so we tested it by ITC and we actually got to the nanomolar range, well, the micromolar-to-nanomolar range. And the binding contribution is more enthalpic than before. Here is a recap of the process, showing how we increased the binding affinity: we lowered the KD and we increased the enthalpy of the binding. Moreover, this was the original crystal structure; we could soak the compound into the crystal and see that it fits exactly as in the docking pose: the π-π stacking with the phenylalanine 141 and the coordination through the carboxylic acid. The crystal structure has a root mean square deviation from the docking pose of only 0.55 Å, so we can say that the docking model predicted the crystal structure, and we are definitely really happy about this, because, as you see, there is the electron density and the binding diagram, and the resolution of the crystal structure is 1.2 Å. Now we wanted to assess whether this model works, whether the stabilization of the protein is real: the compound can bind, but we don't know if it gives a chaperone effect. So we used an experiment called DOSY, diffusion-ordered NMR spectroscopy, in which we look at the diffusion coefficient of the molecule in solution.
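The Tanimoto similarity filter used in the search for analogues can be sketched with fingerprints represented as sets of "on" bits; the fingerprints below are made up for illustration and are not real molecular fingerprints.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints stored as sets of
    'on' bits: |A & B| / |A | B|. A 0.7 cutoff, as in the talk, keeps
    only close analogues of the query fragment."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

query = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
library = {
    "analog_1": {1, 2, 3, 4, 5, 6, 7, 8},               # Tanimoto = 0.8
    "analog_2": {1, 2, 3, 4, 5, 11, 12, 13, 14, 15},    # Tanimoto ≈ 0.33
}
hits = [name for name, fp in library.items() if tanimoto(query, fp) >= 0.7]
print(hits)   # → ['analog_1']
```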
So we take the protons between 0.85 and 1 ppm, and we check how fast the molecule moves in solution: the bigger the diffusion coefficient, the smaller the molecule. So, of course, the blue one is the mutant, and at 41 degrees we see that the mutant is diffusing faster in solution, meaning that the population is shifted toward the monomeric state; this is not true for the wild type, the violet one, which is stabilized in the dimeric state, because it has a lower diffusion coefficient. So, when we add the compound to the mutant, the red bar, we see that the population is shifted toward the dimeric state. That means that, thermodynamically, our compound stabilizes the dimeric state. We can also probe the kinetics of this stabilization by doing an aggregation assay, checking the percentage of signal over time. And we see that without our compound the mutant degrades at a certain rate, but when we add the compound to the solution, the decay of the signal is much slower: we have 70% more signal after four hours, in three replicates. So we confirmed that there is a chaperone effect, at least in vitro. We also did a collaboration with Gonzalo's group: we did a screen of two million compounds, from which three compounds were successfully characterized in the low micromolar range, and two of them were co-crystallized. The future perspective of this study is to confirm the results in vitro and in the mouse model that I already showed, and to create a new chemical entity with good pharmacological properties. I would like to thank you for your attention, the organizers of BioExcel, and the group in which I am working, which is a really big group, of metabolism and precision medicine. Thank you. — Thank you very much, Ricardo; thank you, Daniel. And now we can open for questions. There are some questions already; please go on asking questions in the Q&A. We start with a question that we have, and I will unmute Antonio so you can ask the question.
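DOSY reports translational diffusion coefficients, which the Stokes-Einstein relation ties to particle size. This sketch (with an assumed, hypothetical hydrodynamic radius) shows the logic used above: the dimer, being bigger, diffuses more slowly than the monomer.

```python
import math

def stokes_einstein_d(radius_m, temperature=298.15, viscosity=8.9e-4):
    """Translational diffusion coefficient from the Stokes-Einstein
    relation, D = kB*T / (6*pi*eta*r) -- the quantity a DOSY experiment
    reports. Radius in meters, viscosity of water in Pa*s."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * temperature / (6 * math.pi * viscosity * radius_m)

# A dimer has roughly twice the volume, i.e. 2**(1/3) times the radius,
# of a monomer (assumed radius 2.5 nm, purely illustrative):
d_monomer = stokes_einstein_d(2.5e-9)
d_dimer = stokes_einstein_d(2.5e-9 * 2 ** (1 / 3))
print(d_dimer < d_monomer)   # → True: the dimer diffuses more slowly
```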
Just give me a moment to find you in the participant list, or maybe Otto can help me. — Yes, I can allow you to speak, Antonio, so you can ask your question directly. — Hello, do you hear me? — Yes, if you can speak a little louder, that would be great, please. — Okay, do you hear me now? — Perfect. — I was just wondering if Eleonora could explain again what the path collective variables are and how she defined them for her system. That's the question, thank you. — Thank you. — Okay, so, hi Antonio. Path collective variables are actually implemented in PLUMED. What we did was to use our algorithms to define the path, which is simply a set of consecutive frames describing your event, in our case the unbinding and binding events. The path collective variables, as implemented, then allow you to specify that you want to follow that specific path. S is an equation that quantifies, in simple words, the distance between the frames in the path and the real position of your system during the simulation; you define this distance and you put a wall, a potential, that makes the system follow the path. So it's like you are defining a tube, and your system will follow the tube. And the same for Z: S is used to follow the progress along the path, while Z keeps the distance in the orthogonal direction minimized. They are simply equations implemented in PLUMED, and you define a force to keep the system following your path. I don't know if that was clear. — Okay, so PLUMED pretty much... you define the path, and then with PLUMED you follow the path by using S and Z. — Yes. — Okay, okay, perfect. Thanks for your talk. — Thank you, Antonio. And now we go to the next participant. I will unmute you, just give me a moment. Yeah, you should be able to speak, please. — Hi, thanks a lot for these very nice talks.
I was wondering, from the second talk, about the influence of several parameters on the PMF for the SMD simulations: the pulling speed, for instance, and the number of SMD simulations. Also, if you have a path with a large volume, for instance a large channel to sample, will that also influence your PMF, and how do you deal with that? Okay, thanks for that. Regarding the number of simulations: we started with a small number of replicas for the binding and unbinding events, and then we wanted to reach a convergence point. So we performed bootstrapping and we checked at what number of replicas we reach convergence. So yes, of course the number of replicas influences your results, also because when you apply, for example, the Jarzynski equality, you compute an ensemble average, so it has an influence. You find the right number of replicas by performing bootstrapping and identifying the number of replicas that allows the value to converge. The second point was the velocity. Yes, the velocity is very important, because if you pull very fast during steered molecular dynamics you can get a very high value of dissipated work, and this can be a real problem when you are using out-of-equilibrium estimators. We started with 10 ns simulations, for example, but we had a very high value of dissipated work, and for this reason we decreased the velocity, so we increased the simulation time. So these affect the potential of mean force a lot, and also the final free energy. And then I don't remember the last point, sorry. If you have a system where you have a large path with a large volume to sample, so it's not a direct route, but you have a lot of degrees of freedom along the path: I was wondering if the methods are applicable to that situation.
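The two points in this answer, the Jarzynski ensemble average over replica work values and the bootstrap check of how the estimate depends on the number of replicas, can be sketched as follows. This is a generic illustration, not the speaker's actual analysis script; kT is assumed to be in the same units as the work values.

```python
import numpy as np

kT = 2.494  # kJ/mol, roughly 300 K

def jarzynski(works):
    """Jarzynski estimate dF = -kT * ln <exp(-W/kT)> over replica works.
    The minimum work is subtracted first for numerical stability."""
    w = np.asarray(works, float)
    wmin = w.min()
    return wmin - kT * np.log(np.mean(np.exp(-(w - wmin) / kT)))

def bootstrap_error(works, n_boot=1000, seed=0):
    """Bootstrap standard error of the Jarzynski estimate: resample the
    replica works with replacement and take the spread of the estimates."""
    rng = np.random.default_rng(seed)
    w = np.asarray(works, float)
    est = [jarzynski(rng.choice(w, size=len(w), replace=True))
           for _ in range(n_boot)]
    return float(np.std(est))
```

One would add replicas until `bootstrap_error` (and the estimate itself) stops changing, which is the convergence criterion described in the answer; large dissipated work shows up as a wide spread of W and a slowly converging exponential average.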
At the moment we are using the two path collective variables and we define, as I said before, a sort of tube, so the volume doesn't really affect the sampling in that sense. But it does matter in the sense that, when we want to compute the standard binding free energy in the end, we also have to apply a volume correction. We define a correction term, and to do so we have to define the sampled volume. Maybe I have a slide that could help with what I'm saying. Yes, here, I don't know if you can see it. Sorry, could you put it in full screen so we can see? Yes, of course. Thank you. So when you want to compute a standard binding free energy, as I said at the beginning, you have to define a standard-state volume correction term. Here the volume matters a lot, because we want to define the sampled volume for the bound and unbound states. We used software that is able to define this volume, and then we took the ratio: the ratio of the unbound volume to V0, the standard volume. But we did that for computing the standard binding free energy. I don't know if that answers your question, or maybe I lost your point; in this sense the volume certainly affected our results. So if you have possible reorientations of the ligand, for instance, within that volume... Sorry, could you speak a little louder? We couldn't hear you. I was wondering more about the case where your ligand reorients: think of a ligand with a more tubular shape that goes out, but during this process it kind of reorients, right, because it's possible. Would that be captured, would that be possible to simulate, or will you just track the ligand in a fixed orientation on the way out?
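The volume correction being described is the usual standard-state term, -kT ln(V_sampled / V0), which refers the simulated binding free energy to the 1 M standard state. A minimal sketch, assuming kT in kJ/mol and volumes in Å³ (the exact sign convention depends on how the binding free energy is defined):

```python
import math

kT = 2.494    # kJ/mol, roughly 300 K
V0 = 1661.0   # Å^3 per molecule at 1 M standard concentration

def standard_state_correction(v_sampled):
    """Standard-state correction term -kT * ln(V_sampled / V0), added to
    the simulated binding free energy to refer it to 1 M concentration."""
    return -kT * math.log(v_sampled / V0)
```

A sampled unbound volume larger than V0 gives a negative correction, i.e. the restricted sampling volume in the simulation has to be paid for when quoting a standard binding free energy.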
During our steered molecular dynamics, the ligand follows the path thanks to the path collective variables. So there is no real reorientation along our path if we apply a high force to follow it: during the steered molecular dynamics we could miss a larger reorientation of the ligand, because the system is forced to follow the initial path, the minimum energy path. Thanks a lot. Thank you. So, while people maybe think about other questions, we still have 10 minutes, and I have a question for Daniel. You said that you use a triplet of three residues to calculate the value for the middle one. Did you check whether you get a different value depending on which residues are the neighbors? Yes, indeed, that was one of the tests; let me share my screen again. So, for example, in this case we tested the system not only for tri-alanine but for larger systems, following the previous example that I showed here, also longer alanine chains. This is penta-alanine. With this we basically want to estimate the effect of the capping atoms: if it were an infinite alanine chain, then we would expect the same distribution for all the alanines. And as we can see here, the three in the middle have the same distribution, but this one, for example, which is closest to the capping atoms, has a different distribution, and the result is the same for this one on the other end. So with this result we can conclude that the capping atoms only affect the first residue, because the rest is basically invariant to the effect of this part here. That gives us the limit of how far the effect reaches. Okay, but what I mean is: if you have an alanine between two other residue types, for example between two phenylalanines, would you expect to find the same value for the alanine?
Basically yes, because we made the same kind of calculation for other systems, not only alanine, and we obtained basically the same result: the middle residue is unaffected by whatever we change out here. The largest change would come from the first neighbor, not beyond that. Okay, okay. Thank you very much. And I also have a question for Ricardo. At the end of your presentation you said, as I understood, that you aim to do a two-million-compound screening. We already did that. Yes, I was wondering how much time you need for such a screening. In Gonzalo's group we have a cluster that has several GPUs; I don't remember exactly the capacity, but it's quite large. The storage, I think, is one petabyte, and there are on the order of 100 to 150 Titan GPUs. In that setting it was quite fast: one week. And it was flexible docking, semi-flexible. Okay, so with semi-flexible docking it took one week for two million compounds, compounds like the ones you showed, so small drug-like molecules? Yes. It was actually divided into different batches: from 10 to 15 atoms, from 15 to 20 atoms, from 20 to 25, and from 25 to 30. And it was run several times on the basis of the crystal structure that I obtained with my best hit. But in the end my screen was smaller, really smaller. In this iterative process I look at every pose one by one, because, I stress, the scoring function is not really giving you a prediction of the binding: it weighs the buried surface area more than the real, meaningful interactions in the pocket. So I looked at them one by one, and I think it was about 300 molecules in total. From the two million? No, no, this was before the two million. Okay.
Okay, I thought that from the two million you would end up with something like 300. For the two million, in order to search for the best hits, we of course applied some filters based on the previous findings. Okay. But among the molecules between 15 and 25 atoms, the molecule that I crystallized was in the best hundred molecules. So it does pay to always look at the poses, because it was among the best. Yes. Did you also plan to improve your scoring function? I did try different algorithms and different scoring functions: GOLD, AutoDock Vina, MOE, PLANTS. At some point I checked which one reproduced the crystal structure better, and I used that one; in this case it was the one that worked best. Okay, but you think it's not an absolute choice; it depends on the system. I think that is the correct approach, because in the end I use it as a tool. Yes, you didn't develop it yourself, I understood. But the correct approach would be to do a benchmark of different methods: if you have a crystal structure, you run a benchmark and you find which algorithm and scoring function best predicts your real pose, and then you use that algorithm to find new molecules. Okay, yes. Thank you very much. I don't see any other questions; if someone has one, please raise your hand. In the meantime, I will share my screen and announce the following webinar, if I can find my presentation, just give me a moment. The next BioExcel webinar will be on the 14th of November, as always at three o'clock, but this time winter time, not summer time. We will have Giovanni Bussi from SISSA speaking about thermostats and barostats. Okay, so if there are no further questions, I will close this session. I thank everybody, both the three speakers and all the attendees, and see you on the next occasion. Bye.
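The benchmarking procedure described here (redock into a known crystal structure and keep the program whose top pose best reproduces it) can be sketched generically. The function names and the plain coordinate arrays are illustrative assumptions; real docking programs have their own file formats and pose rankings, and in practice one would also handle ligand symmetry in the RMSD.

```python
import numpy as np

def rmsd(a, b):
    """Heavy-atom RMSD between two ligand poses given as (N, 3) coordinate
    arrays with matching atom order. No alignment is applied, since docking
    keeps the receptor frame fixed."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def pick_best_program(crystal_pose, redocked):
    """Given the crystal ligand pose and a dict mapping each docking
    program's name to its top-ranked redocked pose, return the program
    that reproduces the crystal pose with the lowest RMSD."""
    return min(redocked, key=lambda name: rmsd(crystal_pose, redocked[name]))
```

The selected program (with its scoring function) is then the one trusted for the prospective screen, which is the "benchmark first, then screen" approach the host recommends.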