Hello everyone and welcome to the next edition of the BioExcel webinar series. My name is Rossen Apostolov and I will be today's host. It's my pleasure today to have a presenter from BiKi Technologies, Walter Rocchia, who will tell us about some exciting new methods that they have developed. Before we start with the main presentation, I would like to tell you that this webinar is being recorded; we will post the recording on the BioExcel YouTube channel, and you will also have access to copies of the slides tomorrow or in a couple of days. For those of you who are not very familiar with BioExcel, I would like to give a very short overview of the activities in our center. BioExcel is a European center of excellence for computational biomolecular research, established two and a half years ago. In the center, we work on the development of several key applications that are widely used for biomolecular simulations, such as GROMACS for molecular dynamics, HADDOCK for integrative modeling and docking, and CPMD for hybrid QM/MM methods. We also work on the development of efficient workflows that help with the automation of common tasks, which you might find very useful yourself; we work with very popular platforms such as Galaxy, KNIME, Taverna, etc. We also provide extensive training and consultancy services, which you can find out more about on our website. Something that might be useful for you: we have organized our work into several interest groups that may match your area of research, such as integrative modeling for those of you who are more interested in docking, or free energy calculations for drug design. We also have interest groups on workflows and on applications for industry, and you can find out more about those on our website. Feel free to visit our forums, and we have an open chat channel if you have any questions. 
At the end of today's presentation we will have a questions and answers session where you can ask Walter any questions about his presentation. For that, please use the questions tab on the GoToWebinar control panel on the right-hand side. During Walter's talk, feel free to write down any question that you have, and at the end I will give you the microphone and let you speak directly with Walter and ask your question. If we have any problems with audio, I will read the question on your behalf. And it's my pleasure to present our speaker today. Walter Rocchia graduated in electronic engineering in 1996 and received his PhD in electronic devices from the University of Trento. In 2003 he joined the Molecular Biophysics Group in the National Enterprise for Nanoscience and Nanotechnology at the Scuola Normale Superiore in Pisa. In 2008 he moved to the Drug Discovery and Development Department at the Italian Institute of Technology, where he is working today. He is one of the founders of the CONCEPT Lab for computational modeling of nanoscale and biophysical systems, and he is also co-founder of BiKi Technologies, which is a spin-off company of the Italian Institute of Technology. BiKi Technologies provides biotech and pharma companies with cutting-edge solutions, mainly based on molecular dynamics and other advanced computational tools. I believe it will be of great interest for you to learn more about the tools and methods that have been developed by BiKi Technologies. So now I will pass the presentation over to Walter. Hi Walter. Hi. Okay, yes, now it's good. Sounds good. Okay. I'd like to thank everybody who is attending this seminar, and to thank Rossen for the kind introduction. I will tell you about some approaches we have developed in my group at IIT. Most of them are aimed at a more proficient use of molecular dynamics for drug discovery applications. 
And the underlying philosophy, which has been developed also through our interaction with computational chemistry groups in the pharma and biotech industry, is to try to find quick but not-too-dirty solutions, enabling people to find interesting information, even without the claim of being extremely accurate, within times that are compatible with the drug discovery pipeline. So I will basically mention three approaches. One is the usage of Scaled MD as a simple approach for residence time prioritization of congeneric ligands. The second one is Pocketron, which is a tool for the analysis of pockets along molecular dynamics trajectories. And the third one is MD-Binding, which is a method to accelerate the protein-ligand binding process in a kind of dynamical docking fashion. While binding affinity has always been thought to be the most important feature for describing and selecting a candidate drug compound, in some cases, after high-affinity in vitro binders failed against the design target, it has been conjectured that other factors might be extremely relevant as well. There are several works, mainly by Robert Copeland and David Swinney, where they state that residence time can actually be a more informative and more predictive quantity than binding affinity. This might be due to the fact that in some cases binding occurs in situations which are far from equilibrium, for example because there are clearance phenomena in the near vicinity of the target, or because of other situations that bring the system out of equilibrium. Imagine two different molecules with similar affinity, but one with faster association and dissociation rates, while the other has slower association and dissociation times: the second one should be preferred, because once it enters, it stays for a longer time. 
So there are no repercussions from potential phenomena occurring outside the binding site, let's say. Practical experimental determination of k_off can be expensive and time consuming, not just because you have to synthesize the molecules, functionalize your target, and choose among several possible experimental means of estimating the residence time of your molecules; the experimental determination can have some drawbacks in any case, and it is costly. So it would be really useful if we had a computational way to provide this estimate, and molecular dynamics can be useful in this respect, at least in principle. In classical molecular dynamics, the level of theory used to describe the system is molecular mechanics. The problem with this kind of description is that if you are considering a really good drug candidate, its residence time can be on the order of seconds, minutes, even hours in some cases. So there is no hope of describing this phenomenon with brute-force molecular dynamics. Several methods have therefore been devised to accelerate this phenomenon, which is a rare event per se; among them I'd like to mention Markov state modeling, free-energy-based methods, transition path sampling, milestoning, metadynamics, and so on. We suggest a much simpler-minded approach based on the Scaled MD tool, which I will describe in a while. Concerning, for example, methods which derive the dissociation rate from the free energy profile, several problems arise: in order to compute the free energy profile, which is needed to estimate the barrier, you need to identify a reasonable reaction coordinate for the binding, and this might not be obvious, especially if your target is not extremely well known. 
Then, among the less realistic assumptions made in this case, only ideal first-order reactions are considered, while in reality the profile can be much less ideal. And one thing that is important to stress is that a very minor error in the estimation of the free energy can result in a much larger error in the estimate of the kinetic constant. So what we suggest, in this search for quick but hopefully not-too-dirty solutions, is based on the smoothing of the potential induced by the Scaled MD approach. In Scaled MD you basically put a scaling factor in front of the potential energy of your system. In this way you linearly reduce all the barriers in the potential energy landscape; unfortunately not directly in the free energy, but you definitely reduce the barrier, even if for the free energy this is not a linear correspondence. So, if we can assume that binding, in the case of a good candidate drug, corresponds to a very deep well in the free energy, we can pay the price of losing detailed information on the interactions the molecule makes with its binding partner, provided that we obtain an estimate of the depth of this well. If we scale all the interactions in the system, you expect that the ligand will eventually come out, because you are also decreasing the interaction between the ligand and the binding site. Of course, you might say that the biggest assumption here is that the interaction between the ligand and the protein is the weakest ring in the chain, and so the one which breaks first, which may not be the case, since you are decreasing the potential energy for all the interactions within your system. 
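To give a rough feel for why this scaling accelerates unbinding, here is a minimal back-of-the-envelope sketch in Python. It is not part of the Scaled MD software: it just assumes an Arrhenius-like picture in which scaling the potential energy by a factor lambda reduces a barrier E to lambda*E, so the escape rate grows by exp((1 - lambda) * E / kT). The barrier height and scaling factor below are illustrative numbers only.

```python
import math

# kT at roughly 300 K, in kcal/mol
KT = 0.596

def expected_speedup(barrier_kcal, lam):
    """Arrhenius-style speedup from scaling the potential energy by
    lam (0 < lam <= 1): the barrier drops from E to lam*E, so the
    escape rate grows by exp((1 - lam) * E / kT)."""
    return math.exp((1.0 - lam) * barrier_kcal / KT)

# A hypothetical 12 kcal/mol unbinding barrier scaled by lam = 0.4
# is accelerated by several orders of magnitude, bringing very long
# residence times into a simulable regime.
speedup = expected_speedup(12.0, 0.4)
```

The trade-off mentioned in the talk is visible here: the smaller the scaling factor, the larger the speedup, but also the larger the loss of detail in the dynamics.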
Some other bond may break first; for example, secondary structure may get lost at some point, and even unfolding may occur. In order to prevent this, we put a slight restraint on all the backbone heavy atoms of the protein, except for those in the vicinity of the binding site. Basically, we want to strengthen the structural features which we think are not involved in the unbinding phenomenon, while avoiding any kind of bias in the interaction between the ligand and the binding site. Of course, this is a limitation of the approach, but the nice thing about it is that we don't need any idea or assumption concerning the reaction coordinate. So we leave everything related to the interaction in the near vicinity of the ligand and the binding site free, and we expect that the ligand which is linked to the binding site in the weakest way will come out first. The parameters of this approach are the scaling factor and where to put the restraints; it is a very simple and intuitive approach. The stronger the scaling, in the sense that the smaller the number you put in front of your potential energy, the faster you expect to observe unbinding, but also the larger the loss of detail; it is like zooming out from your system. So you need to find a trade-off, and you decide this trade-off based on the accuracy you want to achieve and the computational resources you have. We perform several replicas, all of them starting from the bound pose, and then we record when the ligand has left, or interrupted its interaction with, the binding site. 
Then we average this time over the replicas; the nice thing is that these replicas are completely independent, so the approach is trivially parallel. The connection between the simulated unbinding time and the real unbinding time is shown in this slide, where you can see that the scaling parameter acts on the enthalpic part of the energy. But since in drug discovery we usually make this kind of comparison among congeneric ligands, we may expect that the difference in the unbinding entropy from one ligand to another shouldn't be large. So when you consider the ratio of the unbinding times, both in the simulated case and in the real case, you should be able to neglect the entropic part, and thus have a quite simple relationship between the ratio of the simulated unbinding times and that of the real unbinding times. Our first aim was to recover the right ranking; by ranking, I mean that we would like to identify first the ligand which has the longest residence time, and then order the others with respect to residence time. In many cases it is actually enough just to separate the long-lived interactions from the other ones, so the exact ranking is not necessary in many cases. So let's see how it works. As I told you, this is a very simple-minded approach, but also pretty easy to implement. We wanted to see whether it was working or not; at that time, which was roughly four years ago, we took the residence time data that were available in the literature. We took some cases of triazines binding to the adenosine receptor, from the series developed by the Heptares company, ligands for the HSP90 target, and a few ligands for GRP-78, which is involved in autoimmune, neurodegenerative, and metabolic diseases. 
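The ranking step and the Arrhenius-like regression described above can be sketched as follows. This is an illustrative Python reconstruction, not the actual protocol code: the ligand names and exit times are hypothetical, and the only assumption carried over from the talk is that, for congeneric ligands with similar unbinding entropy, ln(real residence time) is roughly linear in ln(simulated exit time).

```python
import math

def rank_by_unbinding(mean_exit_times):
    """Order congeneric ligands from longest- to shortest-lived
    according to their mean scaled-MD exit times."""
    return sorted(mean_exit_times, key=mean_exit_times.get, reverse=True)

def fit_log_log(sim_times, real_times):
    """Least-squares fit of ln(real residence time) against
    ln(simulated exit time); under the Arrhenius-like assumption
    the two are linearly related for congeneric ligands."""
    xs = [math.log(t) for t in sim_times]
    ys = [math.log(t) for t in real_times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical mean scaled-MD exit times (ns) for three congeneric ligands.
sim = {"lig1": 2.0, "lig2": 8.0, "lig3": 30.0}
ranking = rank_by_unbinding(sim)  # ["lig3", "lig2", "lig1"]
```

Once a few ligands with known experimental residence times are available, `fit_log_log` gives the regressive model that can place new congeneric compounds on the graph.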
And finally, through the company, we started collaborations with some pharma companies which had real cases with a lot of experimental data, and they wanted to challenge our approach. One of them was with Servier, which is based in Paris; they tested and validated the approach on the glucokinase target, and you will see the results in a while. Within another collaboration we are also testing the approach on fragments on top of ligands. So these are the results. Generally speaking the results are pretty good, in the sense that we are always able to separate the slow unbinders from the fast unbinders. Even more, if we take one of them as a reference and build a regressive model based on the Arrhenius-like formula I showed you before, we get a pretty good regressive model that could be used to predict the residence time, or at least the positioning within this graph, of new congeneric compounds. This is the case of the pyrazole inhibitors of HSP90; this is the case of purine analogs binding to GRP-78; and this is the case of the triazines binding to the adenosine receptor. These results are reported in this Scientific Reports paper. This was the work done in collaboration with the Servier pharma company, published in J. Med. Chem., again in 2016. As you can see, in some cases the correlation is better; in this last case there was, I would say, a larger challenge, because the ligands had different scaffolds, so we are less consistent with the assumption that the ligands are really congeneric, but still the results are pretty interesting. And this is the paper. Okay, so in summary, for the first approach I wanted to show you, we can say that Scaled MD simulations seem to be able to provide an estimate of relative residence times, and this estimate can be obtained in a reasonable amount of time with reasonable computational resources. 
This makes it compatible with the hit-to-lead and lead-optimization drug discovery phases. In many of the cases we have experienced, the correct ranking was recovered, and in some cases a regressive model was also predictive for new compounds. I would like to stress that what we are actually simulating here is the residence time, the strength, if you want, of the interaction of the initial binding pose. That means we need to start from a good approximation of the actual binding pose between the target and the considered ligand; otherwise you might get a faster exit time simply because the ligand was not put in the right position and was not making the right interactions with the target. Okay, now let's switch to the second approach, which is Pocketron. We started from the need to analyze long molecular dynamics trajectories in an almost automated fashion. We wanted to have an idea of what was occurring on the surface of the protein, without having a specific site in mind, but with something that describes the dynamics of the entire surface of the protein. There are of course very many different approaches to statically identify pockets on the surface of a protein; the list here is just an example, I make no claim of giving a complete review. Some of them are based on Voronoi diagrams, some use grids, and some are based on the molecular surface and probes. Other approaches also consider structural ensembles and trajectories; in some cases they need a preliminary structural alignment, in other cases they identify pockets based on the atoms which form the pocket itself. They can then perform several kinds of analyses, of the shape, of physico-chemical parameters, of evolutionary parameters; there are plenty of different analyses that can be done, for example to determine whether a pocket is druggable or not. 
In our case we had some requirements in mind. We wanted an analysis which was intrinsically dynamical, taking into account the chain of events occurring during the molecular dynamics trajectory, without needing to focus on one specific pocket, and giving a description of the whole surface. We wanted a fast and possibly parallelizable approach, and in the end we wanted some graphical way to have a synthetic interpretation of the results. So the solution we decided on for those requirements was a pocket definition based on the molecular surface concept; I will tell you in a while why we did this. We didn't want to rely on a preliminary alignment, but we wanted an atom-based identification of the pocket. We need, of course, to be able to connect what we observe in terms of pockets at one time with what we observe at another time, so we needed to do pocket matching between different frames of the trajectory. We decided to use the NanoShaper tool that we developed a few years ago, because it is parallelizable and it is a very robust builder of molecular surfaces. In the end we decided to provide some useful graphical representations of volume and surface area along time. This is the definition of the molecular surface, at least the one according to the Lee-Richards definition: you take a solvent probe rolling over your van der Waals system, then you fill up all the spaces where the probe cannot enter, and then you have a surface. NanoShaper was born to be coupled with a Poisson-Boltzmann solver, as we did for the DelPhi Poisson-Boltzmann solver, in order to calculate the molecular surface. At the same time it identifies cavities and pockets, I will tell you in a while how, and it calculates the volume, the surface area, and the atoms that face into the cavity itself. 
The way it works, and the reason why it is quite robust, is that it puts the system into a grid and casts rays along the grid lines; it has an analytical description of the patches that compose the surface, so it calculates with very high accuracy the intersections between these rays and the patches. In this way it determines the positions where the surface is located, and then it can triangulate and perform all the functionalities I described to you. NanoShaper is freely downloadable from our website. So for the time being we have a tool that builds the molecular surface; let me just add that NanoShaper has recently been integrated into the VMD tool as a further option to build molecular surfaces. This is just an example of the computational performance in terms of surface building and cavity calculation on a pretty large system; the NanoShaper tool is described in the PLOS ONE paper. And then let's go to the pockets. If you have a tool which is very practical and reliable for building the molecular surface, you can imagine applying it twice, changing the probe radius. First you use a small probe radius, so your surface has more invaginations; then you calculate the molecular surface with a larger probe radius, so that you have a smoother version of the surface. Then you take the volumetric difference between the volumes enclosed by these two surfaces, and what you get in the end is what is shown here in black, which is a surface pocket, basically. Okay, so this was the first part, the static identification of pockets. Then, as I told you, you need a way to connect pockets observed at a given time with pockets observed at a later time in a molecular dynamics simulation. We do this with the Jaccard index: we consider the atoms that were composing all the pockets at the previous time. 
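The two-probe volumetric difference can be illustrated on a toy one-dimensional occupancy grid. This is only a conceptual sketch of the pocket definition, not NanoShaper's actual ray-casting implementation: voxels inside the smooth surface (large probe) but outside the tight surface (small probe) are the surface-pocket voxels shown in black on the slide.

```python
def pocket_mask(tight_inside, loose_inside):
    """Voxels enclosed by the smooth (large-probe) surface but not by
    the tight (small-probe) surface belong to surface pockets."""
    return [l and not t for t, l in zip(tight_inside, loose_inside)]

# Toy 1-D slice of an occupancy grid: True = inside the surface.
tight = [True, True, False, False, True, True]   # small probe enters the cleft
loose = [True, True, True, True, True, True]     # large probe rolls over it
pocket = pocket_mask(tight, loose)               # the cleft voxels
```

Summing the pocket voxels (times the voxel volume) gives the pocket volume that Pocketron monitors along the trajectory.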
Then we consider the atoms composing the pockets at the current time, and we look for the pockets which share the maximum number of atoms: these are the same pockets, just evolved along time. If there was no match before, it means that a new pocket was created; if some atoms are no longer exposed to the solvent, it means that the pocket closed. So we perform all these kinds of data collection. And we observe quite often, I would say very often, a phenomenon that we call merge and split, in the sense that you can have two nearby pockets that at some point separate, with some of the atoms going into another pocket while some stay in the original one, and similarly they can merge together into a larger one. So we also collect these merge and split events. This is an example of the analysis we can perform. Say you have pockets, indicated in blue here; this is the PNP enzyme case, a trimeric enzyme. You can identify the largest pockets and the most persistent ones along time, you can monitor the volume along the simulation, and, as I was telling you, you can count the merge and split events. You can also ask yourself whether these merges and splits have a physical meaning or some consequence. To do this, we represent the data as graphs that tell us how much crosstalk is present between these pockets; by crosstalk I mean exactly the fact that these pockets share some atoms, sometimes an atom belongs to one pocket and sometimes the same atom belongs to another. And this is actually the basis of the Pocketron approach. As you can see here, on the right we applied this representation and this analysis to the Abl kinase system, for which a lot of knowledge is available. We are representing here only the pockets which have a larger volume and a quite significant persistence along the molecular dynamics simulations. 
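A minimal sketch of the frame-to-frame pocket matching via the Jaccard index might look like this. The pocket names and atom indices are hypothetical, and Pocketron's real bookkeeping, including merge and split detection, is more involved than this illustration.

```python
def jaccard(a, b):
    """Jaccard index between two pockets given as sets of atom indices."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_pockets(prev_frame, curr_frame, threshold=0.0):
    """For each pocket in the current frame, find the previous-frame
    pocket sharing the most atoms; an unmatched current pocket is
    newly opened, an unmatched previous pocket has closed."""
    matches = {}
    for name, atoms in curr_frame.items():
        best, best_j = None, threshold
        for prev_name, prev_atoms in prev_frame.items():
            j = jaccard(atoms, prev_atoms)
            if j > best_j:
                best, best_j = prev_name, j
        matches[name] = best  # None means a newly opened pocket
    return matches

prev = {"P1": {1, 2, 3, 4}, "P2": {10, 11, 12}}
curr = {"A": {2, 3, 4, 5}, "B": {20, 21}}
m = match_pockets(prev, curr)  # A continues P1, B is newly opened
```

A merge event would show up as two previous-frame pockets both matching the same current-frame pocket with a sizable Jaccard index, and a split as the reverse.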
We represent pockets as balls: the size of the ball is proportional to the volume of the pocket, the color codes for the persistence, and the width of the edges between the pockets is related to the amount of crosstalk between them. In this case we were actually quite surprised to find a correlation between the crosstalk network identified with our approach and the allosteric connection between pockets in this system. In the left two panels we have the non-mutated isoform of the protein: the top one without the allosteric binder, the bottom one with the myristate allosteric binder. The pocket where the myristate is supposed to bind is this one, in the lower-left part; I don't know if you can see the cursor. In the right column you can see the mutated isoform, with a mutation at position 315, again without and with the myristate. What we observed is that in the case of the mutated protein without the myristate bound, the crosstalk network coming from our analysis was interrupted, while when the myristate was bound, the crosstalk network was active: the two pockets were connected. So this was a nice indication that possibly there is some fingerprint of allosteric connection passing also over the protein surface. This is better described in the ACS Central Science paper that you can see on the left. We also tested what was going on by estimating the binding affinity of the dasatinib inhibitor in the orthosteric site in the presence and absence of the myristate binder. So, in the end, the Pocketron approach seems able to provide an efficient description of the main geometric features of the pockets emerging in a molecular dynamics trajectory. This analysis shows a highly dynamical environment; we were able to observe a lot of merge and split events, and we tried to see whether these events can have some meaning. 
We tried to correlate the crosstalk network with allosteric links in the Abl kinase system, and we are now performing the same analysis on different systems to see whether this feature is also observed elsewhere. In any case, this tool is able to perform an automated analysis and description of the pockets along a molecular dynamics simulation, and it can be complementary to other computational protocols which aim at targeting some binding site or potential binding site. And the last approach, the last method that I would like to show you, is MD-Binding. MD-Binding stemmed from the need to accelerate the binding between a ligand and its protein target. You can do this via brute-force MD; it has worked sometimes, by means of a very impressive computational effort, such as the one made by the D. E. Shaw research group thanks to the Anton supercomputer, or for example with GPUs and other computer architectures. In any case, if you are considering a general case, binding is still a pretty demanding phenomenon to study by computational means, especially if you need some statistics out of it, which is necessary in order to estimate thermodynamic observables. Therefore, enhanced sampling techniques have been devised to get statistics from molecular dynamics simulations, but those tools need some collective variable which is supposed to be a good approximation of the reaction coordinate. This is not obvious: if you have a wrong or too degenerate collective variable, like the simple distance between the ligand and the binding site, you are not accelerating the concerted binding process. 
And if you push your system too much with your enhanced sampling method, in order to observe the phenomenon you want to observe, you end up with unphysical phenomena. So we wanted to see whether we could do something different, and we exploited one natural force, one of the ways nature actually accelerates association events: electrostatics. But our electrostatics is not linked to the real electrostatics of the system; we are just adding a bias between the ligand and the binding site. So you need to know where your ligand is, which atoms compose your binding site, and to have some basic, mostly geometrical, information concerning the binding site, which we obtain from NanoShaper. Then you start your simulation attracting the ligand. The shape of the bias is this one; it is very similar to the traditional electrostatic energy of the system. We use the coefficient in front of it to make the protocol adaptive. By adaptive, I mean that it rescales the strength of the attraction by sensing the total force that the ligand is feeling, the regular force, not the biased one; and it is also adaptive in the sense that when it senses that the transition state has been passed, the bias is automatically switched off. This is an example of how it works. We did this for Src kinase interacting with the PP1 inhibitor, for which there is also the brute-force counterpart from D. E. Shaw Research. In the right panels, in the upper one, you see the RMSD with respect to the crystal; in the lower panel, you see the strength of the bias, whose shape is not based on the RMSD. We can plot the upper panel just because we know the actual final crystallographic pose, but the bias strength is based only on the position of the ligand; there is no information on the crystal. So we start this biased molecular dynamics, and we do this 20 times per pocket entrance. 
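A schematic of the adaptive logic might look like the following Python sketch. The Coulomb-like functional form, the 10% force ratio, and the distance-based switch-off criterion are assumptions made here for illustration only; they are not BiKi's actual MD-Binding implementation.

```python
def bias_force_magnitude(r, k, q_lig=1.0, q_site=-1.0):
    """Coulomb-like attraction between effective ligand and binding-site
    charges; k is the adaptive coefficient in front of the bias."""
    return abs(k * q_lig * q_site) / r ** 2

def adapt_coefficient(k, physical_force, bias_force, ratio=0.1):
    """Rescale the bias so it stays a small fraction of the regular
    (unbiased) force the ligand feels; this is the adaptive part."""
    if bias_force > ratio * physical_force and bias_force > 0:
        k *= ratio * physical_force / bias_force
    return k

def maybe_switch_off(k, ligand_site_distance, cutoff=4.0):
    """Once the ligand is judged past the transition state, here crudely
    taken as being within a cutoff distance of the site, the bias is
    switched off and the dynamics continues unbiased."""
    return 0.0 if ligand_site_distance < cutoff else k
```

In an actual run these three steps would be evaluated at every bias-update interval, with the physical force read back from the MD engine.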
Every replica is 20 nanoseconds long, so again, this is a trivially parallel approach. You have several trajectories, and among them you prune out all those for which the bias didn't switch off; you already do a down-selection in this way. Among the remaining ones, we observe that in roughly 5% of the replicas you get a final pose with an RMSD below 2 angstroms. So, among the trajectories for which the bias switched off, we do clustering, we take the representative of each cluster, and we use Scaled MD, in a similar way as in the first approach I was showing you, but with just one replica of 5 nanoseconds of Scaled MD, in order to test the system and see which pose is more stable than the others. This provided a performance which was either equal to or better than all the other scoring functions that we could test. So this is basically what we saw: this is the ranking obtained via the Scaled MD approach, and these are the systems we tested our approach on: a set of cholinesterases with donepezil and galantamine, several kinases, a few GPCRs, Src kinase again, and also a protein-peptide system, which was RAD51-BRCA2. As you can see, the RMSD of the best medoid is in column three, while the minimum observed RMSD is in column four. The latter tells us that during the molecular dynamics run you have conformations which are very, very close to the crystal one, but you have basically no way to figure out which they are; so these numbers are interesting but useless in a predictive approach, while the third column is what we actually got from our clustering plus Scaled MD approach. The computational effort, which you can see in the last column, is pretty interesting, because in most of the cases you have 20 independent runs lasting 20 nanoseconds each. 
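The pose down-selection step, picking a cluster representative before the short Scaled MD rescoring run, can be sketched as a simple medoid choice over a pairwise pose-RMSD matrix. The matrix below is hypothetical; the actual clustering protocol is richer than this single-cluster illustration.

```python
def medoid_index(rmsd_matrix):
    """Pick the medoid of a cluster of poses: the pose with the
    smallest total RMSD to all the other poses in the cluster."""
    sums = [sum(row) for row in rmsd_matrix]
    return sums.index(min(sums))

# Hypothetical symmetric 3x3 pairwise pose-RMSD matrix (angstrom).
rmsd = [
    [0.0, 1.0, 2.5],
    [1.0, 0.0, 1.8],
    [2.5, 1.8, 0.0],
]
best = medoid_index(rmsd)  # pose 1 sits closest to the others
```

The selected medoids are then each run through a short Scaled MD simulation, and the pose whose interaction survives longest is ranked as the most stable.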
So it is a pretty fast approach, especially with respect to the brute-force counterparts, which are indicated in red. This is how you perform the approach: via NanoShaper you identify the pockets, and the user selects the one which is the target site. Then NanoShaper locates the entrances of the binding site; in this case there were three. It clusters the normal vectors to the entrance, it positions the ligand at a given distance from the protein surface, and then it starts the simulation. One more thing: on top of the final pose, for the successful paths we also considered potential similarities with the unbiased brute-force simulations coming from D. E. Shaw Research. We did this for the β2-adrenergic receptor-alprenolol interaction, where we could observe that, regardless of the initial orientation of the ligand, all of them re-oriented before entering the binding site, and there was also an opening of the binding site enabling the entrance of the ligands. This occurred on a very different time scale, as you can see by comparing the red and blue figures in the upper row, but from the dynamical point of view they were pretty similar. The second case where we could compare our results with the brute-force simulation was the Src-PP1 system, where we observed that although the ligands started from different positions, and in spite of the fact that the pocket has a very large entrance, all of them passed through a very small part of the entrance. We observed afterwards that in that part there is a hydrophobic patch, which is probably where the ligands preferred to enter because of their chemical character. Finally, the MD-Binding approach seems to be able to accelerate protein-ligand recognition, and the paths also seem to be physically sound. 
This approach can be coupled with more accurate approaches that aim to estimate, for example, the free-energy surface, and it can be paired with the previously presented Pocketron approach, which is a tool for pocket analysis and identification. I would like to thank my collaborators at IIT, Prof. Andrea Cavalli, Dr. Andrea Spitaleri, Dr. Sergio Decherchi, and Dr. Marco De Vivo for the Pocketron approach. The tools that I have been describing to you today are implemented in the BiKi Life Sciences software, which is sold by BiKi Technologies, and I would like to thank you very much for your kind attention. Thank you, Walter. Yes, this was a very interesting presentation of these tools; I'm sure they are really useful for everyone who is doing modeling and simulations. I encourage everyone to post your questions in the... Can we have the last slide, Walter, please? Yeah, thanks. In the questions tab on the GoToWebinar control panel. I have one question: I was wondering about Scaled MD. What is the performance of the software? How much overhead does it introduce? The performance of a Scaled MD simulation is exactly the same as a plain molecular dynamics simulation, because you are just running a plain molecular dynamics simulation in a different ensemble, with a factor in front of the potential energy. From the point of view of performance, this does not affect things in any way. That's good. There is a question by Adam, actually. Can you hear us? Yes, I can hear you. You mentioned towards the end that the protocol can benefit from complementary tools like Pocketron. Of the various pieces of software and the various techniques that you've used, I wonder if you could comment on how they can be used together as part of a wider workflow. Have you thought about how these tools can be used with other pieces of software? Yeah.
Actually, while personally I am more into algorithmic development, once the algorithms are implemented and made available to users via the BiKi software, they have also been tested and coupled with other approaches. For example, in one case the cluster analysis concerning pockets was run over some targets, and the results were used to improve a virtual-screening protocol based on a classical molecular-docking tool. In that case the results were indeed improved, because the pocket conformations we identified were more prone to recognizing good ligands. Thank you. You're welcome. Thank you. So, we have a question by Olivier. Let's see if we can hear him. Olivier, can you hear us? Can you say something? Okay, maybe we don't have a good audio connection, so I'm going to read his question. Olivier is wondering how Scaled MD is implemented in GROMACS, and in what way the parameters for the potentials can be changed. So, we did a custom modification of the GROMACS software where we simply added the possibility to apply the scaling factor. The scaling factor is put in front of the potential energy of the system; there is no other force-field parameter which needs to be modified. So, apart from the fact that we needed to modify the GROMACS software, all the rest is trivial. And is this patched version available on your website? Yeah, you can send me an email; and since it was a modification of GROMACS, which is GPL, it is also GPL. Yes. And do you have support for other MD engines? At the moment, no, but people at BiKi are working on it. The point is that every software has its own implementation, and you need to figure out where to put your hands in order to add external forces. But there is a lot of work in progress in this direction. Yeah.
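[Editor's sketch] The modification described here amounts to multiplying the whole potential, and hence the forces, by a constant factor λ < 1. A toy Python illustration (a one-dimensional harmonic potential stands in for the full force field, and the numbers are arbitrary):

```python
LAM = 0.5  # scaling factor placed in front of the potential energy

def harmonic(x, k=1.0):
    """Toy potential V(x) = 0.5*k*x^2 and its force F = -dV/dx,
    standing in for the full force field."""
    return 0.5 * k * x**2, -k * x

def scaled(x, lam=LAM):
    """Scaled MD: V' = lam * V, so F' = -dV'/dx = lam * F.
    No other force-field parameter changes, which is why the cost
    per step is identical to plain MD."""
    v, f = harmonic(x)
    return lam * v, lam * f
```

Since only a multiplication is added per energy/force evaluation, the per-step overhead is negligible, consistent with the answer above.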
So, I encourage everybody to follow up with the developments on BiKi's website for other backend engines. Also, about Scaled MD: how does it compare with other approaches in terms of quickly exploring the surface? How efficient is it? You mean the molecular surface, or Scaled MD? Scaled MD. So, which surface, the energy surface? As I was mentioning at the beginning, this was meant to be a quick, hopefully not dirty, solution to the problem, so we make no claim that we are able to use it to explore the free-energy surface. The basic idea is this: when you do scaling, you lose detail on the interaction, because you are also scaling the details themselves. But if the bound state corresponds to a very deep funnel, then since this well is pretty deep, you should nevertheless be able to rank different ligands by reducing this well. Unfortunately, this loss of detail hinders a more refined usage, which would be needed in order to estimate free energies. So, we do not estimate free energies. I see. Okay, thank you. We're already past the hour, so I suggest we stop here, and I want to thank you again, Walter. Could you please show the last slide, actually, after this one? Yeah. So, for our audience, I want to let you know that on the 10th of May we are planning another webinar in our series, where you are welcome to register and hear a presentation by Andrew Proudfoot from Novartis. And with this, I'd like to finish and thank Walter for the great presentation. To our listeners, please visit the BiKi Technologies website to find out more about the different tools they have and to follow up with the developments. Thank you, Walter. Thanks, everyone. Bye. Bye.
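[Editor's sketch] The ranking argument in this answer can be made concrete with a small numeric sketch (the well depths, λ, and temperature are invented for illustration): scaling multiplies every well depth by the same λ, so a deeper well stays deeper and its Boltzmann escape weight stays smaller, even though the absolute gaps between ligands shrink.

```python
import math

KT = 0.593   # kT in kcal/mol at roughly 298 K
LAM = 0.4    # hypothetical scaling factor

def escape_weight(depth_kcal, lam=LAM, kt=KT):
    """Boltzmann weight for leaving a well whose depth has been
    scaled by lam; smaller weight means a more stable bound pose."""
    return math.exp(-lam * depth_kcal / kt)

deep, shallow = 10.0, 4.0   # invented unscaled well depths, kcal/mol
# Relative ranking of ligand stability is preserved under scaling,
# which is what makes unbinding-based pose ranking possible even
# though absolute free energies are lost.
more_stable = deep if escape_weight(deep) < escape_weight(shallow) else shallow
```

This also reflects the caveat in the answer: the scaled depths are no longer quantitative, so the approach ranks poses rather than estimating free energies.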