Hi everyone. I'm Attilio Vargiu, from the University of Cagliari in Italy, and today I'm going to speak about how to use enhanced sampling methods to improve the predictive power and accuracy of molecular docking, and so to describe in silico molecular recognition events. Here is a list of a few references I suggest you take a look at if you want to understand the major challenges in the field and the current developments in molecular docking, virtual screening, and related approaches. The outline of my talk is as follows: I will briefly introduce molecular recognition and the theories that have been put forward over the years to explain and rationalize molecular recognition events. Then I will focus on molecular docking, which is one of the most widely used techniques to investigate molecular recognition in silico, and on how enhanced sampling methods can improve it, before moving to a few case studies. In a molecular recognition event, a ligand approaches a protein and forms a complex in which it is stabilized by interactions with the protein. Molecular recognition is essential to virtually all processes in living matter, and it is also involved in pathogenic processes.
Indeed, most drugs are small molecules that bind to a protein and modulate its activity, so molecular recognition is also at the heart of rational drug design. The first theory put forward to rationalize molecular recognition was the lock-and-key theory, originally formulated for enzymes and their substrates and later extended to other kinds of complexes, such as protein-protein and protein-nucleic acid interactions. In the lock-and-key picture, the ligand is the key and the protein binding site is the lock, and the two partners recognize each other without any significant conformational change. Binding is therefore based on the shape complementarity between the ligand and the protein: the ligand has a preformed conformation that fits the shape of the binding site, so that only the right key can bind to the keyhole, that is, the binding site in the protein, that is the lock. And another feature of the lock-and-key model is that the binding is already associated with a stabilization of the protein configuration. Clearly, the lock-and-key model cannot explain binding events that involve conformational changes in one or both partners. For this reason another model, the induced fit model, was proposed by Koshland in 1958. In this model the ligand can approach the binding site in a non-optimal orientation, and the binding itself induces conformational changes in the protein: the binding site adapts its shape to the ligand in order to optimize the interactions between the two partners.
So in the induced fit model the final conformation of the complex does not pre-exist in the free protein: it is generated by the interactions with the ligand. In other words, there is a state, call it B, that is stabilized upon binding with respect to the state A that is preferred in the absence of the ligand, and in the induced fit picture it is the ligand itself that drives the protein from A to B. The problem with the induced fit theory is that the conformational changes it describes are essentially localized at the binding interface. So a further theory has been developed that aims to explain also binding events that involve large-scale conformational changes in the protein or in both partners. This theory is called the conformational selection theory, or selective binding theory, and derives from the free energy landscape theory of proteins. Briefly, one of the main concepts within this theory is that proteins are inherently dynamic also in the native state at equilibrium. So they oscillate around the native state, but they can also assume conformations that correspond to states different from the most likely one, and we then have an ensemble of states with different population distributions. The important point in the conformational selection theory is that, among the states sampled by the protein in the absence of any ligand, there is also the state that corresponds to the conformation of the bound protein, that is, the conformation assumed by the protein in the complex. Let's call these two states A and B, and suppose that A is the state associated with the unbound protein and B is the state associated with the bound protein. The ligand will bind to B even if B is not the preferred conformation in the absence of a high concentration of ligand. Ok, you can see this in the free energy sketch on the left side of the slide.
B is higher in the free energy landscape than A. The binding will then cause a population shift towards the bound receptor structure, and this in turn will cause a redistribution that of course obeys the laws of statistical mechanics: the underlying free energy landscape is reshaped as a consequence of the increase in the percentage of bound-like structures in the pool of protein conformations. Now, it has been verified experimentally that all the distinct conceptual models we discussed exist, and in particular that they can coexist in some molecular recognition events. So there has been a huge effort to unify the description of molecular recognition using all of these models, and one way to do this could be the following. Suppose that the first binding contact between a ligand and a protein is a non-specific, non-optimal binding. In order to be so, it will not induce large conformational changes in the partners, so we can say that approximately this is a kind of lock-and-key binding event, and it can be dominated by the solvent gain due to the removal of water molecules at the interface between the protein and the ligand. But then, once the interface has been formed, the side chains and also the functional groups of the ligand can change conformation: they can rearrange in order to optimize the contacts, and this means that we are seeing an induced fit event. Then the formation of the complex will reshape the free energy profile and will alter the free energy barriers separating the different states, and also the relative stability of those states, forcing an increase in the population of the states associated with binding. So the bound state will become more probable, and we have a population redistribution, which is exactly the phenomenon we just discussed in the conformational selection theory.
Ok, now that we have discussed the theoretical framework of molecular recognition, let's start to see how molecular recognition can be mimicked in a computer. As I said, molecular docking is a tool used in labs all over the world to investigate molecular recognition in silico; it is probably the most used methodology to investigate these kinds of events in silico. Molecular docking relies on a description of the structural models of the ligand and the protein: that means we should have the structures of both partners and also a description of their physico-chemical properties, in order to describe the interactions they can form with each other. So, suppose we have a ligand and a protein; we also call the protein the receptor. The next step is to identify a putative binding site on the receptor. Once we have identified a binding site, which can be a region of the protein but also the whole protein, we try to dock the ligand into the binding site in order to generate the so-called docking poses, which are putative structures of the stable complex formed by the ligand and the protein. Each pose is associated with a docking score that ultimately should mimic, as far as possible, the free energy of binding of that pose. And of course, we expect that a reliable docking algorithm will be able, first, to find real structures of the complex, and not only this, but also to rank them on top. Indeed, even if we find a real structure of the complex, if this structure is ranked as 100 or 150, then probably we will lose it while trying to refine the docking poses. Now, you can imagine how many degrees of freedom are involved in a typical docking run.
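To make the ranking idea concrete, here is a minimal sketch of how one might ask whether a near-native pose survives at the top of a score-sorted list. The pose records, the 2 Å RMSD cutoff, and the function names are all illustrative, not part of any real docking code.

```python
# Toy sketch of docking-pose ranking: each pose carries a score
# (a proxy for the binding free energy; lower = better) and an RMSD
# to the experimental complex. All names and values are illustrative.

def rank_poses(poses):
    """Sort poses by docking score, best (lowest) first."""
    return sorted(poses, key=lambda p: p["score"])

def native_like_in_top(poses, n=10, rmsd_cutoff=2.0):
    """True if a near-native pose (RMSD < cutoff, in Angstrom)
    appears among the n best-scored poses."""
    top = rank_poses(poses)[:n]
    return any(p["rmsd"] < rmsd_cutoff for p in top)

poses = [
    {"score": -9.1, "rmsd": 1.2},   # near-native and well ranked
    {"score": -8.4, "rmsd": 6.5},   # decoy poses, worse scores
    {"score": -7.9, "rmsd": 8.0},
]
print(native_like_in_top(poses, n=1))  # → True
```

The point of the sketch is the failure mode described above: if the near-native pose had a poor score, `native_like_in_top` would return False even though the docking engine "found" the right structure.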
So, we have to explore all the rototranslational degrees of freedom associated with the mutual orientation of the ligand and the protein, and more than that, we have to account for the flexibility of both partners, in particular the flexibility of the protein, which generally has a very large number of atoms. So, you understand that it is impossible to exhaustively explore in a systematic way the phase space associated with the formation of the complex. In order to address this issue, several algorithms of increasing complexity and computational cost have been developed over the years, and there are really many, which can be classified in many ways. Here I will use a recent classification by Antunes and coworkers, which classifies methods according to the way they treat flexibility: partially or fully, and implicitly versus explicitly. We have soft docking approaches, as you can see here, in which basically you reduce the van der Waals radii of the atoms in the binding site of the protein. You have flexible side-chain docking approaches, in which you allow selected side chains of the protein to change conformation during docking. Then we have ensemble docking algorithms, in which you can use multiple conformations of the protein in order to account implicitly for its flexibility, and on-the-fly docking, in which you change both the conformation of the ligand and that of the protein during the docking run. In the last years, ensemble docking has gained attention, and it is actually one of the most used tools today to account for receptor flexibility in docking. In the simplest implementation of ensemble docking, one generates different conformations of the protein, and these different conformations are used to account for different shapes and volumes of the putative binding site, as you can see in the picture on the right side of the slide. Now, it is desirable to obtain conformations that are different from those present in the PDB database. Why?
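The simplest ensemble-docking loop just described can be sketched as follows; `dock` is a toy stand-in for a real docking engine, and the conformation names, the "strain" field, and the scores are all hypothetical.

```python
# Ensemble-docking sketch: dock one ligand against several receptor
# conformations and keep the best (lowest) score over the ensemble.
# `dock` here is a toy placeholder, not a real scoring function.

def dock(ligand, receptor_conformation):
    """Pretend docking: returns a score (lower = better)."""
    # toy scoring: each conformation fits the ligand differently
    return receptor_conformation["strain"] - ligand["complementarity"]

def ensemble_dock(ligand, ensemble):
    """Return the best score over the ensemble and the winning conformation."""
    results = [(dock(ligand, conf), conf["name"]) for conf in ensemble]
    return min(results)

ensemble = [
    {"name": "open",   "strain": 3.0},
    {"name": "closed", "strain": 1.0},   # holo-like shape fits best here
]
ligand = {"complementarity": 5.0}
print(ensemble_dock(ligand, ensemble))  # → (-4.0, 'closed')
```

The design choice this illustrates is exactly the implicit treatment of flexibility: the protein never moves during a docking run, but the loop over pre-generated conformations lets a holo-like shape win when it fits the ligand better.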
Today we have several thousands of structures in the database and the rate of deposition is constantly increasing, but still there are problems. First, rigid targets are overrepresented, because of the historical prevalence of X-ray methods to derive protein conformations. Second, compared to the druggable genome, the number of structures in the PDB database is still low. And we have a kind of redundancy concerning the chemotypes of the ligands, because it often happens that we have structures of proteins with ligands belonging to very similar families; so if we have to search for a different chemotype that can act on that protein, we are biased in some way. So, deriving conformations from computer simulations is desirable, but it is a challenge. Why? Because, as you can see from this graph, the extensive sampling of conformational space in simulations is very expensive. In particular, the processes generally associated with ligand binding, for instance folding events of some stretches of the protein, or the reorientation of particular side chains that are buried in the protein, take microseconds to seconds to occur. And so you can understand that in simulations these processes can be hard to see. In other words, if you have a conformation of your protein that represents the unbound state for the particular ligand, or pool of ligands, you are interested in, and the free energy barrier associated with the transition from this conformation to the bound conformation is very high, then you have problems in sampling the bound conformation.
Just to give you an example: suppose you have a barrier of 10 kcal per mole, which is associated with typical events occurring on the time scale of a few milliseconds, and suppose that you are able to perform simulations at the speed of one microsecond per day, which is actually state of the art, or even a lot, depending on the size of your system. Well, you can hope to see such events by simulating your system for at least three years. So you can understand that this is not feasible. And unfortunately, as I said, these processes are important, whether they concern local rearrangements, as in this example in which you see side chains changing conformation in order to accommodate the ligand that you see in the binding site, or large-scale conformational changes, such as domain hinge-bending or shear motions, unfolding of protein segments that are in contact with the ligand, refolding, and so on. And as I said, neglecting these processes often leads to poor docking results. So how to cope with this? In the last decades, a lot of enhanced sampling methods have been developed that aim to improve conformational sampling, and in particular the sampling of states other than those associated with the unbound conformations of the protein we are interested in. There are many classes of methods, and I put this slide here just for your reference; today we are going to see applications of three different methods, namely replica-exchange molecular dynamics, accelerated molecular dynamics, and the tCONCOORD approach. Let's start with the tCONCOORD approach, which was developed in the lab of Bert de Groot, and these are the key references you can have a look at if you want to understand the method deeply.
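The arithmetic behind this estimate can be written down in a few lines. The effective attempt frequency `k0` below is an assumption on my part (collective protein motions are far slower than the transition-state-theory prefactor kB*T/h), so the exact number of years depends on it; the point is only the order of magnitude.

```python
import math

# Back-of-the-envelope barrier-crossing estimate: mean waiting time
# tau = exp(dG / RT) / k0, and the wall-clock time needed to observe
# one event when simulating 1 microsecond of dynamics per day.
# k0 is an assumed effective attempt frequency, not a measured value.

R = 1.987e-3                 # gas constant, kcal/(mol K)
T = 300.0                    # temperature, K
k0 = 1.0e10                  # assumed effective attempt frequency, 1/s
barrier = 10.0               # free energy barrier, kcal/mol

tau = math.exp(barrier / (R * T)) / k0    # mean waiting time, seconds
days = tau / 1e-6                         # days of MD at 1 us of MD per day
print(f"waiting time ~{tau * 1e3:.1f} ms -> ~{days / 365:.1f} years of MD")
```

With these assumed numbers the waiting time comes out in the millisecond range and the required wall-clock time in the range of a few years, consistent with the estimate quoted above.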
So basically the method is based on a previous method called CONCOORD, in which the interactions present in protein structures are translated into a set of geometrical constraints that can be compared to the building plan of the protein. In particular, the authors identified two main classes of interactions: the topological ones, which include covalent bonds, angles, and so on, and the non-covalent interactions, such as H bonds, hydrophobic interactions, and so on. These interactions are actually represented by distances between atoms: basically, you take a protein, you calculate the matrix of the distances between all atoms in the protein, and you define upper and lower limits for each distance. The limits are based on the kind of interaction, so the topological interactions will generally have tighter limits, with lower variance in their distances, and they also depend on the distance between the atoms involved. So you define upper and lower limits, and you can see here, for instance, a construction plan referring to the H bonds present in the protein: in blue you see the backbone H bonds, where you can recognize the two helices, I guess, and the other colors refer to the backbone-side chain and side chain-side chain H bonds. Once you have defined your restraints, you can use several techniques to generate coordinates; the authors in particular use the generation of random structures, and then you impose on those coordinates the satisfaction of all the restraints defined in your building plan. In this way, you can generate a pool of structures that should represent oscillations of the protein around the native state. And the next step, which was implemented in tCONCOORD, was to sample conformational transitions between two states, for instance between the apo state associated with the unbound protein and the holo state associated with the bound protein.
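The generate-then-correct idea can be sketched in miniature with a distance-geometry loop: start from randomly perturbed coordinates and iteratively push every pairwise distance back inside its [lower, upper] bounds. This is only a minimal illustration of the principle, assuming a toy three-atom "protein" with ±10% limits, not the actual CONCOORD algorithm or its bound definitions.

```python
import numpy as np

# Minimal distance-geometry sketch in the spirit of CONCOORD:
# random start coordinates are iteratively corrected until all
# pairwise distances satisfy bounds derived from a reference
# geometry. Three "atoms" and +/-10% limits are toy choices.

rng = np.random.default_rng(0)

ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
bounds = {}                       # (i, j) -> (lower, upper)
for i in range(3):
    for j in range(i + 1, 3):
        d = np.linalg.norm(ref[i] - ref[j])
        bounds[(i, j)] = (0.9 * d, 1.1 * d)   # tight "topological" limits

x = ref + rng.normal(scale=0.5, size=ref.shape)   # random start structure

for _ in range(500):              # iterative correction passes
    ok = True
    for (i, j), (lo, hi) in bounds.items():
        v = x[j] - x[i]
        d = np.linalg.norm(v)
        if d < lo or d > hi:
            ok = False
            target = np.clip(d, lo, hi)
            shift = 0.5 * (d - target) / d * v
            x[i] += shift         # move both atoms symmetrically
            x[j] -= shift
    if ok:                        # a full pass with no violations
        break

print(ok)  # → True once every distance satisfies its bounds
```

Each correction moves the two atoms symmetrically along their connecting vector, so a single pair is fixed exactly; repeating the passes lets mutually coupled pairs relax to a consistent structure, which is the sense in which random coordinates are "forced" to satisfy the building plan.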
A key observation used to develop tCONCOORD was that the structural transitions observed in the structural databases always involve the opening of one or more H bonds. One can then analyze these transitions and predict where the unstable H bonds are by estimating a solvation score on the sites around each H bond. Why? Because it has been shown that the wetter the site, the more likely it is for a water molecule to attack the H bond formed between atoms of the protein, and therefore the less stable the H bond will be. So, by calculating the solvation score, we can classify the H bonds as stable or unstable according to whether the score falls below or above a threshold that we can fix and validate. This then allows us to identify the structural hinges in our system. And so, you can see here, as red sticks, the H bonds that are predicted to be unstable in the construction plan shown in the previous slide. Those are the points at which we will not put any restraints; and if we don't put any restraints there, the coordinates of those atoms need not satisfy any distance restraints between each other, and we can hope to see larger conformational changes at those places. Then we can sample the conformational transitions and analyze them, for instance using essential dynamics, that is, analyzing the trajectory and finding the global motions associated with protein flexibility. Here, you see the conformational transitions sampled by tCONCOORD as a function of the projection of the trajectory on the first two eigenvectors derived from the covariance matrix of the trajectory. You can see that we almost sample the full conformational transition between the open form, the apo that you see on the right side, and a closed conformation of the protein.
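The essential-dynamics projection just mentioned is easy to sketch: diagonalize the covariance matrix of the trajectory coordinates and project each frame onto the two eigenvectors with the largest eigenvalues. The trajectory below is synthetic toy data (random coordinates plus one slow drift), not a real protein trajectory.

```python
import numpy as np

# Essential-dynamics sketch: PCA on a coordinate trajectory, then
# projection of each frame onto the first two eigenvectors of the
# covariance matrix. The "trajectory" here is synthetic toy data.

rng = np.random.default_rng(1)
n_frames, n_coords = 200, 12               # e.g. 4 atoms x 3 coordinates
traj = rng.normal(size=(n_frames, n_coords))
traj[:, 0] += np.linspace(0.0, 5.0, n_frames)   # one slow collective drift

mean = traj.mean(axis=0)
cov = np.cov((traj - mean).T)              # coordinate covariance matrix
evals, evecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
order = np.argsort(evals)[::-1]            # re-sort descending
pc = evecs[:, order[:2]]                   # first two eigenvectors

proj = (traj - mean) @ pc                  # (n_frames, 2) projection
print(proj.shape)  # → (200, 2)
```

Plots like the one on the slide are exactly scatter plots of these two projection columns, one point per frame, so an almost-complete apo-to-closed transition shows up as a path connecting two clusters.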
You can already see an interesting point in this slide, which we will discuss in a while: if you add further experimental information, namely the fact that this protein has been crystallized in the presence of an inhibitor, you can focus the sampling on a more interesting region. So, indeed, the sampling of apo-to-holo conformational transitions can be further enhanced by the implementation of other kinds of experimental information into the set of restraints. In the example here, you see on the left side the open structure of the protein, with four distances highlighted by red lines; these four distances are also restrained, and imposing these restraints allows the sampling of the conformational transition towards the closed holo conformation that is shown in cyan in the middle of the slide. So, the method ensures a sampling of functional motions: in other words, going from apo to holo means you are sampling functional motions of the protein, and you can see that if you use tCONCOORD in the presence of an experimental bias, you actually have a much better sampling. You can see this in the graph in the lower part of the slide, comparing the red dots, which correspond to an unbiased application of the algorithm, and the black dots, which use experimental information to focus the sampling in the direction you are interested in. Now, Seeliger and de Groot in 2010 applied the tCONCOORD algorithm to sample conformational changes associated with ligand binding in a set of ten proteins undergoing medium to large conformational changes upon binding. You can see in the last two columns that the RMSD, calculated either on the backbone or on the binding site of the protein, spans values going from 2 to 7 angstroms, so very large conformational changes upon binding.
And in addition to using tCONCOORD, in this work the authors exploited additional information, in the form of a bias in the sampling towards values of the radius of gyration compatible with the closed conformations of the protein that are associated with ligand binding. So there is a bias in the generation of coordinates that derives from experimental information. Here is a sketch of the automated workflow set up by the authors. Basically, you have an iterative refinement in which you first generate conformations, then you dock the ligands using AutoDock Vina, and in a second refinement you improve the description of the binding using RosettaLigand. So, without going into further details of the protocol, this picture shows the 5,000 docking poses obtained for each of the ten systems, as a function of the ligand RMSD with respect to the true experimental structure and of the score; the lower the score, the better the ranking of the pose, I guess. And you can see here that in 9 out of 10 cases the holo-like receptor conformation was predicted using this methodology, and in 8 out of 10 cases the native-like pose was identified; only in two cases, the systems labelled Algi and Gluko, was the native-like pose not identified among the top 100 poses. So, overall, the method proved suitable to generate apo-to-holo conformational transitions in a set of different proteins, and to improve the description of molecular recognition events in the same targets. The second case study we will see today is an application of the so-called replica-exchange molecular dynamics simulations, and also in this case I list here the key references to understand what we are going to speak about. The method works by performing multiple simulations, also called replicas, of the same system at different temperatures.
And at constant time intervals, you basically try to swap the coordinates of two adjacent replicas in order to improve the conformational sampling. You can see this explained very well in this graph, which I took from a recent review. In a typical setup, what you do is the following: you set the lowest temperature to the temperature of the ensemble you want to sample, and you set the highest temperature in a way that avoids disrupting the system you are investigating but at the same time allows for easy crossing of free energy barriers. If you do so, you can see here, highlighted by black lines, the wandering of the replicas among the different temperatures. In this graph you have more than 50 replicas, each of them at a different temperature, and you can see that if you are able to exchange the replicas at a good rate, you sample the temperature space in a continuous way, and you produce a good mixing of the conformations. Now, just a few words on the way the exchanges are made. Suppose that the probability distribution of the potential energy of each replica is roughly Gaussian. If you choose the temperatures in a proper way, and that simply means that you should allow two adjacent replicas to talk to each other, so that replica i is able to assume conformations that are also assumed by replica i+1, then the energy distributions of the adjacent replicas will overlap. And in this case, you are sure that you can get an efficient exchange between the adjacent replicas by implementing a Metropolis-like criterion for the exchange. And in particular, this is the criterion that one can implement in order to exchange two different replicas.
So if you have two states x and x′, basically you accept the exchange outright if the potential associated with x′, the configuration at the higher temperature, is lower than the potential of the configuration x at the lower temperature, because in that case the exchange is automatically energetically convenient. Ok, if this is not the case, then you decide the exchange by comparing a random number with the exponential of the delta factor reported in the equation at the top of the slide. By implementing this criterion, the important thing is that you allow the replicas at high temperature to flow down to lower temperatures, and this in turn allows the replicas running at low temperature to explore conformations that would never have been explored in standard molecular dynamics simulations. And another important point about replica-exchange molecular dynamics is that, while the trajectories are of course discontinuous in temperature, we still achieve Boltzmann sampling because we are implementing detailed balance. Actually, even this is not strictly necessary, because it has recently been shown that detailed balance is not needed to achieve Boltzmann sampling. Now, Osguthorpe and co-workers used replica-exchange molecular dynamics simulations to investigate binding to three different targets: HIV-1 protease, cyclin-dependent kinase 2, and the androgen receptor. They compared the performance in ensemble docking of three different sets of conformations: one set obtained by collecting the crystal structures available for the three targets, another pool of conformations generated by performing a relatively short standard molecular dynamics simulation, and a third pool generated by performing replica-exchange molecular dynamics simulations.
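The exchange criterion just described can be sketched in a few lines. This is the standard Metropolis-like acceptance rule for temperature replica exchange; the temperatures and toy energies in the example are illustrative.

```python
import math
import random

# Replica-exchange acceptance sketch. For adjacent replicas i (colder)
# and j (hotter), with inverse temperatures beta_i > beta_j and current
# potential energies E_i, E_j, the swap is accepted with probability
# min(1, exp(-delta)), where delta = (beta_i - beta_j) * (E_j - E_i).

def accept_swap(beta_i, beta_j, e_i, e_j, rng=random.random):
    delta = (beta_i - beta_j) * (e_j - e_i)
    if delta <= 0.0:          # the hotter replica holds the lower energy:
        return True           # the swap is automatically accepted
    return rng() < math.exp(-delta)

kB = 1.987e-3                 # kcal/(mol K)
beta = lambda temp: 1.0 / (kB * temp)

# the hot replica happens to sit lower in energy -> automatic accept
print(accept_swap(beta(300.0), beta(320.0), e_i=-100.0, e_j=-105.0))  # → True
```

This is exactly the behavior described above: a downhill swap (the hotter replica carrying the lower energy) is always accepted, and uphill swaps are accepted stochastically, which is what lets hot replicas flow down through the temperature ladder.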
And they used the conformations generated in these ways to investigate the performance in virtual screening experiments. Here are the details of the study. Basically, you have a lot of structures coming from X-ray crystallography, and from standard MD, from which of course you can take as many structures as you want, and the same is true for replica-exchange molecular dynamics. Now, you generate a lot of structures, but when you go to perform ensemble docking, the ideal setting would be to use the lowest possible number of structures while keeping the description of the variations in the shape of the binding site as rich as possible. And the way the authors achieved this was to calculate the so-called matrix of pairwise normalized volume overlaps at the binding site. How do you do this? Basically, you have the binding site in a conformation A, and you can calculate the volume associated with the shape assumed by the binding site in that conformation. You can do the same for a conformation B of the same binding site; then, after aligning the two structures, you calculate the volume of the overlap, basically the intersection between the two binding sites, and also the sum of the two volumes, and you can define the volume overlap using this formula, which basically tells you that if this parameter has a value of one you have a perfect overlap, and if it has a value of zero you don't have any overlap. So, performing a cluster analysis on this parameter allowed the authors to come out with only four clusters for each system and each structure-generation protocol. Before showing you the results from docking and virtual screening, I just wanted to show you this picture, in which you can appreciate how replica-exchange molecular dynamics, as expected, increased the conformational sampling at the binding site.
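One simple way to realize this overlap measure is on a voxel grid, with each binding-site shape stored as a boolean occupancy array. The normalization below, O = 2·V(A∩B)/(V(A)+V(B)), gives 1 for identical shapes and 0 for disjoint ones, matching the behavior described above; the exact formula used by the authors may differ, and the spherical "sites" are toy shapes.

```python
import numpy as np

# Pairwise normalized volume overlap between two binding-site shapes,
# represented as boolean occupancy grids (True = voxel inside the site).
# O = 2 * V(A and B) / (V(A) + V(B)); grids and spheres are toy data.

def volume_overlap(a, b):
    inter = np.logical_and(a, b).sum()      # intersection volume (voxels)
    return 2.0 * inter / (a.sum() + b.sum())

# toy shapes: two overlapping spheres on a 20^3 lattice
g = np.indices((20, 20, 20)).transpose(1, 2, 3, 0)
site_a = np.linalg.norm(g - np.array([8, 10, 10]), axis=-1) < 5
site_b = np.linalg.norm(g - np.array([11, 10, 10]), axis=-1) < 5

print(round(volume_overlap(site_a, site_a), 2))   # → 1.0, identical shapes
print(0.0 < volume_overlap(site_a, site_b) < 1.0)  # → True, partial overlap
```

Filling a matrix of these values for every pair of conformations, and then clustering on it, is the step that reduces the large structure pools to the four representative clusters per system.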
In the graphs you see a distance, chosen by the authors as representative of the oscillations of the binding site, and you can see that the red trace, representing standard MD, has much lower oscillations than the same distance calculated in the replica-exchange molecular dynamics simulations, shown in green. However, with respect to X-ray, the results are system dependent, and in particular in the case of the androgen receptor the X-ray clusters feature the highest shape diversity. In particular, in the top row, the X-ray column, you can see that there is a white lobe in the upper left side of the picture that was never found either by MD or by replica-exchange MD simulations. But MD is also able, in some cases, to be superior to the conformational diversity generated by X-ray crystallography, and this is the case, for instance, for HIV-1 protease, for which the MD clusters feature the highest shape diversity among all the clusters found for this system. Then, once you have the clusters, you can perform docking and virtual screening; in particular, performing virtual screening basically means taking a large pool of ligands and trying to see which of them bind with high affinity to your target. You can use the so-called DUD data set, the Directory of Useful Decoys, to provide actives and decoy ligands. This is a very famous data set, and other data sets have been developed over the years, because it includes active compounds and also decoys that are not active. And so, by docking active and inactive compounds, you can basically see how well your protocol is able to rank the active compounds on top, and thus to recognize them as true active compounds for the target you are interested in. Here are some details of the protocol implemented by Osguthorpe and co-workers. In particular, they chose as the parameter to evaluate the performance of the virtual screening the so-called enrichment factor, that is, the ratio between the fraction of active compounds in the top percentage of the poses and the fraction of actives in the whole data set.
For instance, EF1% is the number of actives in the top 1% of the poses coming out of the virtual screening experiment, divided by the number of compounds in that first percent, and then divided by the ratio between the total number of actives and the total number of compounds in the data set. To appreciate why this enrichment factor is important, you can compare, in the graph on the right side of the slide, the ideal curve for a data set with 10% active compounds with a random selection. You can see that if you have 10% active compounds in the data set, in the ideal setting those 10% actives should come out as the first 10% among all the screened compounds. A typical enrichment curve instead lies between the ideal curve and the random trial, in which basically you don't have any indication of which compounds are active with respect to the decoys. Now, the authors used the enrichment factor in particular to evaluate how much using an ensemble of receptor structures improves virtual screening. And in order to do this, you basically compare the performance obtained using the ensemble with the average performance obtained using each of the structures composing the ensemble separately. X-ray succeeded in all cases: you can see, for instance, that for the androgen receptor you have 39 active ligands in the top 1% using the ensemble, while if you repeat the same experiment on the four structures that compose the ensemble and take the average, you get a lower number; the same is true for the enrichment factor, and this holds for all the systems, as you can see from the numbers reported in this table. REMD also succeeded in all cases except the androgen receptor, and if you remember, the androgen receptor was the system for which REMD was not able to generate a shape diversity as good as that of the X-ray structures. Standard MD featured the worst performance overall.
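A small numeric check of the enrichment-factor definition just given; the scores and activity labels are invented, arranged so that all actives sit at the top of the ranking, which reproduces the ideal-curve limit.

```python
# Enrichment-factor sketch: EF_x% = (actives in the top x% of the
# ranked list / compounds in that top x%) divided by
# (total actives / total compounds). Scores and labels are toy data.

def enrichment_factor(scores, is_active, fraction=0.01):
    ranked = sorted(zip(scores, is_active))      # best (lowest) score first
    n_top = max(1, int(len(ranked) * fraction))
    hit_rate_top = sum(active for _, active in ranked[:n_top]) / n_top
    hit_rate_all = sum(is_active) / len(is_active)
    return hit_rate_top / hit_rate_all

# 1000 compounds, 100 actives, actives given (unrealistically) the best scores
scores = [float(i) for i in range(1000)]         # lower = better
labels = [1] * 100 + [0] * 900                   # all actives ranked on top
print(enrichment_factor(scores, labels, 0.01))   # → 10.0, the ideal EF1% here
```

With 10% actives in the data set, the ideal EF1% is 1.0/0.1 = 10, and a random ranking gives an EF near 1, which is exactly the span between the ideal and random curves on the slide.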
So the authors show also in this case that using an enhanced sampling technique actually improves the performance of docking and virtual screening. The third case we will see today is an application of so-called accelerated molecular dynamics (aMD), an enhanced sampling technique implemented in the McCammon lab; we are going to look at an application published a few years ago. In accelerated MD you add a non-negative boost to the potential energy surface, but only when the system is sampling potential energies lower than a threshold E. In this way you only modify the potential below the threshold, and if you tailor your parameters properly you obtain a continuous modified potential: as you see in the equation, it is equal to the original potential above the threshold, and to the original potential plus the boost below the threshold. This allows the system to escape more easily from basins in the potential energy profile. How does this translate into improved conformational sampling? Well, by making the minima shallower you accelerate the rate of escape from each minimum by a factor that turns out to be equal to the exponential of the boost potential, and the effective simulated time reached in your simulation is increased by a boost factor that is the average of this exponential. Two flavors of accelerated MD were implemented, providing two different acceleration levels. In the so-called single boost, or dihedral boost, approach you apply a boost potential only to the dihedral angles, with the formulas you see in the slide, while in the double boost, or dual boost, version you apply both a potential to the dihedral angles and another potential to all the atoms of your system.
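The modified potential described above can be written down explicitly. The boost below is the standard aMD form, ΔV(r) = (E − V(r))² / (α + E − V(r)), applied only when V(r) < E, which joins the original potential continuously at the threshold; the numerical values in the example are arbitrary.

```python
import math

def amd_potential(v, e_threshold, alpha):
    """Accelerated-MD modified potential: unchanged above the threshold E,
    V + (E - V)^2 / (alpha + E - V) below it (the standard aMD boost form).
    """
    if v >= e_threshold:
        return v
    boost = (e_threshold - v) ** 2 / (alpha + e_threshold - v)
    return v + boost

def reweight_factor(v, e_threshold, alpha, kT=0.596):
    """exp(deltaV / kT): the factor by which escape from a basin is
    accelerated, and by which each frame is reweighted afterwards.
    kT is in kcal/mol at roughly 300 K."""
    dv = amd_potential(v, e_threshold, alpha) - v
    return math.exp(dv / kT)

# Above the threshold the potential is untouched; below it, it is lifted.
print(amd_potential(12.0, e_threshold=10.0, alpha=4.0))  # 12.0 (unmodified)
print(amd_potential(4.0, e_threshold=10.0, alpha=4.0))   # 4 + 36/10 = 7.6
```

Note how α controls the smoothness: a large α flattens the boost and keeps the modified surface close to the original, while a small α aggressively fills in the basins.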
You see that in the formulas there are averages used to define the energy threshold above which no boost potential is applied; these averages are calculated from short unbiased molecular dynamics simulations of the system of interest. The authors applied accelerated MD to generate the conformational transitions that accompany the binding of maltose to the maltose-binding protein. As you can see from the picture, a very large conformational change occurs in this protein upon binding of maltose, and the authors performed microsecond-long molecular dynamics simulations comparing classical, conventional MD to single and double boost accelerated MD. They performed a volumetric cluster analysis of the conformations generated, using the volume of the binding site, and then they performed docking. Before analyzing the docking results, let's see if and how accelerated MD was able to improve the conformational ensemble of this system. You can see here the comparison between conventional MD and single boost aMD, both in terms of RMSD from the holo conformation of the protein and in terms of the conformations associated with the two pseudo-angles shown on the right side of the slide; in particular, the pseudo free-energy basin projected on these two coordinates was expanded in the single boost accelerated MD. Even so, the holo-like conformation was not reached even by a microsecond-long simulation. So the authors extended the single boost simulation by half a microsecond, and in this case they were able to sample holo-like conformations of the protein; the same was achieved with the double boost accelerated MD. This improved conformational sampling of the protein translated into improved docking.
Indeed, you can see here that by performing a cluster analysis with the number of clusters set to 10, the second cluster, featuring an RMSD of the binding site lower than 2 Å, was able to generate binding poses with a ligand RMSD lower than 2 Å. So also in this case, enhanced sampling techniques were used successfully to generate conformations similar to the bound conformation of the protein, and this translated into improved accuracy of the docking protocols. Okay, this is the last slide of the first part of my talk, and I will see you in a while for the second part.
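The 2 Å ligand RMSD used here is the conventional success criterion for pose prediction. A minimal sketch of that check, assuming the docked and reference poses share the same atom ordering and already sit in the receptor's frame (so no alignment is needed):

```python
import math

def ligand_rmsd(pose_a, pose_b):
    """RMSD between two poses given as lists of (x, y, z) coordinates with
    identical atom ordering. No superposition is applied: docking is done in
    the receptor frame, so both poses share a common reference."""
    assert len(pose_a) == len(pose_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
    return math.sqrt(sq / len(pose_a))

def docking_success(pose, reference, cutoff=2.0):
    """Conventional success criterion: ligand RMSD below 2 angstrom."""
    return ligand_rmsd(pose, reference) < cutoff

# Toy two-atom ligand, each atom displaced by 0.5 A from the crystal pose.
crystal = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
docked = [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(ligand_rmsd(docked, crystal))      # 0.5
print(docking_success(docked, crystal))  # True
```

A production workflow would also handle symmetry-equivalent atoms, which a plain coordinate-wise RMSD like this one ignores.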