So, welcome everybody. Today we have BioExcel webinar number 67, and with us is João Teixeira from Utrecht University. He will speak about "Introducing HADDOCK3: Enabling Modular Integrative Modelling Pipelines". I will host this webinar. I'm Alessandra Villa from the KTH Royal Institute of Technology, and with me is Stefan Farr from the University of Edinburgh. João is a structural biologist, and he has experience in multi-domain protein dynamics using paramagnetic NMR and integrative structural biology methods. He did his PhD at CERM in Italy, and then he moved first to Barcelona and then to Toronto, where he moved not only physically, but also from the wet lab to the dry lab, following his passion for software development and software architecture. In July 2021 he joined Alexandre Bonvin's lab and started to work on developing HADDOCK3. In addition, he is passionate about Python programming and best practices in open-source software development, and he has contributed to different open-source projects, for example pdb-tools. And now he will speak about HADDOCK3. So, please. — Thank you very much, Alessandra, for your kind introduction, and welcome everyone to this BioExcel webinar. So today we will talk about HADDOCK3. As you know, BioExcel is composed of several projects for integrative modelling and computation, and today we are talking about HADDOCK3. First of all, I want to thank everyone involved in the project and in Alexandre Bonvin's group, the present and past people that have worked together with me. Special thanks to Marco and Rodrigo, who are also participating in the development of HADDOCK3. So big thanks to everyone in the group. Before going to HADDOCK3, I'm going to give a quick overview of HADDOCK2. Most likely you all know about HADDOCK2: it is one of the most widely used software packages to model biomolecular interactions between macromolecules using experimental data.
So the idea is that you start with the individual molecules — proteins, or protein and DNA — and HADDOCK uses a physics-based force field and experimental data to try to arrive at a model of the interaction that best fits the experimental data. That's what HADDOCK does. As described before in other webinars, HADDOCK can integrate many different experimental restraints in order to obtain the best model. I don't want to go deeper into HADDOCK2, because it has been around in production for 10–15 years already, so I invite you to go to our group website and to the BioExcel YouTube channel and look for the presentations where my colleagues and my supervisor describe the HADDOCK2 protocol in detail. Feel free to take a screenshot of this slide — it's also in the recording, so you can take the links. Now we are going to talk about HADDOCK3, and about the many differences between HADDOCK3 and HADDOCK2. Before going into detail, I want to tell you that the HADDOCK3 code is openly available on GitHub. We have a public repository where all the code, discussions, issues and pull requests are there, so you can follow and trace the development of the project. This is also very nice for everyone now joining the development of open-source software: it's a very good place to get a feeling of what it is like to develop software with a team. I'm really proud of this, and it's definitely a source of good, open development practice. So what is the main scope of HADDOCK3 with respect to HADDOCK2? The main scope of HADDOCK3 is modularity — modularity of the workflow. Let's discuss this. For example, HADDOCK2 is characterized by three main steps: at the beginning you input your two molecules and all the experimental data you want, and at the end you get your interaction model. This pipeline can be tuned by many different parameters, but it is a rigid pipeline.
If right now you don't know exactly what I'm talking about with HADDOCK2, just bear this in mind: it was a parameterizable pipeline, but it was rigid. You have much more information in the links I showed you before. What we want with HADDOCK3 is to split this pipeline into many individual pieces, what we call independent modules, because we want to be able to combine these independent modules into modular computational workflows that we can adapt to our project, to our system in particular. For example, we can take the different parts and put them together like LEGO pieces: we can swap them and use them in a different order, and we can combine as many as we want. You can have a workflow of two or three steps, or 15 or 20 steps, depending on the system you want to study, and you can also repeat particular steps. To give you an idea of the modules we have implemented so far in HADDOCK3: we have the topology module, which converts the PDB files into the topologies needed for CNS to work — CNS is the computational engine working under the HADDOCK program. We have sampling modules, which sample the conformational space in which the molecules can move around and interact with each other. Then we have refinement modules, which refine and energy-minimize the interactions between the proteins. We have analysis modules that do all kinds of analysis — clustering, evaluation, energy scoring and so on. And we have scoring modules that provide additional scoring, minimization and refinement. The idea to take from here is that while in HADDOCK2 there was a fixed recipe pipeline that was parameterizable, now we have all these little pieces that you can combine independently, and each of these little pieces can be configured by its own parameters.
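The LEGO-piece idea above can be sketched in a few lines of Python. This is a conceptual illustration only — the module names come from the talk, but the functions and their behaviour are invented stand-ins, not HADDOCK3's actual internals:

```python
# Conceptual sketch: a modular workflow is an ordered list of
# (module_name, parameters) pairs executed top to bottom, each step
# consuming the models produced by the previous one.

def topoaa(models, **params):
    # stand-in for topology generation
    return [m + ":topo" for m in models]

def rigidbody(models, sampling=1000, **params):
    # stand-in for rigid-body sampling: each input spawns `sampling` models
    return [f"{m}:rb{i}" for m in models for i in range(sampling)]

MODULES = {"topoaa": topoaa, "rigidbody": rigidbody}

def run_workflow(workflow, inputs):
    models = inputs
    for name, params in workflow:      # steps run strictly in the order given
        models = MODULES[name](models, **params)
    return models

workflow = [("topoaa", {}), ("rigidbody", {"sampling": 2})]
print(run_workflow(workflow, ["molA", "molB"]))
```

Because each step only sees the models coming out of the previous one, the pieces can be reordered, repeated or removed freely — which is exactly the modularity being described.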
For example, some of these modules have 10 or 20 parameters, but there are others — the rigid-body module, for example, has almost 1000 parameters. Because defaults are provided, very novice users can use HADDOCK3 very easily; but if you are a very advanced user and you know very well how to use your data to generate your models, you also have lots of parameters at your disposal. Importantly, in HADDOCK3 you can also wrap third-party software. The HADDOCK3 shell is built entirely in Python 3, and each of the modules is surrounded by a common Python interface that we have designed, so it's easy to integrate third-party software into these workflows. For example, we have LightDock and gdock — you can Google them and look them up. These are software packages that provide different approaches to rigid-body sampling, so you can combine different methodologies for the same task inside the HADDOCK3 pipeline. We provide both natively implemented modules and modules that rely on third-party software. So, how do you run HADDOCK3? I forgot to tell you, but we are currently in a beta version: we have a version that is stable and working, but we don't call it production-ready yet — still, you can test it, you can use it, and it works. Right now we support a command-line interface, so you run HADDOCK3 from the command line. Nonetheless, we are collaborating with Stefan from the Netherlands eScience Center, who is developing a web interface to help you configure a HADDOCK3 run — but that's work in progress; I'm not going to talk about it today, just to tell you that it's something we are also working on. So, how do you run HADDOCK3? You run it with a simple command, haddock3, and you give it a configuration file. That's all you need.
And if everything goes well, Captain HADDOCK will think for a while and give you a results folder with all the results of your run. We are going to analyze all of this a little bit. First, what is this configuration file? The configuration file is a simple text file that you can open on your computer. You start from a blank file and fill it in with what you want HADDOCK3 to do. The first thing you need to do is provide what we call the mandatory parameters — the parameters you always need to run HADDOCK3. One is the run directory, which is the name of your results folder. The other is the molecules: a list of files on your computer that represent the proteins, or the DNA, or the proteins and ligands, that you want to model. In this case it's molecule 1 and molecule 2. These are the input molecules of HADDOCK3 — always remember that HADDOCK3 wants to model the interaction between two molecules, so we need to provide the two molecules separately as a starting point. A second layer of parameters is what we call the optional parameters. There are many; here we use just one, the number of cores, which is the number of processors your computer will use to perform the calculations. And then you go module by module, defining which module you want to use and with which parameters. The first one here is called topoaa, which is the topology module — it is effectively a mandatory module. Then you go on, saying, for example, that you want to do rigid-body sampling, and after that a CAPRI evaluation. So you build the configuration file from top to bottom, with exactly the steps you want your workflow to execute. It's very visual, and it's very easy to know what the program is going to do, because it will follow the text from top to bottom.
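A configuration file of the kind just described might look like the sketch below. The module names (topoaa, rigidbody, caprieval) are the ones mentioned in the talk, but the exact parameter spellings are assumptions and may differ from the real HADDOCK3 ones:

```toml
# Hypothetical HADDOCK3-style configuration file (parameter names illustrative).

# mandatory parameters
run_dir = "run1-protein-protein"          # name of the results folder
molecules = ["molecule1.pdb", "molecule2.pdb"]

# optional parameters
ncores = 4                                # processors used for the run

# the workflow: modules execute top to bottom
[topoaa]          # topology generation (effectively mandatory first step)

[rigidbody]       # rigid-body sampling
sampling = 20

[caprieval]       # CAPRI-style evaluation of the sampled models
```

You would then start the run with something like `haddock3 my-config.cfg`, exactly the single command described above.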
And I told you that we have modules with about 1000 parameters, for example the rigid-body module, but here you see we only define a few of them — the tolerance, the restraints file name and the sampling. That's because HADDOCK3 will always use the default values for all parameters unless you tell it otherwise. This is how the configuration file is built; we try to keep it as simple as possible to give the best user experience, and that's all you need to run HADDOCK3. So, this configuration file goes into the command line: haddock3 my-config-file. Now let's see how the results folder is organized. If you open the results folder, you'll see several subfolders inside, and these folders will match your configuration file exactly. The 0_topoaa folder is the first step in the workflow, the first module you defined in your configuration file; then you have 1_rigidbody, which is the second one, and so on and so forth until the end. The data folder holds copies of all the input files you provided — HADDOCK3 keeps these copies as a record, for traceability. Inside each of these subfolders you will find several types of files. The most important ones are the resulting PDB models — the PDB files that are the result of the simulation. You also have other module-specific files that I will tell you more about in the next slides, and then files that help the software move on. We have the CNS input and output scripts — the input and output for the CNS engine. You also have a parameters configuration file, which is basically a record of the parameters you used to run that module. And there is this io.json, which is for internal use; I will tell you a bit more in the next slide, but it's for internal use.
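The "defaults unless told otherwise" behaviour is easy to picture as a dictionary merge. This is an illustrative sketch only — the parameter names and default values below are made up, not HADDOCK3's real ones:

```python
# Sketch of default-parameter resolution: user-supplied values override
# defaults, everything unspecified keeps its default.

RIGIDBODY_DEFAULTS = {
    "sampling": 1000,       # hypothetical default values
    "tolerance": 0,
    "ambig_fname": None,
}

def resolve_params(user_params, defaults):
    """Merge user parameters over defaults, rejecting unknown names."""
    unknown = set(user_params) - set(defaults)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {**defaults, **user_params}

# a config file that sets only two of the ~1000 parameters
params = resolve_params({"sampling": 20, "tolerance": 5}, RIGIDBODY_DEFAULTS)
print(params)
```

Rejecting unknown names is a deliberate design choice in the sketch: a typo in a parameter name fails loudly instead of being silently ignored.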
And all the folders follow the same structure, so the program is not only modular but also consistent in that sense: all the little pieces of the software follow the same patterns. We do provide some examples already — we are always working on them and building more documentation and more examples. If you go to our GitHub repository, there is an examples folder, which is composed of several subfolders with examples on how to use HADDOCK3 to model different biochemical situations: for example, antibody–antigen docking, protein–DNA docking, protein homotrimers. There are several situations, and we provide examples on how to handle them. Inside each of these subfolders you have several configuration files. If you are testing the software to learn how to use it and see how it behaves, you want to focus on the configuration files that end with the test suffix. These are configuration files that can be as complex as needed, but with very little sampling, which means you can actually run them on your laptop without much of an issue. For the full ones you will need a cluster, because they execute the full sampling scheme and will take several days. Obviously, the results you obtain from the test configuration files may not have biochemical significance, because the program didn't have enough time to sample for a solution — it's just a very short sampling for you to see how the workflow runs. But the file and folder structure is maintained, and that is valuable. So, for example, if you have HADDOCK3 installed and you want to run one of these examples, you simply navigate to the folder on your computer — for example the protein–protein docking example — and just run haddock3 on the test configuration file.
And you can always open the configuration file, read it, edit it, study it and learn from it. Now I want to look a little more at the examples, to give you a sense of how modular HADDOCK3 is, and how you can use that modularity to adapt the docking protocol to the needs of your system. Here I'm going straight to the antibody–antigen docking folder inside the examples folder I just showed you. We provide four different test examples. I don't want to go into the details of the naming here — the name simply encodes what the file does. Let's take the first one, and then we'll compare it against the others. In the first one, if I represent the workflow as puzzle pieces, as I showed you before, you see that we have 12 pieces. We want to model the interaction between the antigen and the antibody, and because it's a more complex interaction, we want to perform several steps and several checkpoints in between, so we designed a workflow that is 12 steps long. The difference in this workflow, again compared with HADDOCK2, is that now we can mix and match very different possibilities. With arrows I represent the simulation modules, which actually move the coordinates and move the molecules around. With circles, the analysis modules, which read the results from the previous step, analyze them, and give you some sort of report on the performance of that step. And you can see that we have them interleaved with clustering and selection modules — the ones with the purple squares here — so you can filter previous results before sending them to the next simulation module. This was something that was not possible with HADDOCK2 at all.
In HADDOCK2, if you started by generating, for example, 1000 models in your first simulation step, you had to carry those 1000 models through to the end. That's not the case anymore. You can sample, for example, in the second step — the green one, the rigid body — 1000 or 10,000 models, and then select just the best 500, because those are the ones on which you want to spend the much more expensive computational time to refine and get the best results. If we represent this in a little cartoon animation, you see that you enter with the two individual molecules as input, but then you have several checkpoints in between that you can inspect, where you can already get a preliminary model of the interaction if you want to see it. What you want in the end is on the rightmost part of the slide: your final model. But if you're experimenting with the workflow, it can also happen that an intermediate step already gave you a very nice result that didn't improve with the additional steps — with HADDOCK3 you can take that intermediate result, and you can inspect all the steps that happened in the workflow. Now, if we compare this first example with the others I showed you before: we have the same kind of approach in all of them — we want to model the antibody–antigen interaction — but because it's a new system, and as researchers we don't yet know how best to explore it, we can change the workflow, adding and removing different pieces, to try to get the results that best fit our data. For example, in number two we remove these two blue and red pieces, the clustering and cluster selection, and select the best models directly instead of clustering them. And in number three we add, here on the red circle, an additional step.
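The "sample many, refine few" funnel just described is, at its core, a sort-and-slice on scores. A minimal sketch, with arbitrary made-up scores (in HADDOCK lower scores are better):

```python
# Sketch of the sampling funnel: score many cheap rigid-body models,
# then pass only the best few to the expensive refinement stage.

def select_best(scored_models, n):
    """Keep the n best models (lowest score first)."""
    return sorted(scored_models, key=lambda m: m[1])[:n]

# e.g. 10 cheap rigid-body models, of which only 3 go on to refinement
rigid_body = [(f"model_{i}.pdb", score)
              for i, score in enumerate([-12.0, 3.5, -40.2, 7.1, -25.9,
                                         -1.0, -33.3, 15.0, -8.8, -19.4])]
for name, score in select_best(rigid_body, 3):
    print(name, score)
```

In a real run the same idea scales to selecting, say, the best 500 out of 10,000 sampled models before flexible refinement.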
This is an additional refinement module, to refine our systems in water before continuing to select the clusters, until we reach the final minimization step. And in the fourth one, you see that we don't cluster at the end anymore, but take all the models. So you see: we can combine the different pieces independently, and you can explore which is the best way to arrange them in order to obtain the best model. Again, each of these pieces is a module — a simulation or analysis module — that has its own parameters. And you don't need to worry that these are too many puzzle pieces: you can also assemble a workflow with just two steps, the topology and then a scoring, because maybe you have models that you generated with another program and you want to see how they score according to the HADDOCK scoring function. You just run a scoring module on your models, and you can use HADDOCK3 just for scoring — a simple two-step workflow. Again, just to review the implemented modules we have — and we keep implementing more; we are still porting features from HADDOCK2 to HADDOCK3: we have the main topology module, which is needed to convert the PDB into the files readable by the CNS engine; then we have rigid-body sampling, flexible refinement, energy minimization, molecular dynamics and side-chain minimization; and then several analysis modules that provide you with reports and tables, plus additional scoring and minimization modules. We are in constant development — we are in a beta version — but we are constantly, on a daily basis, providing additional features.
So now that we have talked about combining the different modules, and we have seen how to run HADDOCK3 — the haddock3 command plus the configuration file — and that you get the results folder at the end: I've shown you a little bit of the results folder, but now I want to navigate deeper into it, to show you which kinds of files you can expect inside and how to visualize them. Let's start with the pipeline I showed you before: you have your two input molecules and you expect to get the model at the end. Let's focus first on the three main steps, because this is a long pipeline: first we do the topology generation, then rigid-body sampling, and then a CAPRI evaluation. What I'm showing here is how you can visualize and inspect the files created by HADDOCK3 — but everything is handled automatically by HADDOCK3; you don't need to move files around for the modules to continue. Everything is automated. This is just for when the run is finished and you want to inspect what happened. If you open your results folder — I'm showing here just the three subdirectories related to the first three steps of the workflow — and you open the 0_topoaa folder, you will find these files for this particular example from our examples folder. I call your attention to the .inp and .out files, the input and output files for CNS: the input script that runs CNS, and the output log of CNS. If you are not developing with CNS, or not so deep into CNS — which mostly nobody is, except for our group — you don't need to worry about these files. Everything is okay; and if something goes wrong, you can use these files to communicate with us, for debugging or logging purposes.
What you want to see, or to use later, are the PDB files. These are the HADDOCK-prepared PDB files, taken from the initial input. The PSF files are the topology files that are generated to be used by the other simulation modules. Again, as I told you, there is the io.json, which is for internal usage — a file we use to communicate between the different modules — and the parameters file, which you can use for traceability and logging purposes: it has all the parameters used by this module, so if you take these parameters and the same input, you should be able to run the same simulation again. If you go to the next step, the rigid-body sampling, the rigid-body module will take files from the previous module. I'm showing you the files of topoaa, the first step, and highlighted in green are the files used by the second step: for its calculation, the rigid-body module takes the PDBs, the topologies and the io.json. It ends up producing its own io.json, its own parameters file and, most importantly, the PDB files — the PDBs containing the models of the interaction between the input molecules, given the rigid-body sampling parameters you set. Why does it show 10 models here? Because in this particular example we asked for a sampling of 20: I'm showing you 10, but you have models 1 to 20 in your output folder. As I said, there is the io.json and the parameters configuration file with all the parameters for the rigid-body docking module. Then you also have the input and output for CNS, one for each of the sampled models, and also a seed for reproducibility: if you run HADDOCK3 on the same machine with the same seed, you should get the same result, because it is reproducible. When you jump to the next step, the CAPRI evaluation module, it is actually going to read those PDB files.
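Why store a seed per sampled model? Because a seeded random generator replays the same sequence, so the run can be reproduced exactly. A minimal, generic illustration (nothing here is HADDOCK3 code; the function is a made-up stand-in for a stochastic sampling step):

```python
# Minimal illustration of seed-based reproducibility: rerunning the
# "sampling" with the recorded seed yields identical values.
import random

def sample_orientations(seed, n=3):
    rng = random.Random(seed)              # isolated, seeded generator
    return [round(rng.uniform(0, 360), 2) for _ in range(n)]

run1 = sample_orientations(seed=42)
run2 = sample_orientations(seed=42)        # replay with the recorded seed
print(run1 == run2)                        # identical, hence reproducible
```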
So you take all the PDB results from the previous step — I mean, not you: HADDOCK3 takes all the results from the previous step — runs the CAPRI evaluation module, and gives you two tables. If you open them, you'll see the list of the PDB files generated in the previous module, now sorted by score according to the HADDOCK scoring function, along with many other parameters that are filled in depending on the workflow — you can have more or different columns. Then you go to the next module, when you say: okay, I've done my rigid body, the models are sorted in the table, I see how the rigid-body step performed and which models are better or worse. Now I want to cluster all these models into similarity clusters, to see which cluster performs best. That is the next step, the blue one, the clustfcc module, which gives you a table with the different clusters. Here, just for demonstration purposes, each cluster contains only two structures, because we sampled only 20 structures: some of them were selected into clusters and the rest were discarded. But if you are sampling 10,000, you can see that, very early in the workflow, you can already cluster your results and continue the pipeline with only the best results the workflow has produced. Then you can combine this with the selection of clusters: an additional step to select the clusters you want to carry forward. You see that we have separated this into many different steps, and one may ask why there are so many steps instead of combining everything into one module that does it all — but that is exactly the opposite of what we want.
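The score tables just described are plain text, so they are easy to inspect programmatically too. A sketch of reading such a table and sorting models best-first — note that the column names here ("model", "score") are assumptions for illustration, not the exact HADDOCK3 headers:

```python
# Sketch: parse a caprieval-style tab-separated score table and sort
# the models by HADDOCK-style score (lower is better).
import csv
import io

TABLE = """\
model\tscore
rigidbody_3.pdb\t-21.4
rigidbody_1.pdb\t-35.0
rigidbody_2.pdb\t-8.2
"""

def read_scores(text):
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    return sorted(rows, key=lambda r: float(r["score"]))   # best first

for row in read_scores(TABLE):
    print(row["model"], row["score"])
```

In a real run you would read the table file from the step's subfolder instead of an inline string.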
And although I'm showing this here as one example, we actually have several situations where you really want these possibilities to be separated, so you can combine them in the best way possible. When you then go to the flexible refinement module, the message I want to transfer to you is this: you started by producing 20 models in the rigid-body sampling — I say 20 because this is an example; you could generate 20,000 — but then you used these two modules, the blue and red clustering and selection, to reduce all your models to just the best 10, which you then send to the flexible refinement simulation, which is computationally much heavier. You don't need to run the refinement on all the rigid-body models, because many of the rigid-body solutions will, by definition, not be the best — you can discard them, and select the best clusters to send to the flexible refinement. You can also select by model, without clustering, as I showed in the previous slide; we have several selection mechanisms that you can use between the different steps. The rest of the pipeline I'm not going to show, for the sake of time, but the same idea applies to its next steps. Let's pause to wrap up everything we've said so far, and let me talk about one more of the nice features — we have many, but I'm going to show you just one. It's nice to have a modular workflow and modular software you can use, but HADDOCK3 also wants to treat the results in a modular way, so that you can work with the results folders easily. So, for example, let's talk about continuing and extending a run. Say you have run HADDOCK3 for your system with a configuration file, and you have your results folder with all the subfolders.
And for some reason — maybe the cluster broke, or you mistyped a parameter, something you realize was wrong — all the steps from the fourth onwards are wrong. You don't need to run everything from scratch: you can tell HADDOCK3 to run your configuration again and restart from step 4, and, provided the results folder is still present, HADDOCK3 will manage to continue from the fourth step onwards while keeping all the previous results intact. Why is this good? Because some of these steps can take days to execute, and you don't want to repeat those calculations all over again. Cluster time is limited, our time is limited, so the more you can reuse previous results, the better. This is one of the options; we have several others that you are invited to read about in the documentation, or contact us — I just wanted to show one of the possibilities that give you a flexible way to interact with the program and with the results. You can also take advantage of this functionality to edit the parameters: if you want to tweak the configuration file from step 4 onwards, you can use it for that too, without repeating the earlier steps — to modulate the parameters, or because you decided you no longer want a particular module. This is one of the advanced features you can profit from, and we have many others. I've been showing one command line, but HADDOCK3 is actually becoming a battery of command-line tools. We have the main one, haddock3: if you type haddock3 -h on your terminal, you'll get help on how to use haddock3, which is the command that executes the run.
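The restart behaviour can be pictured as a split over the numbered step folders: everything before the restart point is kept, everything from that step onwards is recomputed. A sketch under the "N_module" folder-naming convention shown earlier (the specific module names in the example list are illustrative):

```python
# Sketch of restart logic: keep step folders before the restart point,
# mark the ones from that step onwards for recomputation.
import re

def split_steps(folders, restart_from):
    keep, redo = [], []
    for name in sorted(folders):
        m = re.match(r"(\d+)_", name)
        if m and int(m.group(1)) >= restart_from:
            redo.append(name)      # will be recomputed
        else:
            keep.append(name)      # previous (expensive) results stay intact
    return keep, redo

folders = ["0_topoaa", "1_rigidbody", "2_caprieval", "3_seletop",
           "4_flexref", "5_caprieval", "data"]
keep, redo = split_steps(folders, restart_from=4)
print(keep)
print(redo)
```

Non-numbered folders such as `data` (the input copies) are never discarded, which is what makes the previous results reusable.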
But then we have other tools for more advanced users. The bottom two I'm not going to explain, for the sake of time and simplicity, but I want you to understand that there are several layers of complexity, so to speak: one for regular users, others that help and provide additional features for more advanced users, and others for people developing methodology who want to use HADDOCK3 to develop methods — together with the tools and pipelines we provide for benchmarking. So keep this idea in mind. Now, how to start testing and using it. I've told you a lot about HADDOCK3 — this has been more than one year of work, and especially in the last period we went from an alpha prototype to a beta version that is almost production-ready. The first thing to do, if you want to test it, is to go to the HADDOCK3 repository and follow the instructions: the README tells you how to install it. The instructions are very simple — you just follow the commands — and then you contact the group supervisor, Alexandre Bonvin, to ask for the CNS engine. This is something inherited in HADDOCK: CNS is a program, and a language by itself, and its license is free for academia and research, but not completely free, so you have to ask for a copy of it — there is nothing wrong with that. The HADDOCK3 installation instructions also tell you how to install CNS and how to configure everything. We are still in beta, so we don't host the documentation online yet, but we tell you how to generate the HADDOCK3 documentation locally: you run a command line and it generates the HTML on your computer, so you can then open it in your browser.
You can navigate the whole project documentation while offline. And once everything is installed, you just run haddock3 and you are in. We are also working on a Python interface — it's not as developed as the command-line interface, but it's already quite far along. In the very near future, if you are a very advanced HADDOCK3 user and want to profit from these features directly from the Python prompt or from a Jupyter notebook, you will be able to import the different HADDOCK3 functionalities and use them. This is still in development — the interfaces are already very functional internally, but we are now working on making them more user-friendly. And if you want to contribute to our project, again, go to our repository: we have a contributing file, and the best way to contribute right now is to help us with the documentation. Just read through the documentation and tell us what's missing — there are many parts still missing, as I told you; we are still in beta, but we do have a lot. So if you generate the documentation locally and navigate around, you can tell us "this part I can understand, this part I cannot understand so well" — that would be very helpful. And if you have coding skills and want to participate, you are also very welcome; we provide instructions on how to do that and how to interact with us. Very briefly on future developments: we are about to finish a whole preprocessing module to clean and arrange input PDBs — because PDB files, as you know, are very varied in their features, so we need to arrange and correct them before giving them to HADDOCK3 — so we are about to finish a pipeline for that.
This also reproduces what was done in ADOC-2, with additional features. We will continue porting support for the experimental restraints of ADOC-2 into ADOC-3. We also want to work on even more advanced features, like workflow branching and merging back, so that you can, for example, test multiple modules for the same step and then merge all the results together. That is more advanced, but we are working on it, and we keep developing the documentation and tutorials.

If you want to find us, beyond the group as a whole, here are the contacts to reach me directly or my supervisor, Alexandre Bonvin. And, to finalize this presentation, I can never thank enough the past and present members of the group. Each person works on a specific project: I am developing ADOC-3 full time, but Marco and Rodrigo are also helping a lot with its development, and so did Brian, who worked on the early stages of ADOC-3 last year. Big thanks to all of them, and thanks to you; I am always open for questions and discussion.

Thanks for the talk, João, it was very interesting. Let's see if anyone has any questions; you can type them in the Q&A panel, which you should see at the bottom. We have a couple already, so we'll start with the first one, from Najib: is it possible to use only experimental data?

Good question. So we have both things: the energy functions that rely on physical terms, and the experimental data. You always need the physical terms, because you cannot allow the atoms to clash or other unphysical things to happen. You can solve the problem based only on the physical terms, or you can aid the sampling.
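The workflow branching and merging idea mentioned above can be sketched as follows. The two sampler functions and their scores are invented for illustration, assuming the common convention that lower scores are better; the real ADOC-3 branching feature was described as work in progress.

```python
import random

# Two alternative (made-up) sampling modules for the same workflow step.
# Each returns (branch, model_index, score) tuples; lower score = better.
def sampler_a(seed):
    rng = random.Random(seed)
    return [("A", i, rng.uniform(-100.0, 0.0)) for i in range(3)]

def sampler_b(seed):
    rng = random.Random(seed)
    return [("B", i, rng.uniform(-100.0, 0.0)) for i in range(3)]

# Branch: run both modules on the same input.
branches = [sampler_a(1), sampler_b(2)]

# Merge: pool the models from all branches and rank them by score,
# so the next workflow step sees a single, unified model set.
merged = sorted((m for branch in branches for m in branch), key=lambda m: m[2])
best = merged[0]
print(best)
```

The design choice this illustrates: as long as every module consumes and produces the same model representation, branches can be compared and recombined without the modules knowing about each other.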
That is the point: you can aid the sampling using experimental data. If from your experimental data you know that two residues must be within a certain distance, or a certain distance range, you can input that in ADOC-3 the same way you did in ADOC-2, and during the sampling ADOC will take it into consideration: if those residues start to drift too far apart, or come too close together when that is not what you want, those models will be penalized, and the models that best fit your data will be favored. Almost all simulation modules in ADOC, rigid-body docking, flexible refinement and others, allow you to input experimental information. In ADOC-3 we do not yet support all the kinds of experimental data that were supported in ADOC-2, but we already support many.

Now a question from Hugh: can you give an example of the scale of problems that can be done on a single workstation, so one or two GPUs, and therefore benefit from a local install without needing to use a cluster?

That is a very good question, and a very general one for all kinds of simulations, so I am going to answer it in general terms, valid for any simulation software. The more atoms you have in your system, that is, the larger your protein or molecule, the more computationally demanding it will be. You asked about GPUs, but if you are thinking about running on a local laptop, it is going to be almost impossible: I have a good laptop, with only CPUs, and running even the very simple test examples already takes half an hour while the laptop burns. How big a system could you run on a GPU? I think you could already run a good-sized system on a GPU.
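The distance-restraint behaviour described above, no penalty while two residues stay within an allowed range and a growing penalty once they drift outside it, can be sketched as a flat-bottom potential. This mirrors the idea of such restraints but is not claimed to be ADOC's exact functional form or parameters.

```python
# Flat-bottom distance restraint sketch: zero penalty inside the allowed
# range [lower, upper], quadratic penalty outside it. The force constant k
# and the quadratic form are illustrative assumptions.
def restraint_penalty(distance, lower, upper, k=1.0):
    if distance < lower:
        return k * (lower - distance) ** 2
    if distance > upper:
        return k * (distance - upper) ** 2
    return 0.0

# Within the allowed 3-8 A range: no penalty. Outside: penalty grows
# quadratically with the violation, steering models back toward the data.
for d in (5.0, 10.0, 1.0):
    print(d, restraint_penalty(d, 3.0, 8.0))
```

Summing such terms over all restraints, on top of the physical energy terms, is what lets the scoring favor models that satisfy the experimental data.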
I do not know the exact numbers, but I think you could already run a good system on a GPU. However, CNS mostly supports CPUs; I do not think we have GPU support at the CNS level. So in practice we support CPUs, and when you run calculations on a cluster you use full nodes with their CPU cores.

Okay, thanks. I will follow that up slightly: you mentioned that in the input script you can set the number of cores to use, I think that was one of your examples. Are you able to comment on what the parallel programming scheme behind the program is, for example MPI or OpenMP?

So we have several execution approaches. You can run it on your local machine, simply separating the tasks into processes in the normal mode via the multiprocessing library, but we also have an MPI implementation, where we can spread the calculations over different nodes, or over the different cores of a single node. I did not show it, but one of the parameters you can set in the configuration file is the execution mode, and the options include mpi and local; depending on the one you choose, ADOC-3 creates the scripts automatically, and jobs can be sent to the cluster queue automatically. So depending on the system you are running on, you can configure the run accordingly.

Interesting. The best person to answer about this implementation is Rodrigo, so if you ever contact him, he is the one who has been implementing the multiple execution types.

Cool, thank you. If anyone else has any questions, you can put them in the chat; I think you can also raise your hand and ask a question, if anyone wishes to. The presentation was clear.
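The local execution mode described above, splitting independent tasks over worker processes with the multiprocessing library, can be sketched like this. The `score_model` function, the mode names, and the dispatcher are illustrative assumptions, not the real ADOC-3 code; the `mpi` branch is only indicated.

```python
import multiprocessing

def score_model(model_id):
    # Placeholder for an expensive per-model computation (e.g. scoring
    # one docking pose); squaring stands in for the real work.
    return model_id * model_id

def run_tasks(task_ids, mode="local", ncores=2):
    # Dispatch on an execution mode, the way a configuration file's
    # "mode" parameter would select the backend.
    if mode == "local":
        # Independent tasks spread over worker processes on one machine.
        with multiprocessing.Pool(ncores) as pool:
            return pool.map(score_model, task_ids)
    if mode == "mpi":
        # A real implementation would delegate to an MPI launcher here.
        raise NotImplementedError("mpi backend not sketched")
    raise ValueError(f"unknown execution mode: {mode}")

if __name__ == "__main__":
    print(run_tasks([1, 2, 3, 4]))
```

Because docking poses are scored independently, this pattern parallelizes almost perfectly: the same task list can go to a local pool, an MPI job, or a cluster queue without changing the tasks themselves.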
And ADOC-3 is being used intensively: we are now using ADOC-3 for scoring in the CAPRI evaluation. Again, it is a beta version, but it has several features I cannot cover now for the sake of time. If you go to our repository you will see that almost on a daily basis we have new features, corrections, and updates coming into the program, so it is really being actively developed.

Do you know what the roadmap going forward is? When do you plan to get it out of the beta stage into a full release?

So, the final beta version, what we would consider a stable beta version so to speak, we plan to finalize in about five weeks, by the end of June. And honestly, I consider the program very stable right now: what it does, it does well. In the future we will be adding features as the necessity comes, as we test more systems; as with any software, we may realize it would be better to have a table that reports this, or a feature that allows more flexibility in handling the results. We are adding those features as we go along the road, but I would say that by the end of June, when we release the first stable beta, it will be like any normal software that is actively developed and actively used.

So if someone had a feature they wanted that was not there, would they be able to go into the GitHub and ask for it?

Let me go back in the slides; it can be this one. Definitely: if you go to the GitHub repository you will see that the issues and the pull requests are very active. You just go to the issues and write something: you can describe your experience, report a bug you found, or request a feature you would like to see implemented.
If you need more privacy, you can always talk with us by email; you can also email us if you have sensitive data in your project. Otherwise, if you think it is something that can be openly discussed, go to the issues; we have everything organized with labels and categories. You tell us and we will address it. That is how we work even internally: we use the issues to organize our daily work, so anyone is very welcome to do the same.

Yeah, thanks. Okay, those might be all the questions for now, so I will hand back to Alessandra and she can close the webinar.

Thank you very much. Thank you very much to João for the presentation, and thank you also to all the attendees. I also want to thank you all for the whole season, because this is the last BioExcel webinar of the season; we will start again in September, but for now the season is closed. So thank you everybody for attending, and I hope we can meet again next September. Thank you, and now I close the webinar. Thank you very much. Bye bye. Thanks. Thanks for everything.