So welcome to the BioExcel webinar number 52. Today, as presenters, we have Paul Bauer and Berk Hess from the KTH Royal Institute of Technology in Stockholm, Sweden, and they will speak about what is new in GROMACS 2021. Something about today's presenters: Paul is the GROMACS development manager. He finished his PhD in 2017 at Uppsala University on computational enzymology with Lynn Kamerlin. He then moved to Stockholm, where he worked as a scientific programmer and researcher in the group of Erik Lindahl at KTH, and he started to work on GROMACS. In 2019 he became the development manager of GROMACS. Berk is professor of theoretical physics at the KTH Royal Institute of Technology. He has designed a lot of the algorithms in the GROMACS simulation package over the last two decades. His current research focus is advanced sampling methods, aggregation of molecules, and wetting of surfaces at the molecular scale. And they will tell us what is new in GROMACS 2021.

So, I welcome everyone this afternoon. As Alessandro said, I'm just going to give you a quick overview of what kind of new features we have in GROMACS 2021, which was released a few weeks ago, and also some things that we are currently working on and interesting developments to come. Berk is going to focus his part of the webinar on two of the new features that he helped implement, and as he is the expert in that field, he will speak about them later.

So what is GROMACS for? Well, it's for running simulations, and usually we think about biomolecular simulations, but it can also be used to simulate basically any system that you can describe using the algorithms we have implemented. Here you see a nice membrane system being simulated, but you can simulate everything from carbon nanotubes to macromolecular assemblies. The thing we want to achieve with GROMACS is that you can just run a simulation of something interesting, like this transmembrane receptor in a large box with multiple ligand molecules and surrounding molecules, without having to worry much about how efficient the simulation is, because we should take care of that, and without too much hassle in setting up simulations, trying different algorithms, or playing around with system settings until you get things right.

All the things that we have in GROMACS are, of course, hopefully documented in our manual, and I always ask people to please read the manual and the release notes first to learn about the new features. You will always find all the information there. You can also find information about the implemented algorithms, about some limitations that we know about, and about things that we are no longer working on or that we know are not working at the moment. You will also always see what kind of bugs we have fixed in the recent patch releases or the major releases. There are also previous webinars that showed the capabilities of the previous GROMACS releases; I just have the links here to the two recent ones, and if you're interested to see what changed between those versions, I welcome you to check those out. I think they're also quite interesting to listen to.

Now, we have a quite strict release cycle for GROMACS: we try to have one release every year, and we also aim to have this release at the beginning of the year. We first tried to have the releases at the end of December or the beginning of January, but that didn't really work out.
So this time we had the release a bit later, in February. And we have plans for patch releases that will take care of a few bugs that we have already investigated. During autumn, we will switch our main development to the 2022 branch and hopefully release a number of beta and release-candidate versions to prepare for the 2022 release early next year. Just a final note on the 2020 branch: we are expecting to have maybe one more patch release coming up now in February, and maybe a second one in April if we find some bugs that are simulation-breaking or affect the physics in a bad way. But this branch is now officially only supported for things that are of major concern to us, and we want to focus our main efforts on the 2021 branch and all developments for the future.

Now, I actually didn't change this slide from last year, but I just want to show that GROMACS has quite a high impact. I think I should change it before we have a new release, because we have a few more publications that mention GROMACS and show what you can use it for in the end.

Now to what we're actually doing. GROMACS is not just developed as something that we do at KTH with a few programmers; it's part of a larger collaboration with multiple projects and multiple external groups. The development through BioExcel is one large part, but we also have several co-design projects where we work with hardware vendors to make sure that we have proper support for new hardware, hopefully before the hardware is released, and also proper support and proper performance on new hardware systems. And we work with other groups to implement new features at the same time.

Now, for the main part: what we're actually doing. There are quite a few new features in this release. This is taken directly from the release notes, so if you want to, you can read the same thing there. But the main things that we have developed are mentioned here, and we are going to go over them quickly now over the course of this webinar. I think we'll start with multiple time stepping, and for this I will give the word to Berk, who helped implement it and is the expert there.

Okay, thanks, Paul. So yeah, I have the honor to introduce the first two new features of GROMACS 2021. The first major one is multiple time stepping, or a multiple time step integrator to be more precise. We haven't prioritized this in the past, because we were always able to use virtual sites to replace the hydrogens and use a time step of four femtoseconds, but this is getting to its limits. So we now finally also looked into multiple time stepping, which many other codes already have. I'll show how it fundamentally works and then show how you can use it in GROMACS 2021.

The main goal here is to improve performance by calculating some part of the forces less frequently. The scheme we use is the standard rRESPA reversible and symplectic integrator. The idea is that you split the potential into fast and slowly varying parts, which can either be because some atoms move faster or slower, or because the forces themselves actually vary less with the motions of the atoms because the potential is quite smooth. This is currently implemented only for the leap-frog integrator in GROMACS, where it is an extremely simple scheme: you only change the velocity integration, not the coordinate integration. So at every N steps, where N you can choose, you integrate both the fast and the slow forces, so you need to compute both as you would normally do.
And then you simply add the slow force into the integration N times, so the slow force gets added multiple times, or effectively has a larger time step; you can also write it as N times dt here. Every other step, you only integrate the fast force. This is for a two-level scheme where you decompose into one slow and one fast part. You could have more levels, which is not implemented in the current release. So this is extremely simple, and it's shown in the diagram on the right, where you have the slow force in blue and the fast forces in red. The fast forces are added every step, as normally, to integrate the velocities, which then go into the integration of the coordinates. But the slow forces you simply add in twice in this example with N equal to two: every two steps you compute the slow force and add it in twice, so you apply the slow forces as an impulse. So it's actually extremely simple if you look at it, but it has been well thought out, not by me or by us, but by others a long time ago.

The only slightly complicating part here is that the constraint virial is more complicated, because the constraint forces are inferred from the displacements, which now depend in this way on forces with different pre-factors. So one has to do some tricks there, but that's correctly implemented. So these are the basics. And then one can choose which forces are slow and which are fast, to make different trade-offs of performance versus accuracy. You should choose the slow forces well, such that you don't introduce a lot of integration error by integrating them less often.

Okay, so how this is done in practice is through some mdp options. By default, multiple time stepping is turned off, but you can turn it on by setting mts to yes. The other options listed here are the defaults. By default we have two levels, which is the only number supported currently; we might add more in the future. The default setting for which forces are updated less often is only the long-range non-bonded, which means the PME grid forces: usually PME for Coulomb, but it could also be for Lennard-Jones, if you use PME for Lennard-Jones as well. And the default factor is two, which is the usual kind of setup that other codes also use, where you update the PME grid forces every four femtoseconds instead of every two.

What this standard setup gives is a small to moderate performance gain in most cases, but it's free, so to say, because in most cases it doesn't cost you accuracy to integrate the PME grid less often. It can give a higher performance gain at high parallelization, because there the PME grid calculation will dominate the communication cost, so there the gains can be much higher. That's one of the main areas targeted here. One other example: it can also be used for COM pull forces, which can also be expensive, especially in parallel simulations where you have to communicate a lot. So if your pull forces don't fluctuate very quickly, then that's also an option.

Here I can also mention that my original idea was to replace the large time step with hydrogens replaced by virtual sites with this scheme, since the virtual-site hydrogen scheme doesn't work so well anymore, especially on GPUs. So one can add more forces to this slow force group that is updated less frequently.
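To make the two-level leap-frog scheme described above concrete, the velocity update can be written roughly as follows (a sketch using the notation from the talk, with N the MTS factor and m the particle mass):

$$
v\!\left(t+\tfrac{\Delta t}{2}\right)=v\!\left(t-\tfrac{\Delta t}{2}\right)+\frac{\Delta t}{m}\,F_{\mathrm{fast}}(t)+
\begin{cases}
\dfrac{N\,\Delta t}{m}\,F_{\mathrm{slow}}(t) & \text{every $N$-th step}\\[6pt]
0 & \text{otherwise,}
\end{cases}
$$

while the coordinates are updated every step as usual, $x(t+\Delta t)=x(t)+\Delta t\,v(t+\tfrac{\Delta t}{2})$.

And as a minimal mdp sketch of the default setup described above (option names as listed in the GROMACS 2021 reference manual; please check the manual for the exact spelling and defaults):

```
integrator        = md                    ; MTS is currently only supported with leap-frog
dt                = 0.002                 ; 2 fs base (fast) time step
mts               = yes                   ; enable multiple time stepping
mts-levels        = 2                     ; only two levels are supported in 2021
mts-level2-forces = longrange-nonbonded   ; put the PME grid forces in the slow level
mts-level2-factor = 2                     ; evaluate the slow forces every 2 steps (4 fs)
```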
My idea there was to put the long-range non-bonded, the pair interactions and the dihedrals in the slow group. But unfortunately it turns out that some hydrogens get unstable every hundred nanoseconds, or every few hundred. So it doesn't happen very frequently, but it happens now and then, so that wasn't feasible, and that's certainly not the default. I'm still looking into this, and hopefully I'll get back with more information at some point. A slightly smaller time step could actually work here.

Okay, then a final note on multiple time stepping: things that we might extend support for in the next release or later. One issue currently is that update on the GPU is not supported in combination with multiple time stepping. This can lead to some performance loss, because the update needs to be done on the CPU in that case, for the integration and for combining the forces. That is something we would like to improve, because especially for GPU-heavy machines you could have a nice gain from not having to do PME every step. We would like support for the stochastic dynamics integrator, especially for free energy calculations, maybe support for more types of forces to decompose, and maybe more than two MTS levels. Although in practice that is often not so useful, because most force fields have anyhow been parameterized with constraints on the hydrogens, so the fast time step is anyhow two femtoseconds and there's not much more gain to be had. So that's what I wanted to say about multiple time stepping.

Then there's another new feature, which is free energy calculations with the accelerated weight histogram method. The accelerated weight histogram (AWH) method has been part of GROMACS for some time; it was developed by a PhD student working with me, but you could only use it for center-of-mass pulling coordinates. Now we've added an extension there. Oh, I should have mentioned the name of Magnus Lundborg on this slide, who has done most of the work here. He has extended this, together with me, to the lambda coupling parameter.

Let me first explain how you would normally do free energy calculations, which some of you might be familiar with. To interpolate between two states A and B, which could be a molecule solvated versus in vacuum, or a mutation in a protein, where A is the wild-type protein and B has one side chain mutated, for instance, you would add a coupling parameter called lambda to the Hamiltonian, and then you run many simulations, each at a different lambda value, to compute the derivative of the Hamiltonian with respect to lambda, or nowadays you would compute differences in the Hamiltonian with respect to lambda and use the Bennett acceptance ratio. Now AWH can handle this: what it can do is move lambda dynamically in one simulation and from that get a free energy.

The setup is actually really simple. You need to set awh to yes, you need to set the awh1-dim1-coord-provider to fep-lambda, so you tell it to act on lambda, and the only parameter you need to set is a diffusion coefficient, which tells roughly how fast the system moves along lambda; it seems like 0.01 (in units of one over picoseconds) is reasonably okay in most cases. Then you get the free energy out of a single simulation: you run one simulation, lambda moves, and from the energy file, with the normal gmx awh tool, you can extract the free energy. So this is really convenient, and it also turns out to be quite efficient.
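As a rough illustration of the setup just described, a minimal mdp sketch for coupling AWH to the alchemical lambda (a sketch only: the lambda vectors, the coupled molecule name and several other free-energy and AWH options are placeholders here and have to be chosen for the actual system; the free energy and AWH sections of the manual list the full set of required options):

```
; alchemical part
free-energy              = yes
couple-moltype           = LIG            ; hypothetical name of the perturbed molecule
init-lambda-state        = 0
coul-lambdas             = 0.0 0.25 0.5 0.75 1.0
vdw-lambdas              = 0.0 0.25 0.5 0.75 1.0
calc-lambda-neighbors    = -1             ; energies are needed at all lambda states

; AWH acting on lambda
awh                      = yes
awh-nbias                = 1
awh1-ndim                = 1
awh1-dim1-coord-provider = fep-lambda
awh1-dim1-diffusion      = 0.01           ; in 1/ps, the value suggested in the talk
```

After the run, the free energy along lambda can be extracted from the energy file with gmx awh, as mentioned above.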
It is about as efficient as the Bennett acceptance ratio method, or maybe slightly more efficient in some cases; we're working on a manuscript to show this. Another advantage is that you can now parallelize this with multiple walkers using mdrun -multidir, so you can have many independent or semi-independent simulations contribute to the same free energy difference being built up. In that way you can use many simulations and more hardware to get your answer quicker. This will be explained in more detail in the near future in a BioExcel webinar on AWH for free energy calculations, which will be announced soon. So with this I would like to give the word back to Paul.

Yes, thank you, Berk. I will just go on directly with the other main new features that we have. I think one of the most important ones is that we implemented a new type of pressure coupling algorithm that has been contributed to us by Giovanni Bussi and his group in a really well-executed external collaboration. We were able to implement the stochastic cell rescaling algorithm, a pressure coupling method that can be used both for the equilibration and the production parts of a simulation. If you're interested, I would recommend that you check out the paper that is mentioned here. And this is basically what you can do with it: you no longer have to switch between Berendsen for the equilibration part of the simulation and whatever other algorithm you have been using before, making it easier to use in the end and also, we hope, reducing the number of errors from people that use the wrong pressure coupling method for equilibration or production just because they forgot to change the mdp file.

Yeah, going on. Another main thing that we were able to add is experimental support for SYCL as a method to offload calculations to accelerator devices. I have to say this is still really experimental, and in the default 2021 release you're not going to see much of it, because we only added some back-end code for it and nothing that the user sees in the front end. But for people that are happy to experiment with this new feature, we actually have a development branch that tracks the 2021 release and includes all the SYCL features we have added so far; you can find the link on the slide. And also, if you go to manual.gromacs.org you will find a link and an explanation of what this branch is for. We want to explore SYCL further, because we think it's going to be an interesting approach for offloading if it manages to target different kinds of accelerator architectures. We also hope that we can use it in the future to provide support for AMD GPUs. This can currently be done with OpenCL, but we want to see if we can use hipSYCL instead, rather than having to add another GPU port that is AMD-specific using HIP.

Yeah. Then something that is a main feature and will hopefully make people happy that do free energy calculations that perturb charges: you can now offload PME calculations to GPUs if you do free energy calculations with charge perturbations. This means that for simulations that previously were only able to use the GPU for the normal non-bonded calculations and had to do PME on the CPU, which can be quite slow because you have to calculate multiple grids for the charge perturbation, you should expect a major performance gain from this.
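For these two features, a rough sketch of what selecting them can look like in practice (the option and flag names below are as I recall them from the 2021 documentation, so treat this as a sketch and check the manual and gmx mdrun -h for the exact spelling):

```
; stochastic cell rescaling as the pressure coupling method,
; usable for both equilibration and production
pcoupl = C-rescale
```

```
# request non-bonded and PME work on the GPU; with GROMACS 2021 this can now
# also be used for free energy runs that perturb charges
gmx mdrun -nb gpu -pme gpu
```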
The PME offload for free energy has also mainly been contributed through external work, so I want to say thanks for that again, and we hope it will be of major impact for people that do free energy calculations.

One last thing when it comes to hardware support is that we have also extended the port for the Arm architecture by adding SVE (Scalable Vector Extension) support. This has again been contributed through external collaboration. The only thing that you need for this is basically a compiler and toolchain, which you should be able to get if you are interested in building and running on Arm. For now, you can only choose the SVE vector size at configure time, but you can configure it for any value that fits your own device. We have been in talks with the contributors that helped us implement this about changing this to maybe have run-time support for the different vector sizes, but that is still something we're discussing and not fixed yet.

And just as a final thing, you now have fully functional non-bonded and also listed (bonded) forces available through an API with NB-LIB, an external project developed in collaboration with PRACE. This ships now with GROMACS 2021, and you can use it to program mini-apps for testing algorithms. You can try it out, using it as a basis to set up topologies and interaction types. We hope that we can extend the use of this in GROMACS by adapting our own topology formats to it, getting away from the legacy input processing, and also simplifying simulation setup with this kind of future API.

Good, enough about new features; now about things that we removed, and the good thing is that we haven't removed anything this time. Last year we removed quite a lot, because the group scheme finally got removed after being deprecated for years. So for now, nothing has been completely removed from GROMACS, but there are still a few features that are not working because we haven't added support for them again yet. This is mainly membrane embedding, which we hope to get back at some point. And, we are sorry: even though we promised, and people tried to get it implemented again, we haven't been able to add back the user tables for the non-bonded interactions yet. This is work in progress, and I promise that it's going to be in 2022, come hell or high water, and that you will finally be able to supply your own tabulated interaction forms again.

There are a few new requirements for this version. We have been a bit aggressive when it comes to the C++ standard that we require: we now require C++17, after requiring C++14 last year. This is mainly because it makes our life much, much easier when it comes to development and making sure that the code actually works. We still require only CUDA 9, which is C++14 compatible, but we are thinking about bumping this requirement for the next release, again to make our life easier and to make it possible for us to support newer features. Another thing is that you need a slightly more modern CMake now to configure and compile GROMACS. It's not that new, and you should get this version with any recent operating system installation that I can think of.

There are a few things that we have deprecated for the next release, and I think some of them may be a bit contentious.
One of them is that we really think that the default file name setting for output files (the -deffnm option) should be removed, because it provides nothing but hassle for us and makes it very difficult to robustly implement restarting and file handling at a low level. So we think that we have to remove it and maybe provide different kinds of options to replace it. Another thing that has been officially deprecated, but will probably not be removed yet, is OpenCL. We hope that we can provide support for the devices that are currently targeted by OpenCL using the SYCL standard and something like hipSYCL, but as long as OpenCL is needed to run GROMACS on those devices, it stays; the moment we can remove it, we will remove it, because it's very difficult to maintain and it uses a different language standard, which makes it difficult to use modern C++ features for this kind of GPU code. We also removed some SIMD architecture support for things that are no longer relevant in the HPC space. That's probably not of interest to you, unless you happen to own one of those systems; GROMACS will still run on them, but it will use the plain C SIMD kernels instead of the accelerated kernels. Something that I think will go away with the next version is the mdrun-only build, because it has done its job. It's no longer needed, because you can run everything through the gmx wrapper binary, and you don't really need a separate mdrun binary anymore, we think. We're also going to remove support for hwloc API version one, because we want to take full advantage of the features that are available with API version two, which should have widespread adoption by now. Some other things are probably not of much interest. The main one, I think, is the constant acceleration feature, which has been broken forever and should probably not be used, because we can't really say if it's working or not. We actually got a user report today that says it's working, but I don't really think we can put any effort into making sure that it works again.

Yeah, as I said before, we have a few co-design projects that I just want to plug here again. This is mainly with NVIDIA, trying to get support for their new compute devices in. We are working together with Intel, and maybe soon AMD, to get support for Intel GPUs and for AMD GPUs. And of course we're working together with people that are interested in getting GROMACS accelerated on Arm chips.

Good, we're coming to the end of my slides here, ending with some slides on the long-term plans that I probably should have updated, because we have multiple time stepping support now, but we are not yet able to completely replace virtual sites with it. So I hope we can improve this a bit more and get them completely replaced. We're still working on improving support for the Python and the C++ APIs, to make it easier to use GROMACS as a library. And we now actually provide a set of GROMACS containers, so you can run GROMACS directly through Docker or Singularity if you're interested in this, and they are updated through our CI testing, so you can run the same things we have in CI if you want to make sure that things actually work. A few things are not here on the list, like support for newer methods such as constant-pH calculations, but I think you should just stay tuned for our development and look out for the beta releases later this year. And if you're interested in following our development on GitLab, I invite you to have a look there. Good.
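On the container note above, a hedged example of what running one of the provided images can look like (the image name and tag here are illustrative assumptions; the GROMACS documentation lists the officially published containers):

```
# pull a GROMACS container image and check the version inside it
# (image name and tag are placeholders)
docker pull gromacs/gromacs:2021
docker run --rm -v "$(pwd)":/work -w /work gromacs/gromacs:2021 gmx --version
```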
The development of GROMACS is of course the work of quite a few people that I'm not going to name here. I just want to highlight again the development leads, Erik, Mark and David, who started the project and have been leading it for a long time, and the people I work together with in Stockholm: Christian, Szilárd, Joe, Artem, and Andrey, our new developer, whom I forgot to put on this slide. Yeah, that's it. I hope it was an interesting webinar for you, and I think I will give it back to Alessandro and Julian for the Q&A session.

Yeah, thank you very much, Paul and Berk, for that very interesting talk. Some of those new features that you were talking about are definitely very interesting, and I look forward to trying them. While we're still waiting for questions, I'll take this opportunity to sneak in my own quick question to Berk, which is: have you done any performance testing of the multiple time step method to see how much of a speed-up you can get on some basic protein system? So how much faster do simulations go?

Well, the problem here is that it depends completely both on your system and on the hardware you're running on. In the worst case, you might actually get worse performance, because if you have, for instance, multiple devices, a CPU and a GPU, and one of them is fully busy and you remove work from the other one, then you actually won't gain anything, but there is a bit of overhead. So then you get a few percent overhead; that would be the worst case. The best case is, of course, where you're limited by PME, and then you don't need to do it every step and you can get a high gain. That is, as I tried to say, the massively parallel case where you're always limited by PME. But even then you might have separate PME ranks, for instance, which would run idle, and then the gain is only the wait time on those, whereas if you would be running PME on the same ranks where you're doing other computation, then you can actually reduce the time on a rank. So in the best case, let's see: I guess the best case is on a GPU where you would be spending maybe a third of the time or more on PME. So if you can halve that, you could gain a sixth of the time, or maybe a bit more. In parallel, you could gain even more than that. So the gain is, I think, often not so high, but 10 or 20 percent is certainly possible; it depends completely on both your system and the hardware you're running on. And in that sense it also affects how you want to run the simulation, which makes things even a bit more complex. We should automate that, by the way.

That sounds, I mean, 10 to 20 percent sounds brilliant on long simulations and large simulations, right? Yes, yes, and it's nearly free, so to say. So if that applies to your situation, sure, do it, yes. Great, thank you very much for that answer. The next question we have is from Bert de Groot.

All right, so yeah, thanks for the great work. I was just wondering about the constant acceleration: what exactly is broken? We've used it quite a bit and it seems to work as advertised. That's a surprise to us, because we also had reports that it's broken and that it's not working. There are some fields that are just not used by it that should be used, and I think somewhere a force should be applied or an energy should be checked, and it's just not done anymore since 4.6, I think. Okay, I mean, the functionality is then not completely covered by the code, I think.
So it would be a pity if it would be removed. The thing is, we really need to put in tests to actually be sure that it's implemented correctly and that the method works as it should. We haven't removed it yet, of course; we can revert the removal in the master branch. But I would be interested to hear how it's working for you, because we have an issue open for this, and it suggests that it should not be working, because the part where the acceleration is applied is no longer done correctly. Yeah, we can talk offline and share the tests that we did as well. That would be great. Yeah. Great, thank you very much for that question and the answer.

The next question we have is from, sorry, one moment, is from Victoria Hill: are there any situations where the multiple time steps may not be appropriate?

Well, there are many, but that depends on how you set it up. With the parameters that I showed, you only do PME every four femtoseconds: if you have a standard simulation setup and you only turn on the option with all the defaults, you would only be doing PME every four femtoseconds, and I think that's quite standard; many people in the community use that with other codes, so that should be unproblematic. But you could, of course, choose different forces there, or you could choose a larger long time step, so not a factor of two larger than the normal time step but much more, and then it quickly gets inappropriate. So there's a quite small regime where it works. I already found out quite late that, for instance, the scheme I had planned to use is not fully stable, so you need quite long simulations of more than 100 nanoseconds to see that hydrogens become unstable if you choose other force terms as well to update less frequently. So this is quite tricky. There's a lot of literature already out there, but I think there's space for more investigation here, to see what works and what doesn't. I'm planning to write a manuscript on that, but that requires a bit more study, to see which things might be appropriate to do and whether they give side effects. Usually things tend to cause instabilities and the simulation crashes. So I haven't seen any cases where I could get the simulation to run stably for a long time and get wrong results or wrong sampling; it usually resulted in a crash. So that's the good part there. But if you would do PME, like some people try to do, only every eight femtoseconds or so, I think then you actually would get incorrect results which do not cause instabilities, so that's dangerous. Great, thank you very much for that answer.

The next question we have is from Carsten Kutzner. Yeah, sorry, I missed part of the earlier question, so it might be related. My question would actually be: if you do multiple time stepping, and you would compare a parallel simulation to a simulation with four femtoseconds with virtual sites, would you get any performance benefit in parallel then with PME, because you would normally update PME every four femtoseconds anyhow, or would you be able to do it less often? No, so for the performance gain, the scheme that would be nearly equivalent in performance, or slightly faster, was the one I mentioned at the bottom of the slide, where you do PME, the non-bonded pair interactions, the dihedrals and the angles every four femtoseconds.
So that would be about equivalent in performance to virtual sites, or slightly faster, but it turns out to be slightly unstable, or yeah, unstable very infrequently. So that's the equivalent. PME only is a much smaller gain, as I answered before. Yeah, okay, I just wanted to get around the communication bottleneck in highly parallel simulations, I would think. Well, you also do much less PME, so especially on a GPU, PME can be expensive. So there's some gain to be had there, which can actually be quite a lot. But it's not the factor that we get with virtual sites, a factor of 1.7 or 1.8 or so. That's getting less and less on GPUs, and that's one of the main reasons for going to multiple time stepping: it works better on GPUs in principle. Okay. But then we need to find a good scheme that gives you more benefit than only putting PME on the slow level, and I don't know if there is such a scheme yet. Okay. Okay, thanks.

The next question we have is from Luca Monticelli. Again a multiple time stepping question: does it have any effect on energy conservation? I imagine that in the cases where it crashes, then yes, definitely, and presumably in others. Well, it's a symplectic algorithm, as I wrote, so it preserves energy extremely well. You don't see anything in the energy conservation in practice, even for the setups that might crash quite quickly. So I would say it often preserves energy really well. Of course, if you do things really badly, you might see something, but no: since it's symplectic and you're acting on slow degrees of freedom, it usually preserves energy as well as not using it. That makes sense, thank you very much.

The next question is from James Caruthers: how does the performance and precision of the AWH method for free energy calculations compare to standard parallel simulations or expanded ensembles? (I imagine 'of' rather than 'or' there.) So, about precision: that's a matter of how long you run your simulation. Any proper algorithm or method should converge as close as you want to the exact answer by running longer, and that's the case for AWH as it is for all the other methods. So you can get as accurate as you would like to be by running longer; the only question is which method is more efficient. The only exception to that is thermodynamic integration, where you have quadrature errors which don't go away if you simulate longer. But methods like BAR, normal expanded ensemble and AWH converge to the exact answer, as long as you throw more and more sampling in. So there's the question of which method is more efficient, which is a difficult question in general and might depend on the system. Thank you very much for that answer. And I would... Sorry, the question also includes performance. Yeah, so performance, as I said, depends on the system. We're working on a manuscript, and there it seems like it's as good as BAR or slightly better, but that's of course only for the few systems we tried. So that's not a conclusive answer, but it seems to be at least as good as BAR. That's what I can say for standard free energy calculations, or alchemical calculations. Great, thank you very much.

And our final question is from Yasser Almeida. Yasser asks: have you considered including the installation of the Python API directly during the building of GROMACS? Currently, this is done as a second step after you've built GROMACS. Well, you can install the Python API directly with your GROMACS build.
You need to set, I think it's -DGMX_PYTHON_PACKAGE=ON, the Python package option, in the CMake invocation for this. Exactly, yes, that was the CMake magic. That should also be documented somewhere in the gmxapi installation documentation, but maybe it's a bit hidden and we should make it more prominent. But yeah, you can install the Python API that is shipped with your GROMACS build this way, directly. Great, thank you very much. And we have thanks from Yasser.

And there's actually been a final, final question that's made it in, from Mandar Kulkarni. The question is: can we use the RESPA algorithm, I imagine the multiple time step RESPA algorithm, but that's not specified, can we use the RESPA algorithm with metadynamics? I can't answer that conclusively, because I don't know how metadynamics is coupled to GROMACS; that's not something we do, that's something that the PLUMED team does. So I don't see any fundamental issues there, but I don't dare to answer that. Great, thank you very much for that answer.

And with that, that concludes the questions that have been asked. Thank you again, Berk and Paul, for taking the time to talk us through all of the new features of GROMACS 2021 and for giving us an update on the state of GROMACS now and in the near future. And thank you everyone for coming to this webinar. I hope you all have a good day.