Hello everyone and welcome to the next edition of the BioExcel webinar series. My name is Rossen Apostolov and I will be today's host. Today we have prepared a very interesting presentation, something different from our usual webinars: we will look into software and techniques for visual exploration of biomolecular systems, in particular using virtual reality. And it's my great pleasure to have with us today Marc Baaden, one of the developers of these applications. For those of you who are new to the series, I'd like to give a very brief overview of BioExcel. BioExcel is a centre of excellence for computational biomolecular research, which was established three years ago. It's a European distributed infrastructure, and we focus our work on three main directions. The first one is the development of biomolecular software: we work with three leading software applications, GROMACS for molecular dynamics simulations, which many of you are probably familiar with; HADDOCK, one of the most popular docking and integrative modelling packages; and CPMD for hybrid QM/MM simulations. We are improving their performance, efficiency and scalability, and extending the applications with new features. We also work on improving usability and the productivity of researchers, for which we collaborate with several notable workflow platforms, such as KNIME, COMPSs and Galaxy, to devise efficient workflows. And finally, we provide a lot of training and consultancy to both academia and industry, promoting best practices and training end users. Interaction with the wider community happens via several interest groups, each focusing on a sub-domain of the wider area of life-science modelling and simulation. For example, we have interest groups on free energy calculations, integrative modelling, biomolecular simulations for entry-level users, and several others.
So I encourage all of you to get in touch with us, to visit our discussion forums where you are welcome to ask questions, and to visit our YouTube channel where we have recordings of our previous webinars, 13 of them so far. At the end of today's webinar you will be able to speak directly to Marc and ask your questions. For that, at any time during the presentation, feel free to use the Questions tab on your GoToWebinar panel, where you can type your question. After Marc's presentation, if you have working audio, I will let you speak directly; if we cannot connect, then I will read the question on your behalf. And you can always come to ask.bioexcel.eu, our discussion forum, for further questions. With that, I'd like to present to you Marc Baaden, who is a researcher at the Centre National de la Recherche Scientifique (CNRS). He is an internationally recognized researcher in the area of membrane protein modelling and simulation, especially applying high-performance computing techniques and bridging the gap between molecular modelling, simulation and bioinformatics. He has done extensive studies of membrane proteins in complex biological systems using large-scale simulations, and one of his recent areas of interest and success is scientific visualization, for which we have invited him today. He has many publications in high-impact journals such as Nature, Structure and others. With that, I'd like to welcome Marc. I will now give him the presenter's screen.

Okay, hello everybody. I hope this works. The presentation seems to have stalled. Yes, we see clearly the full screen. Okay, give me a second, I think I need to restart it. Okay, so thank you very much for the introduction, Rossen. As you said, my name is Marc Baaden, I'm a researcher at CNRS in Paris, France. Today's webinar is about how to benefit from virtual reality approaches for the study of molecular systems.
So first I would like to express my gratitude, of course, to BioExcel for making this webinar happen and for providing all the technical support to implement it. I guess you may have heard a lot about virtual reality lately, because there is a lot of hype about it, but this is actually not a new topic, so I would like to start by looking back. As you can see here, early head-mounted displays go back to the 1960s. The technology we are able to use today builds on that; the hardware is new and better, has many improvements, and provides quite an immersive experience. But the roots of applying virtual reality approaches to molecular systems definitely go back to the last century. Here you have a few more examples of devices and setups that existed many years ago; for instance, down here you can see a room-scale molecular docking installation. These examples are taken from a PhD thesis that's referenced here. At the end of the 90s, the most popular and widely used virtual reality device was the CAVE, which is a kind of room equipped with several projectors all around you; you would wear glasses, like you can see here, and have some device to interact. One issue is, of course, the cost and the space necessary for such an installation, as well as the fact that scientists would have to schedule a CAVE session in advance and move away from their usual workspace, an activation energy barrier, so to speak. Another general issue, which actually still exists with current headsets, is the difficulty of accommodating several users. Typically, except for a few setups, only a single stereoscopic projection reflecting one user's head position is used, providing a fully immersive experience only for that selected user; the other users see a projection that is not so immersive. So why do you read so much about VR nowadays, when it has been around for such a long time?
I think there are several combined factors that lead to the current situation. First, of course, we had tremendous progress in computer graphics power and much improved hardware in terms of latency and precision of the tracking. The form factors improved: things are smaller and not so heavy anymore. And of course the prices dropped greatly. You just saw a CAVE, which costs multi-million dollars, whereas nowadays a headset provides quite a good experience for just a few thousand euros, I would say. So the VR experience is nowadays much smoother and comes at a significantly reduced cost. Of course, one must not forget the software. Software-wise, the availability of easy-to-use software development kits that integrate with widely used platforms is one major advantage. Unity, shown here, is a platform used for designing video games, and it provides, for instance, access to much of the VR gear that you can buy nowadays. Much of today's talk will focus on a molecular visualization tool that we designed in my lab, called UnityMol, which is based on that Unity game engine. Because Unity provides SDKs for most recent VR hardware, it was straightforward to extend our initial viewer with VR functionality. Generally speaking, such game engines bear many interesting features for scientific software development. You basically use a single project from which you can generate builds for a variety of platforms, among which standard executables for Windows, Mac and Linux, web-based builds, and so on. Furthermore, the game engine helps you hide the complexities of computer graphics programming, like OpenGL and things like that, and it facilitates the implementation of a scientist's ideas. The majority of contributors to UnityMol are actually not computer scientists but physical chemists or bioinformaticians.
And actually, when you develop such a tool, you can run your application directly from within the Unity Editor, tune parameters on the go, and debug while it is running. That typically shortens the trial-and-error development cycle. Now, here's a first glimpse of how UnityMol's virtual and augmented reality implementations might look. On top you can see Xavier, our main developer, using a setup with an Oculus Rift headset onto which a Leap Motion is attached, so that his hands and fingers are tracked and he can interact with the virtual molecular scene. He sees the same thing as shown here on the screen, but of course immersively, in 3D. Below you have an equivalent setup that uses the Vive headset, so you no longer use your actual hands but the controllers of that headset for the interaction. And here on the bottom right is an augmented reality experiment using the HoloLens; it basically shows a virtual protein that floats in a seminar room. As you can see on this slide, the diversity of devices provides quite a range of implementations, going from classical visualization to augmented to virtual reality. So you can basically choose the degree of immersion and which type of hardware you want. Now, what would be the scientific motivation to use such tools in research? So this slide... sorry, it was too quick I think. It was supposed to be an animation; it did not work. Anyway, that slide is just supposed to show a user in front of his data. I think that's one driving force behind visualization and analysis of molecular simulation data: trying to make sense of the ever-increasing data deluge. Another aspect concerns the nature of the very data we are scrutinizing. Molecular structures themselves are complex three-dimensional architectures, and understanding their shape requires adapted stereoscopic, if not immersive, tools. That's actually not so new.
And a long time ago in this galaxy, people have used tools like this for teaching, for instance, to be able to convey these stereo 3D shape issues to their students. Nowadays the need for such immersive data exploration is intensified, because we have so many different sources of data of very diverse nature. Databases are an emblematic example of the situation: many experimental data sources are now easily available, and they are usefully complemented by databases derived from in silico experiments such as molecular dynamics simulations. Here you have a range of such databases, and they are often also combined through different sites that aggregate information in mashups. My focus today will be mostly on static and dynamic molecular data, in particular related to molecular dynamics trajectories. So let us first recall the typical molecular dynamics workflow. You would typically start by producing your raw data on a supercomputer. Then, of course, you need to operate some data management and backup, and start to reduce that data to chunks that can be processed and handled in the lab, because mostly people transfer the data to their labs first. The final data processing typically happens on end-user workstations, where you have a basic cycle combining visual inspection and data analysis, making decisions for further processing and visualization. My presentation just stalled; let's see if I can rescue it here, where it should have entered "further processing". And of course, if you follow this cycle, you might form some hypothesis and then move back and run more simulations, and this goes on and on. And the idea now... sorry, Keynote seems to have some issues with this slide. So let's just look at it here. The idea, then, is basically to make this whole cycle interactive and to immerse the user in his data.
And so, finally, to enable the scientist to use advanced algorithms and interactive exploration tools to discover non-obvious patterns in that data set. And this does not actually stop there: it goes on in what I would call a visualization-driven scientific discovery cycle, where you start from your raw data and go through this cycle to come up with hypotheses that you might test. Hopefully you end up with some insight that you want to publish and explain to your colleagues. And then, again, you need visual tools to convey your insight to your fellow scientists and the readers of your article or poster, by designing the best possible representations and capturing the essential aspects of the discovery. Here, for instance, you might want to find the best viewpoint illustrating a certain feature of interest, and so on. So what can we add to these processes? I would summarize it as immersion. Let me define what I call immersion here, because that term has different meanings in different communities. First, I mean immersion in the structural world of molecules, which are intrinsically three-dimensional; orienting oneself in such spatial architectures is not trivial at all, and VR decisively helps with this task. The second essential aspect of this immersion is the possibility to be an actor and manipulate the molecules, or more generally the data you are examining, rather than just being a passive observer. You can see an example here of an arcade machine that we built for hands-on folding of RNA molecules, which we provided to school children, for instance. Or you may also know the Foldit game, which has also been used for research purposes. So before we delve further into such applications, let us see whether there are any particular issues with these VR approaches.
There are a few things one needs to pay particular attention to in VR, and that need to be adapted for a good experience. You may have heard about so-called cybersickness, which is a form of motion sickness. To avoid it, it is very important that latencies are as small as possible. Imagine you are in a virtual scene and you turn your head, but the scene only slowly follows your head movement; that makes you sick very quickly. For example, in our case we had to work on the performance of our GPU shaders in UnityMol to reduce such latencies. Another important aspect is to provide visual cues of the surroundings, for instance by providing landmarks such as a floor, a room, or skyboxes. That helps a lot in providing a reference for where up and down is and where you are in the virtual world. In contrast, if you imagine floating in empty space next to your protein, that makes you sick very quickly. I'm happy to report that we haven't had any cybersickness incident so far, with, I would guess, about 100 people having taken a test drive in UnityMol VR. The only inkling of such an issue has been observed when people have had the headset on for quite a long time, say more than 90 minutes. To improve the user experience in VR, it is also essential to adapt the software so that it is easier to use and actually assists the user in the manipulation. A few examples are related to an optimized use of input devices, so that the manipulation feels intuitive and natural; adaptation of the graphical user interface and menu system to the VR context; and, for instance, tuning navigation metaphors, optimizing them to guide the user. The last intrinsic limitation I'd like to mention, and I already alluded to it, is that some devices, in particular the headsets, are single-user centric. It's difficult to imagine you would have a group meeting with 10 people each wearing a headset, at least not in the same room.
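The latency requirement mentioned here can be made concrete: headsets such as the Rift and Vive refresh at 90 Hz, leaving roughly 11 ms to render each frame. A minimal sketch of that budget arithmetic (in Python for illustration; UnityMol's shader work is of course not done this way, and the function names are ours):

```python
# Illustrative sketch: the per-frame time budget a VR renderer must meet
# to keep the scene locked to head movement. 90 Hz is the refresh rate
# targeted by the Oculus Rift and HTC Vive headsets discussed in the talk.

def frame_budget_ms(refresh_hz):
    """Maximum time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

def meets_budget(frame_times_ms, refresh_hz):
    """Check whether every measured frame time fits within the budget."""
    budget = frame_budget_ms(refresh_hz)
    return all(t <= budget for t in frame_times_ms)

budget = frame_budget_ms(90)            # about 11.1 ms per frame at 90 Hz
ok = meets_budget([9.8, 10.5, 11.0], 90)
slow = meets_budget([9.8, 14.2], 90)    # 14.2 ms would mean a dropped frame
```

Any frame that misses this budget is either dropped or re-projected, which is exactly the lag between head motion and scene update that triggers cybersickness.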
Of course you can unite avatars of many people in the virtual scene and provide tools for them to exchange, maybe even quite naturally using language and gestures, but it's still very different from a face-to-face meeting, which is also why augmented reality approaches bear some intrinsic advantages. So let me illustrate a few of the points I just mentioned. Here's what I mean by providing context and landmarks for spatial reference. You can see here a membrane protein with a five-fold symmetry. So we have a natural direction, which is the symmetry axis, and at the same time this direction is of functional importance, because it is the ion channel pore of this molecule. If you then define that the extracellular part of the membrane is on top, you have a clear way to orient the system up and down. This is also visible in the background skybox, which shows the sky and the moon at the top and clouds at the bottom, which helps with the orientation. You can also use this information for the navigation, thereby making the navigation metaphor aware of the context of the protein we are looking at and using its symmetry as a natural guide. Let us look at a few examples of that, shown and illustrated here. These are four navigation tools that we adapted to this protein's context. First, up here, is an external exploration where you may spin the protein around its axis, move up and down, and move closer or further away. Then, for looking at the ion channel inside, we can use the analogy of being in a lift that goes up and down along the channel axis and, in addition, may turn around the axis. Or we can exploit, down here, the five-fold symmetry: if you were looking at an interesting site, you could easily jump to any of the four equivalent sites linked by symmetry for an easy comparison.
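The symmetry-jump navigation described here amounts to rotating the viewpoint by 360°/5 = 72° about the channel axis. A minimal sketch of that idea, assuming the symmetry axis is the z axis (illustrative Python, not UnityMol code; the function names are ours):

```python
import math

def rotate_about_z(point, angle_deg):
    """Rotate a 3D point about the z axis (here: the channel/symmetry axis)."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def symmetry_equivalent_viewpoints(camera_pos, n_fold=5):
    """Camera positions looking at the n symmetry-related sites."""
    step = 360.0 / n_fold
    return [rotate_about_z(camera_pos, i * step) for i in range(n_fold)]

# Five viewpoints, 72 degrees apart, around a camera placed off-axis.
views = symmetry_equivalent_viewpoints((10.0, 0.0, 5.0))
```

Because the camera distance and height are preserved, the user lands on each equivalent site with exactly the same framing, which is what makes the side-by-side comparison easy.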
The last example, down to the right, assumes that you have selected a given residue of interest; the software then calculates the camera path to get there and ends with the best point of view on that particular residue, which you might otherwise take some time to find. In these examples we did not change anything in the protein structure, but you can also imagine manipulations that help understand the actual topology and architecture of these objects, which is shown here. You have certainly come across so-called exploded views, which are quite often used in schematics of machines. Yeah, okay, it's coming. We can apply the same idea to a molecule in order to examine, for instance, interfaces that would typically be hidden from view because of the packing. In our case, a natural way to split is simply to translate the subunits of the protein away from the axis. Here we have a simple demo with the five subunits moving apart. You can imagine, for instance, that in virtual reality, if your head gets very close to the molecule, it could naturally open up so you don't bump into it, and you can actually examine the hidden regions and hidden interfaces. This example concludes the VR adaptation examples, and that brings us to how we explore molecules in virtual reality. The most immediate application for VR is to look at these invisible objects as if they were life-sized artifacts that you can touch and examine. You can get a sense of crowdedness, spatial relations, interactions and so on. In UnityMol we implemented such a visual exploration facility together with an industrial partner from biopharma, in the context of examining drug-binding sites.
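The exploded-view splitting just described, pushing each subunit radially away from the symmetry axis, can be sketched as follows (illustrative Python, not the UnityMol implementation; function and parameter names are ours):

```python
def explode_subunits(centroids, factor, axis_origin=(0.0, 0.0)):
    """Push each subunit centroid radially away from a vertical axis.

    centroids   : list of (x, y, z) subunit centres
    factor      : 0.0 = packed; larger values spread the subunits apart
    axis_origin : (x, y) position of the symmetry axis
    """
    ox, oy = axis_origin
    exploded = []
    for x, y, z in centroids:
        # Radial displacement in the plane perpendicular to the axis;
        # z (along the pore) is left unchanged.
        exploded.append((x + factor * (x - ox),
                         y + factor * (y - oy),
                         z))
    return exploded

packed = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0)]
spread = explode_subunits(packed, factor=0.5)  # 50% further from the axis
```

Driving `factor` from the head-to-molecule distance gives exactly the behavior described in the talk: the closer you lean in, the further the subunits open up.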
You can see here the controllers that the user holds in his hands, which are also represented in VR, and the menu system that you can interact with through a kind of laser pointer metaphor. You can then change a range of visual parameters, displace the molecule, or simply walk around the object to get the view from all sides: simple manipulations. And you see again that we added a scene here, a room, to have this reference of where the bottom is, where the top is, and so on, rather than floating in empty space. As I already mentioned, this is very single-user centric, but let us look at some possible extensions. This is a capture of a VR session in AltspaceVR, using a specific extension called AltPDB (altpdb.info, written down here), first developed by Tom Skillman. Tom and another colleague from the US, Nick Cramer, were kind enough to introduce me to this platform. So here we had a sort of Skype in VR around my favorite molecule. I am explaining some features of the molecule, I think I am the guy in yellow if I remember well, so you can see the head movements of the avatar and also how the movement of the devices is transcribed. You can actually point at features of the molecule, and, what you cannot hear in this capture, you can have an audio exchange with a colleague, just like in Skype. So it is quite natural to use this to discuss a molecular scene. We should be adding such functionality to UnityMol over the coming months as well. Of course, static molecules, as you have seen so far, are fun for a while, but then one may also think about modifying and animating them. That is basically the next part I would like to talk about. The idea is, of course, that you could interact with the models while they still behave in a chemically and physically realistic way. A natural way to apply this is, for instance, in the context of integrative biology.
Interactive refinement of structural models, for instance using available experimental data, is a very useful tool for model building. By combining the experimental data with human expertise and interactive manipulation of an underlying physical model, you can, for instance, fit molecular models into SAXS or cryo-EM envelopes. That's particularly useful because experimental data is often ambiguous and allows for several solutions, so the interactive assessment by an expert user is extremely helpful in trying to find the most plausible models. As for the physical model, we actually found that coarse-grained representations are very robust for such a purpose. We now also use these approaches routinely for training our students, which is shown here as an example. We have put together a contest where the students have the task of folding RNA molecules from an extended form. They do it through interactive modeling in UnityMol; you can see some screen captures here. Interestingly, we found that they use quite different strategies than automatic approaches such as replica exchange simulations. So far we implemented this experiment with a simple mouse and 2D displays, because that scales to a classroom setup. The students already did quite a good job on the folding, but we would now like to see whether adding VR immersion will enable them to fold even trickier targets. To motivate the students, they can upload their suggested results to a kind of contest server and then compare with their fellow students to see who gets closest to the known solution. That makes them want to perform better and continue. Translating such an experiment to VR raises, of course, a crucial point, which is how we interact with these objects in 3D. That's where the VR controllers come into play. There is a range of controllers that you can use with VR; here we just show the Vive and the Oculus ones.
In UnityMol, we actually use an abstraction library to try and provide a similar look and feel for the different controller types. You can then display the molecule, grab and pull on atoms, or implement more complex deformations if needed. So let's look at how an interactive MD simulation looks in VR. Here you can see the VR implementation with just a small peptide. It's actually in a water box, but the water is not shown, and you can manipulate it with a laser pointer: you can select an atom, pull on it, or eventually pull on two ends and unfold it. At the same time, you have a few plots that update live to give you information about the total energy, hydrogen bonds, and things like that, which can guide you in the manipulation. This is just a proof of principle, of course, for education. Once you have that basic functionality working, you can start improving it and making the user experience better. For instance, the jiggling of the molecule in VR is not always very pleasant, so one might think about smoothing the motion or other effects like that. Another type of modeling that you can do in VR could be docking. Here you can see some protein-protein docking implemented in UnityMol VR: you attach the proteins to the VR controllers, you get an idea of the energy, which is displayed close to the device, and you can also see hydrogen bonds, or, when there are clashes, a kind of red explosion. That helps you in trying to find a good arrangement. You can, of course, extend this to many other types of modeling, but I think these are some very standard ones. Now, let's go back to the bigger picture. Remember, I started with this visualization-driven discovery cycle, and I said I would talk about the simulations.
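How does a pull on an atom reach the running simulation? A common scheme in interactive molecular dynamics is to couple the selected atom to the controller position with a harmonic spring; whether UnityMol uses exactly this form is not stated in the talk, so treat this as an illustrative sketch (names are ours):

```python
def spring_force(atom_pos, target_pos, k=1.0):
    """Harmonic restraint F = -k * (x_atom - x_target).

    This is the usual way an interactive-MD front end transmits a user's
    pull to the physics engine: the controller sets target_pos each
    frame, and the resulting force is added to the atom's force field
    forces, so the molecule still responds physically.
    """
    return tuple(-k * (a - t) for a, t in zip(atom_pos, target_pos))

# Atom at (1, 2, 0) being dragged toward the origin with stiffness k = 2.
f = spring_force((1.0, 2.0, 0.0), (0.0, 0.0, 0.0), k=2.0)
```

The stiffness `k` sets how hard the user can tug: too soft and the pull is ignored by the dynamics, too stiff and the molecule is dragged unphysically.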
So then, of course, one immediately might think about analyzing trajectories and data derived from the simulations. Here is just some idea of how such a new interface might look. In the future, the idea is to integrate many of the current tools that we have into a generic VR-aware context, to be able, for instance, to analyze trajectories and data sets while forming hypotheses. Here is a non-VR example where you work on a display wall at my institute: we use a database that is queried, and the corresponding protein structures are displayed in 3D. This one is not immersive, but it can be used by multiple users, and you can actually combine it with a VR exploration, moving back and forth. You could imagine a single user first exploring the system in VR, identifying interesting features and selecting viewpoints, and then later playing them back with a group in front of less immersive hardware such as a display wall. Here you have a small comparison of the typical features of such a display wall compared to a headset. The display wall typically has a higher resolution and a larger size, but of course it is less immersive. On the other hand, it can accommodate more users, but you have no head tracking, at least not in our installation. So there is a kind of trade-off, and you can imagine picking the right hardware for the setting that you intend, whether you are in a working session with other people or just individually want to explore something. We then combined this with a tool we built a few years back for trajectory analysis, called Vitamins. That's a tool that allows you to look at several trajectories at the same time. So here, since we are blocked again... okay, what happened to Keynote? Let me try once more. Okay, here it comes. So here you have a kind of video recorder metaphor, and you can cycle through five trajectories at the same time.
This one was looking at wetting and de-wetting transitions of the ion channel I showed you earlier. The interesting thing is that all these plots are interactive: when you select something in one plot, this affects what you see in the 3D view and the subsequent analysis that you carry out. We then started to experiment with a transition of such a tool to the VR context, using it in a CAVE. Here you can see my colleague, Michael, manipulating a molecule in a CAVE with a large haptic arm that can move through the whole CAVE. You can activate the menus I just showed you on the Vitamins slide through a tablet, moving between the 2D and the VR context. This is not ideal, because plots and data analysis are typically tricky to implement in VR, but using an artifact such as a tablet makes it workable. We have also tried to improve on that by designing an assisted tool, for instance using semantic links, as illustrated here. You would first define, using ontologies, what a molecule is, what the analyzed data looks like, and how both are linked. Then you can use something like voice recognition to trigger actions, because they can be interpreted using these ontologies: for instance, selecting some molecules from the graph here, then performing some further selections in 3D, and then updating the analysis based on the selection you just made, and things like this. This was an early prototype. We recently started to apply this to a research question about the redox biology of a green alga called Chlamydomonas reinhardtii, where we modeled the full proteome of that alga in order to make structural interpretations and understand how these protein structures may explain the redox modifications that occur in that system.
We would actually like to be able to predict such redox modifications from the structures, and then be able to understand that network. For that purpose, we have been working on an immersive data mining tool as an extension to UnityMol. Here you can see the underlying proteomic database on the left, which is projected through a web browser onto a plane in VR that you can manipulate with a laser pointer metaphor. On the right, you have the proteome network that you can also examine. We can then manipulate these abstract data: as you can see here, the network is manipulated, or here it is just experimental data that has been abstracted. What we want to understand is how certain redox modifications cluster in a spatial projection that we can choose among several options. The first task in such a study would be to query the underlying database. Here you have an example of the typical steps one would use. You choose a data set, here the nitrosylations that were observed experimentally on that proteome. Then we made a further selection on the cysteines that are typically modified, to look at the buried ones, and then tried to see whether they have any particular secondary structure. Combining that with visual inspection, we came up with the hypothesis that this may be linked to a specific fold of the proteins when the cysteine is buried, the so-called Rossmann fold. That we can then compare to a control data set that has modifications other than nitrosylations. You can then click on one of the selected data points and get the view in 3D. This is what the user would see: on one side the database view, on the other side the 3D view. This just recapitulates the filtering clauses I mentioned and how we arrive at the data we finally analyze. Afterwards you can check whether the hypothesis holds by looking at the statistics.
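The successive filtering clauses described here (modification sites, then buried ones, then secondary structure) can be sketched over a toy record set; the field names and values below are hypothetical, standing in for the real proteome database schema:

```python
# Hypothetical records standing in for the proteome database entries;
# the actual field names and vocabulary of the web tool may differ.
sites = [
    {"mod": "nitrosylation", "exposure": "buried",  "sec_struct": "sheet"},
    {"mod": "nitrosylation", "exposure": "buried",  "sec_struct": "sheet"},
    {"mod": "nitrosylation", "exposure": "exposed", "sec_struct": "helix"},
    {"mod": "other",         "exposure": "buried",  "sec_struct": "helix"},
]

def filter_sites(records, mod=None, exposure=None):
    """Apply successive filtering clauses, as in the database query panel."""
    hits = records
    if mod is not None:
        hits = [r for r in hits if r["mod"] == mod]
    if exposure is not None:
        hits = [r for r in hits if r["exposure"] == exposure]
    return hits

# Clause 1 + 2: experimentally observed nitrosylations that are buried.
buried_nitros = filter_sites(sites, mod="nitrosylation", exposure="buried")
# Clause 3: what fraction of those sit in a beta sheet?
sheet_fraction = (sum(r["sec_struct"] == "sheet" for r in buried_nitros)
                  / len(buried_nitros))
```

Running the same clauses on a control set (modifications other than nitrosylation) and comparing the resulting fractions is exactly the statistical check described at the end of this passage.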
If you compare exposed cysteines to buried cysteines, you can see that there is indeed a much higher propensity of beta sheets in the buried ones, which seems, at least at a very coarse level, to support that observation. So how do we link this to VR? As I mentioned, this database and data mining tool is web-based, and we then added a WebVR feature to UnityMol. On the web page you have an icon like this, and if you click on it, it opens the scene on any compatible WebVR device, for instance a head-mounted display. That was the last example I wanted to give, in order to hopefully get you interested in using VR approaches for your research. And with that, I would like to thank you very much for your attention, and of course also thank all the collaborators and contributors; you can see here a selection of the many people involved, as well as the funding agencies down here. I am happy to take questions, and I think we can start the Q&A session.

Thank you, Marc. I encourage everybody to use the Q&A. Marc, can you show the last slide, please? Yes. Please use the control panel of GoToWebinar; there is a tab for questions where you can type them, and I will let you speak directly. I have a question: maybe some of our listeners are wondering whether the software is available to use, what the roadmap is, and what the license is. Yeah, so right now, whenever a publication comes out, and we're about to submit a paper on the VR version, we release the source code that goes with it. While we work on the next version, we can provide access to the source code on request. And we always provide builds for whoever wants to use them. Thanks. So the code is open source, and people can reuse it in their projects in any way they like? Yeah. Actually, I just saw that Jason has asked a similar question. I will try to connect with him. Hi, Jason.
Can you hear us? Maybe he's... Yes, Jason wrote that his audio is not working. Anyway, Jason Evans was also asking whether the software is open source, saying it would be exciting for people to get a copy and start playing with it.

Yeah, just to add to that: right now everything is open source, and because we also work with several industrial partners, as I mentioned, we are now considering enriching the licensing model so that people could make extensions, such as plugins, under a different license, which might be easier for them. But the core of the software will remain open source.

How does it compare? Are there other similar solutions and software? Have you done a comparison with similar codes?

Well, I think the difference might be that you have several major visualization packages around, VMD, Chimera and so on, and they all, I think, look at what VR can bring, but they have been around for a long time. I think we had some advantage in the sense that UnityMol is a younger project, so we could really optimize it for VR content. The main development direction now is really the VR version of UnityMol; it is focused on that.

And are there any special hardware requirements if people want to start playing with it?

The graphics card is very, very important. We do run it on laptops, but typically we recommend something equivalent to a GeForce GTX 1080 to get a good experience. If you have a smaller graphics card, you run into some limitations.

Yes. So we have another question from Jonas Bostrom. I will let him speak. Jonas, can we hear each other?

Can you hear me? Yes. Yeah. So I think VR is extremely cool, and I wonder how you would convince conservative users of the same. I mean, you can do all these things with traditional tools.

Yeah. I mean, I think it's not a question of convincing.
I think it's just that once it is there and it works, then why not use it? It's like when people moved from black-and-white to color television: you could say there are many things we don't need in color, that it's just a gimmick, but once you have it, why would you not use it?

Yes. Any other questions from the audience? We don't have other questions. So I guess maybe some time next year, when we have a full release and it is also easier for people to develop extensions, we can do a follow-up webinar. That would be very interesting; I'm looking forward to seeing more applications of the tool.

Yeah, don't hesitate to contact me if you have any further questions after the webinar, or if you want a demo build. Please do so.

So for everybody who is listening to the recording, feel free to get in touch with Mark for a copy. Okay. Well, if there are no other questions, we can finish today's presentation with that. I would like to let everybody know that we have two more webinars scheduled in the next couple of weeks. Next week we are having Caitlin Bannan, who will present the SMIRNOFF force field format, which is part of the Open Force Field Initiative. That will be next week, on the 10th of October, at 4 o'clock Central European Summer Time. Then in the first week of November, on the 7th, we will have a presentation by Daniele Lezzi from the Barcelona Supercomputing Center, who will present the COMPSs framework for the development of parallel applications. This is one of the frameworks we use extensively in BioExcel; it is very flexible and has been used to develop very powerful workflows. Everybody is welcome to join, and please subscribe to our mailing list, where we send monthly highlights of what is happening in BioExcel, as well as reminders about upcoming webinars that might be of interest. So with that, I'd like to thank Mark again, and see you again.
Thank you very much. Bye. Bye-bye.