[The opening of the recording is unintelligible in this transcription.] You may have heard of GROMACS, HADDOCK and CPMD; they are the three main codes that the project is working on, and we have some of the lead developers of these codes in the project itself. The second strand of the project is about usability, and so one thing that's very important to us, as well as having codes that perform well in theory, is to allow them to be used well as part of scientific workflows. So we're looking at various different workflow platforms and how these can be used to increase the usability of the codes listed above. And the final part of the project is consultancy and training, working with end-users, and these webinars are part of that aspect of the project. So we want to introduce you to some of the work that's going on in BioExcel and also related work of interest, like what we're going to hear about today with MDStudio.
One thing that I want to bring to your attention is BioExcel's interest groups. If you are interested in any of these things, and I expect you will be interested in at least one if you're here today listening to this webinar, then I'd invite you to join one of BioExcel's interest groups. You can do that by going to the main BioExcel webpage at bioexcel.eu and looking for interest groups there. Another thing to bring to your attention is an event that we will be running soon in Amsterdam on the 22nd to the 23rd. This is one of the face-to-face meetings that are organised for the interest groups that I mentioned on the last slide. It's designed to be a networking event, and there's a programme of small interactive working groups aligned with the BioExcel interest groups. The different interest groups will be meeting in different parts of the building, and the sessions are designed so that people can walk between different interest groups and find out about the different things that are going on related to BioExcel. It's a free event, so you're welcome to attend. If you'd like to find out more, you can again go to the main BioExcel webpage and look for the community forum. Before I introduce today's speaker, just to let you know that we'll have a question and answer session at the end of the webinar. We should have plenty of time for questions. The best thing to do if you have a question is to type it into the question tool in GoToWebinar. You'll see something in your control panel; yours will look slightly different from this, but it's the same idea. There should be a section called Questions. If you click on there to expand it, there'll be a place for you to type your question. At the end, I can either open your microphone if you have one and you can ask your question directly to Mark, or I can pass on the question if you type it into the question window.
And if you're watching this webinar later online, you can also ask questions on the BioExcel forum at ask.bioexcel.eu, and we will pick them up and redirect them to our speaker if we need his input to answer your question. OK, so we'll save the questions for the end, but you're welcome to type in your questions as you go along so that they're there for us to look at when we come to the end. So now I'd like to present to you Mark van Dijk. He studied biology at Utrecht University and obtained his master's degree in biomolecular sciences with a specialisation in structural biology in 2005. His PhD research at the NMR department of the Bijvoet Centre for structural biology at Utrecht was completed in 2010, on the topic of protein-DNA interaction modelling using molecular simulation techniques. Since then he's been involved in various research projects as a postdoc at Cambridge, then Utrecht, and VU University Amsterdam, where he now holds a senior scientist position. His research focuses on the study of biomolecular interactions using computational molecular modelling and simulation, and he's currently involved in the Dutch eScience Centre project Enhancing Protein Drug Binding Prediction, where he's working on method development for accurate protein-drug binding affinity prediction using free energy methods. Part of this project is the MD Studio software platform for microservice-based MD workflows. Mark is the project coordinator and lead developer, and he's going to be telling us a little bit about MD Studio today. So I will now hand over to Mark, if you're ready. I'll invite you to share your screen and you can take it from here. If I just make you the presenter, you should be able to share your screen now. That's great. Is it visible? Yes, that's great Mark, you can go ahead. Good. First of all, Adam, thank you for the kind introduction. I'd like to thank BioExcel for giving me the opportunity to present in a webinar today; I'm very excited.
Today, in the coming 30 minutes or so, I would like to talk to you about the concept of using microservice architectures as a basis for computational workflows; in particular, our vision of using this concept for molecular dynamics simulations and modelling routines in the MD Studio software platform that we are currently developing. First things first, I would like to shortly introduce the computational molecular toxicology group that we are currently working in at the Free University of Amsterdam, led by Daan Geerke. Our research group has a strong interest in using computational simulation and modelling techniques for the rationalisation and prediction of drug interactions and metabolism. We have three focus points in the group. The first is force field optimization and MD methodology development; central to that are the development of polarisable force fields and topology parameter optimization for small molecules. That is work we do in collaboration with the group of Alan Mark at the University of Queensland in Australia, the home of the Automated Topology Builder (ATB) web server. In addition, we use various free energy methods for our studies, and we have an interest in using Markov State Models to describe the dynamics and interactions of the systems that we study. Last but not least, we have a focus on making sure that the methods and principles that we develop are also applied in a practical setting. We do that in collaboration with a number of academic and industrial partners. A good example is our recent collaboration with industry in the European eTOX project, where we developed the eTOX ALLIES automated workflow, which aims at using linear interaction energy-based principles for predicting the binding affinity of small molecules to targets, usually drug targets, or to off-targets such as the cytochrome P450 family that is responsible for metabolizing and thus detoxifying several xenobiotics.
This particular product we are currently developing further; we want to extend its capabilities, and we do that in collaboration with the Dutch eScience Centre in the project that Adam just mentioned. That also gave rise to the start of our project called MD Studio, as a basis to further develop flexible workflows to run molecular dynamics simulation and modelling techniques. When I talk about automation and workflows, what is it exactly that I'm talking about? First of all, workflows and automation have been around ever since computers were invented, basically. A good example is the inception of the UNIX platform, with the brilliant invention of standard input, standard output and pipes, which is an excellent example of automation. From that point on until the present day, thanks to technological advances in hardware and software and the general acceptance of eScience, we have seen a big explosion of all kinds of computational automation and workflow management tools for both academia and industry. I list only a few here, and the list is very long. The many products that are now on the market indicate that there is wide interest in the development and use of these techniques, and that there should basically be a solution for anyone to use. When we were thinking about extending the capabilities of our eTOX ALLIES workflow, we had a close look at all of these different products, and we set up a list of things that we believe are important when considering workflow managers. We came up with four key aspects that eScience workflows generally put a focus on. The first is the focus of the tool itself. It can be automation of repetitive tasks or process logic; speed-up of CPU-intensive tasks through parallelisation; or provenance, making sure that who produced what, how, where and when is recorded and authorized, which is important for reproducing workflows and making sure they are reliable.
And of course for innovation, making use of workflows as a tool for fast prototyping of new methods. The second aspect is components: what are we trying to optimise or embed into workflows? Very often these are applications or executables that are linked together into workflows. We also include web services that are operated remotely and that we want to chain together in a workflow, or libraries that perform certain functions. In almost all cases, certainly in eScience, these workflows are very data intensive, so the data aspects, the data manipulation and the use of databases are themselves important aspects of these workflows. Furthermore, the tools that are developed work on workflows at various abstraction levels. These range from very simple command-line or interactive views, to a script-based way of building workflows, to more structured principles: workflows as files, where we formalise workflows in a language or a pattern, usually embedded in a graph-like approach such as directed acyclic graphs. And finally, various tools provide the ability to build workflows in a rich graphical user interface. And last but not least, particularly for automation, the backend is very important. These workflows are usually linked to various high-performance computing clusters, or cloud environments that run virtual machines, or Docker containers, grid-like systems, or basic workstations in the research environment itself. They also include specialised backends such as GPU nodes. So considering these four aspects and the things that they relate to, we went back to our eTOX ALLIES workflow. It is an integrated workflow that uses various simulation and modelling techniques to perform binding affinity prediction. In short, the tasks that are done are these: first, a topology is produced for the small ligands that we want to make the binding affinity prediction for.
Those ligands are docked in the active site of the target that we're looking at, for instance the cytochrome P450 proteins. And we dock because we want to capture the various orientations that the ligands may have in the active site of the protein. We perform a clustering of the docking results, and for the top five to eight docking poses, we perform an MD simulation; a short MD simulation, usually one to a few nanoseconds. And we extract the non-bonded energies, the van der Waals and electrostatics, from those trajectories, and combine them in a final data analysis workflow where we do the prediction. This type of workflow basically captures most of the four key aspects that I talked about. So we're there for automation. We definitely want to speed up the docking part, but certainly the molecular dynamics part. We usually link together a couple of applications: docking applications, molecular dynamics, topology generation and parameterization applications. We are reasonably heavily data-driven: we use databases to store data in, and we manipulate a lot of data. We are interested in various levels of abstraction. For fast prototyping we make use of scripts, but to offer the workflow as a user-friendly end product we may go all the way to a graphical user interface to make it intuitive to work with the workflow itself. And last but not least, there are several steps in this workflow that require a certain amount of interactivity. For the docking stage, before you go into molecular dynamics, you may want to inspect the clusters that you get from docking to make sure that the desired poses meet your requirements, before you spend a long MD simulation on perhaps simulating the wrong poses. And last but not least, the LIE prediction itself.
LIE is a method that requires a model to be generated for the predictions, and making a model for a series of test compounds is a pretty interactive process; that interactivity would be good to implement in the workflow itself as well. Taking these requirements, we looked again at the options that are currently available, and we decided that it might be best to start looking into a somewhat different way of building workflows, and that's by using microservices. Microservices are an architecture that belongs to the family of service-oriented architectures. There is not one definition that describes these types of services, but one that comes close is that microservices are loosely coupled, collaborating services that are independently developed and deployed, and they communicate with one another asynchronously, event-driven, over a network. Microservices themselves have been around for quite a number of years, but they have recently gained a lot of attention because they are the driving architecture behind the Internet of Things approach. For building workflows, microservice architectures have six major benefits. The first is that they are naturally modular in architecture, and that enforced modularity provides a solid basis for workflow-centric applications. Second, they are autonomous specialists: a microservice only needs to do one thing, and do it right and reliably, and with that it requires little knowledge of the system as a whole. Microservices are naturally polyglot-ready, which means that they can be implemented using different programming languages, databases, or hardware and software environments, and that makes it very easy to deploy many different types of solutions as microservices in a single platform.
They promote continuous software development and delivery: changes to a small part of the application only require one or a small number of services to be rebuilt and redeployed. And when redeploying, the entire system doesn't need to be taken offline; only the modules that need to be redeployed can be redeployed dynamically in the running environment. Microservices natively support scale-up and scale-out. They can be deployed on remote hardware that is specialized for high-performance computing, so they can be linked to the scheduling service of an HPC resource, but also, in a simpler way, on multicore workstations: when you launch multiple instances of the same microservice, you get native scale-up of your performance. Last but not least, microservices are always live within the architecture they are running in, and that promotes an interactive, real-time use of the microservice ecosystem itself. These six benefits are what we used as the basis of our MD Studio software platform, which uses microservices for molecular dynamics simulation and modelling workflows. So I want to go into a little more detail about the core microservice architecture of MD Studio, the type of solutions that we're using and how we deploy them. First of all, the microservice architecture that we use is broker-based, which means that microservices communicate with one another via an intermediate, the broker. That broker allows for asynchronous communication, and we went for an open source project called Crossbar, an enterprise-ready WAMP broker. It's pretty feature-rich, it's fast, it's scalable, and it promises to be able to hold hundreds of thousands of connections simultaneously and handle tens of thousands of messages per second, which would be more than sufficient for the environment that we are going to use it for.
As I said, communication is WAMP-based, which stands for Web Application Messaging Protocol. That is a protocol that offers full-duplex communication over TCP; it's WebSocket-based communication that was developed to alleviate the limitations of the current HTTP protocol when doing data-intensive communication between a server and a browser, for instance, by having a specific port that is continuously open in duplex mode. The good thing is that it's not restricted to a server-browser environment; it's an open standard that allows communication over any sort of transport. It's standardized in the sense that it's approved by the IETF and the W3C consortium, and it's implemented in all major programming languages so far. WAMP itself is a protocol that combines two sub-protocols into one whole. The first is RRPC, which is Routed Remote Procedure Calls, and the second is PubSub, that is publish/subscribe. RPC is a well-known, older method in the sense that it allows for communication between a caller and a callee: the caller sends data to the callee, asks it to perform an operation, and the callee, when ready, sends data back. The fact that there is a dealer in between, Crossbar in this case, ensures we can do this in an asynchronous fashion. The dealer accepts the result from the callee and passes it along to the caller. In doing so, the caller first gets an object called a promise, which is an abstract object that holds the result that gets yielded at a later time. In the meantime, the caller can do various other activities up to the point where the data becomes available. Publish/subscribe is a slightly different pattern, built around topics at the broker: if I have data for a topic, I publish it to the broker.
A publisher can do that, and a subscriber subscribes to all the messages that it is interested in at the broker, and the broker will pass those messages to all the subscribers as they become available. That is a one-to-many communication paradigm. These two combined provide a very nice architecture for dynamic, real-time communication between the various microservices. So we use this concept, Crossbar as a broker and WAMP as a messaging protocol, as the basis of MD Studio. An example of what that looks like schematically is depicted here, where in the center we have our MD Studio application, the Crossbar broker, and connected to it various microservices that perform tasks: a docking microservice, one that provides topology features, molecular dynamics and, for instance, a database to store results. So, in the middle again, we have the Crossbar broker. Crossbar itself, as I said, is feature-rich, and it provides a number of features out of the box that are very important in an architecture like this, such as authentication, authorization and security options. We have the ability to provide fine-grained control over authentication, usually role-based, to isolate users or decide which user has access to which microservice and how, and security for making sure that all of the communication is always encrypted. The different microservices themselves operate within the architecture by wrapping them in a very small WAMP API layer that provides a formal description of the functions that the microservice offers to the architecture as a whole. It's a standardized protocol, currently available in 13 different languages, which makes sure that no matter what language an application is written in, most of the time there is a WAMP API available to make that application work in a microservice architecture.
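The two messaging patterns described here, routed RPC with promises and publish/subscribe, can be sketched in a few lines of Python. This is a toy, in-process illustration only: real MD Studio traffic goes through Crossbar over the network, and the service URIs, topic names and payloads below are invented for the example.

```python
import asyncio

class ToyBroker:
    """In-process stand-in for a WAMP router such as Crossbar,
    illustrating routed RPC (caller -> dealer -> callee) and
    publish/subscribe (one-to-many) in one object."""

    def __init__(self):
        self._procedures = {}    # URI -> registered coroutine
        self._subscribers = {}   # topic -> list of callbacks

    def register(self, uri, proc):
        self._procedures[uri] = proc

    def call(self, uri, *args):
        # The caller immediately gets a future (a "promise"); the
        # result is routed back when the callee yields it.
        return asyncio.ensure_future(self._procedures[uri](*args))

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Every subscriber on the topic receives the event.
        for cb in self._subscribers.get(topic, []):
            cb(payload)

async def main():
    broker = ToyBroker()

    # A "docking" microservice registers a procedure at the broker.
    async def dock(ligand):
        await asyncio.sleep(0)   # pretend to do real work
        return {"ligand": ligand, "poses": 5}

    broker.register("mdstudio.docking.run", dock)

    # A logging service subscribes to events on a topic.
    log = []
    broker.subscribe("mdstudio.events", log.append)

    # The caller holds a promise and is free to do other things
    # (here: publish an event) before awaiting the result.
    promise = broker.call("mdstudio.docking.run", "ethanol")
    broker.publish("mdstudio.events", "docking requested")
    return await promise, log

result, log = asyncio.run(main())
```

The key design point is visible in `call`: the caller is never blocked waiting for the callee, which is what makes the whole ecosystem feel "live".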
The communication that we use is JSON-based with schema support, which means that we use JSON Schema as a way to formalize the type and layout of the API and the data that is passed along between microservices, and it gives us the ability to validate it and to guarantee things like backwards compatibility and versioning. The microservices themselves run as individual processes within the whole, so they're always live. They can make use of the CPUs that are available in the environment where the system as a whole runs, but they can also run on different types of hardware resources, like dedicated clusters where high-performance computing is done, or where specific databases are located, or where the data itself is. We can make sure that microservices live close to the data, or close to the metal in case of hardware requirements. We provide specialized Docker containers to make deploying these microservices easy. Having said that, in a microservice architecture, in MD Studio, basically everything operates as a microservice. That also means that the user who eventually interacts with this architecture is a microservice too: the user interacts with it through the WAMP API and is therefore a microservice within the whole. The benefit of doing it like this is that we can offer the user different ways of interacting with the microservices, or of chaining them together into workflows. At its most basic level, that would be immediate, real-time and interactive communication with the microservices and their functions: launching a Python interactive shell, or any interactive mode, and directly sending commands to the functions in the microservices and getting data back. That's a really powerful way of quickly working with the diverse functions that are available in the architecture and making use of the HPC backends that are in it.
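The schema-validated JSON messaging mentioned here can be illustrated with a trimmed-down check. MD Studio uses full JSON Schema; this hand-rolled validator covers only required keys and primitive types, and the schema and field names are invented for the example.

```python
import json

# Invented message schema: every docking result must carry a ligand
# name (string) and a pose count (integer).
SCHEMA = {
    "required": ["ligand", "poses"],
    "properties": {"ligand": str, "poses": int},
}

def validate(message, schema=SCHEMA):
    """Return (ok, reason) for a JSON message checked against the schema."""
    data = json.loads(message)
    for key in schema["required"]:
        if key not in data:
            return False, "missing required field: " + key
    for key, expected in schema["properties"].items():
        if key in data and not isinstance(data[key], expected):
            return False, "wrong type for field: " + key
    return True, "ok"

ok, _ = validate('{"ligand": "ethanol", "poses": 5}')
bad, reason = validate('{"ligand": "ethanol"}')
```

Validating every message at the broker boundary is what lets independently deployed services evolve their APIs while detecting incompatible payloads early.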
And we are currently looking into ways of extending these capabilities, for instance by injecting interactive sessions, like the well-known Jupyter notebooks, right into the WAMP session. One step up would be to write small scripts in any of the 13 languages that are supported, calling the different microservice functions and chaining them together to make a functional whole. We also built a microservice that is dedicated to building workflows in this type of architecture. That is a more structured way of chaining services together, which is good for ensuring the provenance of the entire system. It functions much like the other types of workflow tools that are currently on the market, providing a graph representation of the tasks that are chained together and making sure that data is passed on from one functional microservice to another until the workflow is finished. And last but not least, there is also a WAMP API available for JavaScript, and we use this to generate web-based, browser-based graphical user interfaces that interact with the microservice architecture. So most of the abstraction layers that are available in workflow managers are available within this one architecture, simply by using different ways of communicating and interacting with it. Apart from having specialized microservices that do things like MD and docking, it's always useful to have general microservices that provide common functionality that people are used to in applications and may want to use in workflows and interactive sessions. Examples of that are databases: we have a microservice abstraction to a MongoDB database. It allows all the microservices that want to store data to have a common interface to store data in and retrieve data from.
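The script-based abstraction layer described above, chaining microservice functions into a functional whole, might look like the following sketch. The three coroutines are hypothetical stand-ins for microservice endpoints; in a real session each would be a remote WAMP call routed over Crossbar, and all names and return values here are invented.

```python
import asyncio

# Hypothetical stand-ins for microservice endpoints.
async def convert_to_3d(smiles):
    # e.g. a structure-conversion microservice
    return {"smiles": smiles, "structure": "3D coordinates"}

async def make_topology(ligand):
    # e.g. a topology-generation microservice
    return {**ligand, "topology": "parameters"}

async def run_md(system):
    # e.g. the MD microservice with an HPC backend
    return {**system, "trajectory": "traj.trr"}

async def pipeline(smiles):
    # A plain script as an abstraction layer: each awaited result
    # feeds the next call, chaining services into a linear workflow.
    ligand = await convert_to_3d(smiles)
    system = await make_topology(ligand)
    return await run_md(system)

result = asyncio.run(pipeline("CCO"))
```

The same chain could equally be expressed in the dedicated workflow microservice or driven from a GUI; the point is that all abstraction layers talk to the same registered functions.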
The same is true for logging: we have structured logging services that provide a central point for logging in the entire infrastructure, even if it's distributed over different machines in the environment. User management, of course, and specialized services that provide adapters to different computational infrastructures, such as adapters that can communicate with the queuing system of clusters or specific nodes, GPU environments and such. Having said that, what would a microservice architecture look like for our eTOX ALLIES workflow? The graph that I show here is a workflow description of eTOX ALLIES, where all the different spheres are, in this case, microservices that live in our architecture, chained together to perform the eventual function of predicting the binding affinity of a small ligand for a protein. The workflow, in this case, accepts a protein input structure and a ligand input structure, and we offer the ability to provide these as a file, or extracted from a web service or a database. It's fed into the workflow, and the ligand is usually defined as something like a SMILES string or a 2D representation, so there's a component in one of the microservices whose task is to convert this to a 3D representation of the ligand and perform things like protonation and charge definition. Then, of course, we want to be able to automatically define topologies for this system. We have various ways of doing that. One is to use the well-known program ACPYPE, which uses AmberTools to create topologies on the fly. The other option we have is to use the Automated Topology Builder, through our collaboration with the group of Alan Mark, for which we also have a microservice, as a somewhat higher-quality way of generating topologies. Depending on the choice, both of these can be fed into the final MD stages.
As I mentioned before, we dock the ligand in the active site of the protein in question; for us that is often a cytochrome P450. The docking uses a fast docking method called PLANTS that is also available as a microservice. It's fast in the sense that it can create generally acceptable conformations of the ligand poses in the active site in a matter of seconds, but we may want to swap PLANTS for better or different docking methods. An example would be HADDOCK, from one of the partners in the BioExcel community, with whom we also collaborate to offer their functionality as a microservice within our ecosystem to do the docking. After the docking, we do a clustering and derive the various conformations of the poses of the ligand in the active site, and feed those into short molecular dynamics simulations. In our workflow system we have a dynamic way to spawn new molecular dynamics instances for all the conformations that come out of the docking stage, and pass them along to the MD microservice, which has high-performance computing backends to perform them in an efficient manner. The data then comes back and passes through a filter stage to extract stable regions in the trajectory and extract the non-bonded electrostatic and van der Waals energies. That data is fed into the final stage, which performs the actual LIE prediction. That particular microservice is itself an embedded workflow: there are several data processing and data manipulation stages involved in it. We have the ability to embed complete workflows as sub-workflows in a larger workflow, and embedding makes it easier to reuse general workflows in a larger one. This type of workflow has two stages that may require attention from the user in an interactive way.
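The dynamic spawning of one MD instance per docking conformation can be sketched as a simple fan-out/gather. This is an illustrative toy, assuming hypothetical pose and energy values; in MD Studio each call would be dispatched to the MD microservice and its HPC backend rather than run in-process.

```python
import asyncio

async def run_md(pose):
    # Stand-in for a call to the MD microservice; the returned
    # energies are invented placeholder numbers.
    await asyncio.sleep(0)
    return {"pose": pose, "elec": -50.0 - pose, "vdw": -20.0 - pose}

async def md_stage(poses):
    # Fan out one MD instance per docking conformation and gather
    # the non-bonded energies from all of them when they complete.
    return await asyncio.gather(*(run_md(p) for p in poses))

energies = asyncio.run(md_stage(range(5)))
```

Because each spawned instance is an independent service call, adding more docking poses scales out naturally across whatever backends are registered.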
One is the PLANTS docking stage: making sure that all the conformations the microservice yields make sense before spending computationally intensive MD stages on them. We allow in our workflow method the definition of simple breakpoints at various stages in the workflow. The user is notified once that breakpoint is reached, to review the results that come out of the microservice, make changes if needed, and then continue on. Those breakpoints only affect the part of the workflow that the particular microservice is involved in; other routes in the workflow that do not involve the breakpoint continue on. The other interactivity point is the final stage, the LIE prediction, particularly in the case of modelling, which is a pretty interactive procedure where a lot of chemical intuition from the user is required to make a general model, and that interactivity is also allowed for in this workflow. I already mentioned a little bit about the different abstraction layers and our ability to also provide graphical user interfaces that operate within the microservice architecture. It's not something that we're actively developing at the moment, but we have ventured into trying to see how that would operate and what the benefits are, by providing an interface to one of the steps in the eTOX ALLIES workflow: the filtering after the MD stage. That filtering step is important because we want to make sure that only stable regions in the trajectory are used to extract average values for the van der Waals and electrostatic components. We make sure that those stable regions are actually selected. We have automated procedures to go over the molecular dynamics trajectories and isolate stable regions, and they do a pretty good job, but they may fail from time to time, and there is good reason to go over this in an interactive fashion to make sure that the final selected regions are indeed stable.
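The breakpoint behaviour described here, pausing one branch of the workflow for user review while other branches keep running, can be sketched with an asyncio event. This is a minimal illustration with invented pose names; MD Studio's own breakpoint mechanism lives inside its workflow microservice.

```python
import asyncio

class Breakpoint:
    """Pauses one branch of a workflow until a user reviews (and
    possibly edits) the intermediate results; other branches of the
    workflow are free to keep running in the meantime."""

    def __init__(self):
        self._released = asyncio.Event()
        self.payload = None

    async def hold(self, payload):
        self.payload = payload        # expose results for review
        await self._released.wait()   # block this branch only
        return self.payload

    def release(self, edited=None):
        if edited is not None:
            self.payload = edited     # the user may amend the results
        self._released.set()

async def main():
    bp = Breakpoint()

    async def docking_branch():
        poses = ["pose-1", "pose-2", "bad-pose"]
        return await bp.hold(poses)   # wait for user sign-off

    async def user_review():
        await asyncio.sleep(0)        # user inspects bp.payload here
        bp.release([p for p in bp.payload if p != "bad-pose"])

    approved, _ = await asyncio.gather(docking_branch(), user_review())
    return approved

approved = asyncio.run(main())
```

Only the branch awaiting `hold` is suspended; anything else scheduled on the loop, like the review itself, keeps running, which mirrors how the other workflow routes continue past a breakpoint.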
This is an interface that allows you to do that. It's built using Angular, a JavaScript framework developed by Google for making rich browser-based graphical user interfaces. It lives entirely in the browser and, using the JavaScript WAMP API, connects to the microservice environment, particularly to the MD component, fetches all the information from the trajectory and displays it in a graph like this, where you see the trajectory and the van der Waals and electrostatic values that go along with it. You can select them and then interactively use sliders to reselect a certain region; once you save, it will be communicated back to the running workflow to recompute binding affinities based on the new information that's available. This provides an interactive way of working with the workflow, ensuring that new information is fed into the workflow to update the data that comes out. In conclusion, what I hope to have shown you is that microservices provide quite a rich environment to build workflows upon, and that we use that in our vision of using microservices for simulation and modelling workflows in MD Studio. In the long run, MD Studio offers a couple of very important benefits. First of all, MD Studio is deploy-anywhere, self-contained software: both the broker, Crossbar, and all the microservices that live in the architecture can be set up as one application that runs on someone's personal workstation or laptop; it doesn't necessarily have to be connected to the internet. Or it can be deployed as a central MD Studio server on a dedicated workstation in a research department, for instance, allowing others to connect to that infrastructure to work with the microservices and launch new workflows at the various abstraction layers that I have shown you.
Because you can deploy it flexibly, you'll be able to build your own application environment where you pull together all the resources you have available: clusters to run computationally heavy jobs, particular servers that run databases, and the individual workstations of researchers who may want to interact with the environment in a very flexible way. It's suitable for building applications using the environment that you have at hand. Because of the various ways you can interact with a microservice environment, a researcher can basically work with it in the way they like: interactively, script-based using the workflow environment, or even with a graphical user interface where available. Of course, MD Studio by itself is multi-user ready. We also aim to provide the ability for groups to collaborate by providing a role-based, group-based environment to communicate with the same microservices and the same workflows, to share results and work on the same project simultaneously. That's the vision on all of that together. We hope MD Studio will grow. At the moment we are in active development on a prototype that showcases the functionality of MD Studio based on our eTOX ALLIES workflow. From that point on, we will extend the workflow to incorporate new microservices. We certainly hope that other users in the community who have very valuable methods they want to contribute as microservices will be able to do so. The WAMP layer that is required as a wrapper around the functionality, to have it operate as a microservice, can be deployed as easily and straightforwardly as possible. As such, we hope that the ecosystem will grow in time and be beneficial to many in the simulation and modelling community, but certainly also outside it, because the basic architecture, as it is right now, is not limited to simulation and modelling only.
Having said that, I would like to thank all the people involved in the development of MD Studio: the people from our own group, Daan Geerke, Cwm Fysser, Paul Fysser, the people from the Science Centre, Lauren Svein, and our collaborators, for instance the group of Alan Mark and the team behind the ATB server. Finally, I would like to draw your attention to the fact that we will also be present at the BioExcel community forum in November, and we certainly hope to have the prototype ready by then, so we can give you a demonstration, you can get hands-on with how things work, and we can hopefully persuade you to collaborate with us and perhaps contribute your own microservice to the environment. With that, I would like to thank you for your attention and hand the controls back to Adam. Thank you very much, Mark, for that interesting talk. So, I can see that the questions have started to come in. If you do have a question, please just type it into the question box in GoToWebinar and I will go through them one at a time. I do have a couple of questions myself, but I'll start by taking the questions that are coming in from the floor. So, Zara, you have a question there. Do you have a microphone? If you want, unmute yourself and ask your question directly to Mark. I think you can just push on the little orange microphone to unmute yourself. Maybe I can unmute you. If you don't have a microphone, then I can read out your question. Can you talk directly, Zara? Yes, I've been unmuted. I can unmute myself. Okay, thank you. Loud and clear. That was really great. Thank you. You know, this is definitely, I think, what the communities are looking towards: how to make software available and how you can have workflows where you can bring in different methods as and when you need them.
And I know in BioExcel there are a lot of groups that are involved in this; will BioExcel be encouraging the partners in the consortium to make their methods available through microservices? That's one question. Then, looking ahead, how would you envisage people looking up microservices? Because you can imagine that in the future there will be quite a lot to choose from. Will there be some kind of directory where people can look at what's available? Do you want to take the second one? Yeah, I'll first answer the second question and leave the first one to Adam. First of all, we hope that others will indeed contribute their methods, and one of the ways we're trying to enable this, as soon as we have a good prototype, is to set up a website where you can also download the environment to start playing with it yourself. But we hope to also provide a common database or directory for people to upload their microservices, or provide download locations for them, so that others can quickly install them in their environment and start using them. So a sort of marketplace, as it were, for sharing these environments. And I can't help but draw parallels with Pipeline Pilot. With that, for instance, you can share workflows: if you've optimised a workflow, maybe published it, and people say "I want to use that", you can make it available. And you can also make components available. So you can look up two different kinds of parts. The components here would be the microservices. So if it's possible to somehow make both the microservices and these overall workflows available, that would be great. Yeah, definitely. And certainly also the workflows themselves, because that's where a lot of the logic goes and it's very valuable to share those with others. So I definitely agree with that. OK, thank you. Thank you very much, Zara.
And just to quickly take the first part of your question, will BioExcel be encouraging partners to make their services available? It was already mentioned that HADDOCK is one of the codes that has already been looked at in this context, as a microservice. So there's at least one code where we're looking at how it might work in this way. And there's also more general work going on in BioExcel in terms of workflows and wrapping components from workflows so that they can be used in different contexts, and I think that these kinds of microservices could potentially be included in that. By the end of the BioExcel project, we will have our own catalogue of some of the services that can be used; I think that's being done in conjunction with ELIXIR. I've spotted that one of my BioExcel colleagues is in the room. I don't know if you've got a microphone; you might want to comment on that in a minute, just to put you on the spot. But it seems to me that in principle there's no reason why these components can't be made available alongside some of the other workflow components that we're looking at in the project. That would be great. Great. Thank you for that, Zara. Steen, do you want to ask your question next? And then I can move on from there. Steen, do you have a microphone? I'll unmute you. Okay, Steen, you should be able to ask your question directly now. Thank you. Yeah, I think it's a very interesting approach, and I liked the little technical deep dive into how that works with the WebSockets and microservices.
Now, just to comment on the previous questions: I think we should also look at coordinating with our efforts on tool descriptions and tool packaging, like BioContainers and Bioconda, because this is kind of taking it to the next level, where it's not just some binary you know about, but something active that you can talk to and put together. So definitely lots of interesting things there. Now, my question is a bit more practical, because if people are going to do this, the first question is: how do I put my favourite tool into it? You mentioned you have to have some kind of wrapper for the Web Application Messaging Protocol. Is that something that's easy to do? Are there tools to help you, or are you left a bit on your own when you want to do that? No, not at all. Our broker is Crossbar, and within that consortium of developers are also the developers who make the WAMP wrappers. It's a project called Autobahn, and that community develops these libraries in 13 different languages, providing documentation and very good examples of how to use them to build your own microservice against that API. That provides a very good starting point for coding them yourself. In addition, we could also think about providing a sort of common wrapper where you could, for instance, wrap executables or command-line libraries in an easy way, so you don't have to worry too much about the technical details of the WAMP communication but simply provide a list of functions that you want to expose, and they are exposed. So you can kind of turn a command line into a function if you want, although it might not be as performant. It might not be as performant, but that might be one of the ways to go to quickly expose things. We don't have that capability yet, but it could be a way to do it. Thank you. Okay, thank you, Steen. Thank you, Mark. Do we have any other questions from the floor?
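The "common wrapper" idea described above, turning a command line into a callable function, could look roughly like the following. This is a hypothetical sketch, not an existing MD Studio API: it maps declared command templates to Python callables via `subprocess`, leaving the actual WAMP registration of those callables out of scope.

```python
# Hypothetical sketch of a generic wrapper that exposes command-line tools
# as callable functions, the kind of thing a WAMP microservice could then
# register as RPC endpoints. Not an existing MD Studio API.
import shlex
import subprocess

def make_endpoint(command_template):
    """Turn a command template like 'echo {msg}' into a callable."""
    def endpoint(**kwargs):
        # Substitute keyword arguments into the template, split safely,
        # run the tool, and return its stdout as the "RPC" result.
        cmd = shlex.split(command_template.format(**kwargs))
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()
    return endpoint

# Declare the functions you want to expose:
exposed = {"greet": make_endpoint("echo Hello {name}")}
print(exposed["greet"](name="Mark"))
```

As noted in the discussion, this is convenient but not necessarily performant, since every call pays the cost of spawning a new process.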
Feel free to type them in. Well, people are doing that. My first question was similar to Steen's, actually: how would you actually wrap something up to use it as a component? So my second one is a bit more general. If I wanted to experiment more with this and try it out myself, what's the best place to start? Are there examples of things that can be downloaded, and what would I need in order to get started with wrapping a component? Or, indeed, as an end user, is there anything that can be downloaded yet to try out how it works, or is it still at a prototype stage? Yeah, a very good question. We are still at a prototype stage and, therefore, I do not have a website or download location yet that I can share with you. That's a pity on the one hand, so bear with me while we get that done. But, as I said, we will be at the BioExcel community forum and we definitely hope to have that prototype ready by then. Maybe not with the full-featured website with all the listings, but definitely with a download location where you can get the package and start playing with it. And we will make sure that the package includes a couple of basic microservices that you can use as a functional whole, to get a feel for how it works and start playing with it yourself. As for the package itself, it operates as a single application that comes with a setup file that launches the Crossbar router and a number of other microservices in one go. So it sets up the environment automatically. The idea is to give you the feeling that you are working with a solid application rather than a heterogeneous, loosely coupled set of microservices. And that allows you, I think, to get up and running quickly with the framework. So hopefully in November I can share a download location and show it to you. Look forward to that. That's great. Thank you very much, Mark.
One final question, then, from me, unless we've got any other questions from the floor. On one of your slides, you mentioned HPC adapters. I just wondered what they were and whether that's a specific feature to do with high-performance computing or very large computers, which are, of course, of interest to BioExcel. Well, I'll try to elaborate on that a little bit. Microservices themselves don't have a native focus on offering high-performance compute capabilities; they are a convenient way of linking together functionality in a flexible, more or less interactive way. But, obviously, HPC is a component of it, definitely if you want to use it for molecular dynamics simulation. And therefore we want to make sure that when we deploy an MD microservice, it has access to some kind of high-performance backend, in whatever form that may be. And that can be pretty heterogeneous, because people have access to local clusters, to national compute facilities, to GPU clusters... you name it. So providing one solution won't be enough, and therefore we're going for a sort of adapter-like approach, where you have adapters that can communicate with queuing systems, with cloud infrastructures, with grids. That provides a way of offering that one microservice, the MD microservice for instance, the ability to spawn jobs on these various types of infrastructure. That's what I actually mean by HPC adapters. Okay, that's useful. You mentioned a few words there, like batch systems, which I was wondering about, because that's always something people have to contend with when they're thinking of very large simulations. Okay, I think in that case we will bring today's webinar to a close. Thank you very much indeed, Mark, for your talk today. And to everyone else, I hope you can join us for our next webinar, which I expect to be in a few weeks' time.
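The adapter approach described above can be sketched as a small interface with one implementation per backend; the MD microservice only ever talks to the interface. This is an illustrative sketch, not MD Studio code, and the `SlurmAdapter` command is just an example of what such an adapter might wrap.

```python
# Sketch of the "HPC adapter" idea: one MD microservice talks to a common
# interface, and per-backend adapters translate a job into the right
# submission mechanism (local shell, Slurm queue, cloud, grid, ...).
from abc import ABC, abstractmethod
import subprocess

class HPCAdapter(ABC):
    @abstractmethod
    def submit(self, script: str) -> str:
        """Submit a job script; return an identifier or its output."""

class LocalAdapter(HPCAdapter):
    """Run the job directly on the local machine."""
    def submit(self, script: str) -> str:
        result = subprocess.run(["sh", "-c", script],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

class SlurmAdapter(HPCAdapter):
    """Hand the job to a Slurm queue (requires sbatch on the host)."""
    def submit(self, script: str) -> str:
        result = subprocess.run(["sbatch", "--wrap", script],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

def run_md(adapter: HPCAdapter, command: str) -> str:
    # The microservice only sees the adapter interface, never the backend.
    return adapter.submit(command)

print(run_md(LocalAdapter(), "echo md-run-complete"))
```

Swapping `LocalAdapter()` for `SlurmAdapter()` changes where the work runs without touching the microservice itself, which is the point of the pattern.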
And if you are interested in the community forum, do please go to our website and have a look there. And yes, if you have any questions that occur to you later, to follow this up, please go to our forum at Ask BioExcel, where you can post a question and we will follow up later. Thank you all for coming along and we will see you again soon.