Okay, so good morning, everybody. Today is the last day of the school, and we have Marnik Bercx from EPFL, who will give a lecture on high-throughput computing with the AiiDA platform. Before starting, let me just say a few things. Today the schedule is slightly different from usual, because the theoretical part of the lecture will last approximately half an hour, so that you will have more time to spend on the hands-on. We will have the coffee break as usual, at a certain point in the middle of the hands-on, and Marnik will tell us when to interrupt. Regarding questions, as usual, wait until the end of the lecture before writing on Zoom or asking questions. When Marnik finishes speaking, you are welcome to write your questions on Slack, and you can also raise your hand; maybe we will have time for one or two questions by voice, and of course we will also relay questions from the YouTube stream. So that's all. Please, Marnik.

Thank you, Ivan. So as Ivan mentioned, I'm going to keep my presentation a bit short, just about half an hour, because I think it's better to learn AiiDA by starting to use it. But I still want to give you an overview of what problems AiiDA tries to solve and how it solves them, and I'll also talk a bit about the Materials Cloud dissemination platform that we are maintaining at EPFL. I do quite a bit of high-throughput work in my typical scientific work at EPFL, and when doing this, I face a number of challenges. When we want to build a database of properties for a whole range of structures, we of course want to run a certain workflow for all of these different structures to obtain the property that we're interested in. This workflow is basically a recipe of different steps, different calculations that you need to execute in order to obtain the result. So I want to have a tool that allows me to automate this workflow process in a way that's robust and scalable, so that I can run thousands of workflows simultaneously. Ideally, I would also like to have some error-handling features, so that in case one step in my workflow goes wrong, I can fix it on the fly and not have my entire workflow break. Another aspect that's very important in high-throughput computing is data management. When I'm running these calculations, I want all the data that's generated by my workflows to be stored reliably and efficiently, and I want to be able to share this data with other researchers or collaborators in a way that they can also use it easily and find the results that they are looking for. So we have to be able to query this database. And finally, a very important concept in science in general is that of reproducibility. If I've obtained a certain result and I've published it, I want other researchers to be able to reproduce my result easily by seeing exactly how I've executed my workflows. In order to do this, we store what's called the provenance, the full history of the calculation. The tool that we have been developing to deal with these computational challenges is AiiDA, which is exactly that: a computational science infrastructure for running these types of high-throughput workflows while keeping full track of the provenance. AiiDA is completely implemented in Python and is an open-source code published under the MIT license, available on GitHub.
The main features are a scalable workflow engine that allows you to run thousands of workflows simultaneously: a powerful engine that moves your calculations to the remote computer, runs them there, checks if they're finished, and then retrieves the results and parses the files in order to obtain the actual data that you're interested in. There's already support for many different types of high-performance computing resources, with different schedulers implemented. As I've already mentioned, it keeps track of the provenance, the complete history of your calculations, in an automated fashion. And finally, it also has a flexible plugin system. Today's tutorial will be mainly about using the aiida-quantumespresso plugin for AiiDA, but there's already a lot of support for other codes as well, because this flexible plugin system allows people to extend AiiDA to run different quantum codes, or computational science codes in general.

I've talked already quite a bit about this concept of provenance, so let me give some more detail on what I mean here. If you represent one of the calculations inside your workflow as a node, you can imagine that there will be certain inputs that need to be provided for this calculation to run. If you're running a pw.x calculation in Quantum ESPRESSO, you need to provide a structure, you need to provide input parameters, and so forth. All of these inputs are also represented as nodes in the AiiDA database, and all of these nodes are then linked as inputs to our calculation. Similarly, you have certain outputs: for example, if you've run a vc-relax calculation, you might have a final structure, you might have some data on the magnetization of your material. These are again stored as nodes in the AiiDA database and again linked to the calculation that you have run inside your workflow. And once you start using the results of one calculation to run another calculation (for example, you might have optimized the geometry of your structure, you get a relaxed structure, and then you want to do another calculation), you will over time build quite complex provenance graphs, or directed acyclic graphs as they are called. Once you start calculating more extensive and more complex properties, such as this molecular dynamics study of lithium in a solid-state electrolyte, these provenance graphs start to become quite complex. So we want this data to be stored automatically, as it's created, inside our database and repository. Below here you can also see the image I used for my title slide. This is actually a representation of a database of one of our collaborators, with over a million nodes, and each of the lines you can see is one of these connections inside the provenance graph. So this is definitely a very important concept when it comes to using AiiDA. Of course, what AiiDA aims to do is to compute complex properties, and for this we also want to offer tools for you to build these complex workflows, which require several steps in order to obtain a certain property, such as, for example, the band structure.
And then, of course, in combination with the provenance, we want to be able to store this entire tree, this entire collection of calculations, automatically, so you have a proper log of what happened in the past, and others can look at exactly how your calculations were executed and also reproduce them more easily. We also want to be able to store these workflows and share them with others so they can run them too. That way we can provide what we call turnkey workflows that are easy to use: you just plug in some basic inputs and get started running these workflows in AiiDA. As an example, here you can see one of the workflows you'll be running in the tutorial today, the bands work chain, which, as you might expect, calculates the band structure for an input structure that you provide. The nice part about these workflows is that they allow us to encode the knowledge of scientists. For example, this work chain was originally written by one of my previous collaborators and now I'm using it, so I can rely on his knowledge of how to calculate this specific property. And I can also, of course, improve upon it: if I see that a certain aspect of this work chain is inefficient or less robust, I can make changes. So by working together on these workflows, we can build a comprehensive set of workflows that are robust and efficient for calculating the properties of interest. Now, as we define inputs on what we call the spec of such a work chain, we can specify what kind of input nodes we expect. For example, for the structure, we have a specific data type that's stored in the database, the StructureData type. You can also provide a help string, and then we have tools that, once you have defined your work chain, let you automatically see what inputs are required, which makes it easy for users to start running these work chains. We also want our work chains to be modular. For example, inside this band structure work chain, you also have the option to optimize the geometry of the structure you provide, and this is actually done by a different work chain, the PW relax work chain. Because these work chains are modular, you can plug them into higher-level work chains, and in this way build very complex work chains for calculating all sorts of properties. There are also other features, such as input validation, so you can automatically check whether the user has provided the right type of node or input for a certain variable. And then there are also features like error handling and protocols. On these two: for the error recovery, what we have implemented is a basic building block for your work chains, a small work chain that wraps your calculation and checks whether it has finished successfully at the end. If it hasn't, it tries to see what went wrong and to handle these errors on the fly by fixing the inputs. If it can't identify the error at all, it will still try to restart one last time, because maybe you just had a node failure on the cluster. We have also, over the last year, been working hard on making these work chains easier to use by developing protocols.
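To give a concrete picture of what "defining inputs on the spec" means, here is a minimal sketch of a work chain definition in AiiDA's Python API. This is not the source of the actual bands work chain; the class name, labels and help texts are made up for illustration, and the outline steps are left empty.

    from aiida.engine import WorkChain
    from aiida.orm import StructureData, Dict, Code

    class ExampleBandsWorkChain(WorkChain):
        """Toy work chain illustrating how inputs are declared on the spec."""

        @classmethod
        def define(cls, spec):
            super().define(spec)
            # Each input is declared with the node type AiiDA expects in the database.
            spec.input('structure', valid_type=StructureData,
                       help='The input crystal structure.')
            spec.input('parameters', valid_type=Dict,
                       help='Calculation parameters, stored as a Dict node.')
            spec.input('code', valid_type=Code,
                       help='The code (e.g. pw.x) configured in the database.')
            # The outline lists the steps the engine will execute in order.
            spec.outline(cls.setup, cls.run_calculations, cls.results)
            spec.output('output_structure', valid_type=StructureData, required=False)

        def setup(self):
            pass

        def run_calculations(self):
            pass

        def results(self):
            pass

Declaring the inputs this way is what enables the automatic documentation and input validation mentioned in the talk: the engine knows which node types to expect before anything is run.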
These protocols are basically a default set of computational parameters, built and tested on a set of a few hundred structures, that offer a reasonable precision for most of those structures. In this way, you can simply load the work chain that you want to run, provide the code you want to run with and the structure you want to run, specify a protocol, and submit it to the AiiDA engine. This is a much easier way of starting to run these work chains, and it is exactly what we will be doing during today's tutorial. Building upon all of this, our team has also been working on running simulations in the cloud using AiiDAlab, which is a JupyterLab-based framework where we have developed graphical user interfaces for people to start running these simulations. This is very handy, for example, for non-expert users: when they want to run advanced calculations for a certain structure, they can simply upload it or select it from a database, specify what they want to run, and the simulations will automatically run in the cloud and provide them with the result in the end. This is also what we're using today, although we will mainly be using the terminal, because you all have experience with Quantum ESPRESSO at a lower level and we want to show you how AiiDA works in more detail. As I've already mentioned, AiiDA is quite extensible through plugins, and we already have quite a big selection of plugins available: over 50 plugin packages have been registered on the AiiDA plugin registry, offering support for almost 100 different codes within the computational science community. We're happy to see that contributions from the community are steadily increasing, so the community is growing beyond just our development team, and we definitely welcome more contributions in this regard, so AiiDA can run with as many codes as possible. Something we've also been working on recently, and on which we actually submitted a paper just a few weeks ago, is the concept of common workflow interfaces. Typically, if you write a workflow in AiiDA, you do so for a specific plugin, for a specific code like Quantum ESPRESSO, and you cannot simply transfer this workflow to other codes. The goal of this project was to define a common workflow interface: if you want to run, for example, an equation of state workflow for a certain structure with a certain protocol, you can specify one of the codes that are currently supported, which is a lot of quantum engines inside this common workflow project, and then easily run the same equation of state work chain for all these different codes. For example, here you can see the equation of state computed with nine different codes that support periodic boundary conditions. This is a very useful tool to cross-validate the different codes and see whether they actually obtain the same results when running the same structure and the same type of property. Another important aspect of research in general is of course sharing your data, and AiiDA makes this quite easy, because you can export either your entire database or part of it as an AiiDA archive, which can in turn be shared on an online repository; you can plug this into any kind of online repository that allows you to share this data.
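As a concrete illustration of the protocol idea mentioned a moment ago, and of what you will do in the hands-on, submitting a work chain with a protocol looks roughly like this in the verdi shell. The code label and structure PK below are placeholders, and the exact argument names can differ slightly between plugin versions, but the overall pattern is the one used in the tutorial.

    from aiida.orm import load_code, load_node
    from aiida.engine import submit
    from aiida.plugins import WorkflowFactory

    PwBandsWorkChain = WorkflowFactory('quantumespresso.pw.bands')

    code = load_code('pw@localhost')   # placeholder code label
    structure = load_node(1)           # placeholder PK of a StructureData node

    # The protocol fills in sensible default parameters, pseudopotentials and k-points.
    builder = PwBandsWorkChain.get_builder_from_protocol(
        code=code,
        structure=structure,
        protocol='moderate',
    )

    node = submit(builder)
    print(f'Submitted work chain with PK {node.pk}')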
But we've also provided a dissemination platform called Materials Cloud, where on the one hand you can store all of your archives, but which also offers more features. You can think of AiiDA as the engine for running our workflows and doing our scientific work, and Materials Cloud as the dissemination platform for sharing our work and the other tools that we have been developing. There are five sections to Materials Cloud: Archive, Learn, Explore, Work and Discover. Let me quickly go over these sections, so you can see whether some of these tools are useful to you as well. The Materials Cloud Learn section, as you might expect, is an educational platform where you can find tutorials and presentations on all sorts of concepts in computational science. If you still haven't had enough Quantum ESPRESSO, you can also find material from older schools here, presented some time ago. All of this is collected on this educational platform for people to use and develop their skills as computational scientists. We also have the Materials Cloud Work section, where you can find all sorts of tools to get started with doing calculations. One example that may be interesting to people at this school is the Quantum ESPRESSO input generator: you can upload a certain crystal structure that you want to run, specify, for example, the pseudopotential library you want to use and what kind of structure it is, and it will automatically generate an input file for you to run with Quantum ESPRESSO. Other tools offered here are the Quantum Mobile virtual machine, which is similar to the virtual machine you've been using for the school: it offers a computational science environment with AiiDA pre-installed and other codes, such as Quantum ESPRESSO but also Yambo, FLEUR, SIESTA; basically most of the codes available through the common workflows interface are also available in Quantum Mobile. Then we also have AiiDAlab, which I've already mentioned, and the AiiDA plugin registry, which are all hosted on the Materials Cloud Work section. Then there is the Archive section. Here you can upload data that you've obtained from running workflows, or just calculations in general. You can share your data here and it will automatically be assigned a DOI, so others can cite your data, and your data is guaranteed to be online for at least 10 years after you have deposited it on the Materials Cloud Archive. If you've run your calculations using AiiDA, you will also be able to add direct links to the Discover and Explore sections for your data set. The Discover section is basically a curated interface where you can define certain properties for your materials of interest, so that they are easily visible and discoverable by people who want to analyze your data set. And then there's the Explore section, which we will also be using in today's tutorial, where you can start exploring the provenance graph of the calculations you've been running. You can select, for example, a certain band structure, and it will be visualized automatically. Then you can have a look at the calculation that produced this band structure: you can still get the input file, which should be quite familiar to you by now, also look at the output file, et cetera, and then you can continue exploring the provenance graph.
For example, you can have a look at the structure data that was actually used as input for this calculation. Again, there's a visualizer for this, and you can also, for example, download it. So it offers a way for you to explore your data sets and your full provenance interactively. As I've mentioned, I want to keep this presentation a bit shorter, because I think the best way to learn is just to get started with AiiDA. So let me give you a bit of an overview of today's tutorial before we get started. The first step is to make sure that you can log into the JupyterHub cluster. The link is here, it's also on the Slack, and you should have seen my emails. Hopefully everyone has been able to log in; if not, be sure to let me know and I can help you out. When you go to this link, you will see this sign-in box, and there's still this erroneous message, which is a little quirk of the authentication system, so feel free to just ignore it. Plug in your username and password of choice and you will open this JupyterHub AiiDAlab interface that you can then use to get started with the tutorial. Be sure to make a note of your username and password, so in case you get logged out at some point you can still log in; we can of course boot up a new server for you, but it's possible that you would lose some data. In this AiiDAlab interface you can, on the one hand, open a file manager, so you can look at the files that you have created during the tutorial. This may be handy for opening, for example, a provenance graph that you have generated, or the band structure plot at the end of the bands work chain. Most of the work will be done inside a terminal, so you can open a terminal, which simply opens another tab inside your browser, and you can of course also open multiple terminals. This is where most of the tutorial material will be run. We'll start by just running a simple PW calculation through AiiDA. By doing this, you will learn how to import a structure into your database and provide it as an input for this calculation. We'll learn how to install pseudopotentials with the aiida-pseudo package, and how to set up your code, so it can also be stored in the AiiDA database and provided as an input for the calculation, and of course also how to specify your input parameters and the k-point mesh you want to use for your PW calculation. Once the calculation is finished, which shouldn't take too much time, we'll show how to generate a provenance graph and also how to analyze the output of the calculation using the verdi command line interface that AiiDA provides. Next, we'll move to running workflows, because that is of course the main purpose of using AiiDA. Here we'll be using the protocols that I've talked about before to quickly run a band structure work chain for silicon. Once this work chain is running, you can analyze how it is evolving as it completes. In the end, you'll be able to plot the band structure of silicon, and while it's running we'll also show you how to explore the provenance graph that's being generated by the work chain on the Materials Cloud Explore section. And finally, time permitting, we will also show you how to manage and query your data, because as you start running more and more work chains to calculate properties for the structures that you're interested in, you will have larger and larger data sets.
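Going back to the first part of that overview: creating and storing the inputs for the PW calculation happens inside the verdi shell. The sketch below is only an illustration of the idea of storing nodes in the database; the tutorial material builds the silicon structure its own way, and the lattice parameter, cutoffs and mesh here are example values. Depending on your AiiDA version, the Dict constructor may take the dictionary directly instead of the dict keyword.

    # Inside `verdi shell` (an IPython shell with the AiiDA environment loaded)
    from ase.build import bulk
    from aiida.orm import StructureData, KpointsData, Dict

    # Build a silicon crystal with ASE and store it as a StructureData node.
    structure = StructureData(ase=bulk('Si', 'diamond', a=5.43))
    structure.store()
    print(structure.pk)   # note the PK, you will need it to reference the node later

    # A 2x2x2 k-point mesh, stored as its own node.
    kpoints = KpointsData()
    kpoints.set_kpoints_mesh([2, 2, 2])
    kpoints.store()

    # Input parameters for pw.x go into a Dict node.
    parameters = Dict(dict={'CONTROL': {'calculation': 'scf'},
                            'SYSTEM': {'ecutwfc': 30, 'ecutrho': 240}})
    parameters.store()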
So, for that last part, we'll import a small data set that was already run a few years ago, and we'll show you how you can organize your data into groups, which you can imagine as folders and subfolders, so you can more easily find the data that you're looking for. We'll also show you how to query your database, using a tool called the QueryBuilder, which builds a query based on the connections inside the provenance graph, so you can find the results that you're interested in and plot them, to do a sort of high-throughput analysis. All right, so all that's left for me to do is of course to thank the Materials Cloud and AiiDA teams at EPFL, in Europe and beyond, and the funding organizations, first and foremost the MaX and MARVEL projects, for providing all the resources needed for us to develop AiiDA. And finally, I'd like to thank you for your attention. We'll now move to answering some questions, and then, once everyone is set up, we can move to the actual hands-on and get started with using AiiDA.

Thank you very much, Marnik. I don't see questions in the YouTube stream. So if any of the participants want to ask, you can raise your hand and speak, or you can write on Zoom or on Slack. Let's wait a few moments to see whether there are questions; you can raise your hand if you want. Okay, so apparently everything was clear, but maybe more questions will come when you try the hands-on. So yes, I guess we can proceed with the hands-on now. I see a raised hand. Oh, yes, yes. Hello.

Hi, thank you very much for the presentation. I have a very general question about the data, because whenever we upload data, there are loads of data. So the first question is: in AiiDA, do you take care of duplication of data, for example if you upload the same data twice, or someone uploads the same calculation that I have already uploaded? At the current stage, are you taking care of this duplication or not?

As far as I know, on the Materials Cloud Archive, if you upload your data sets, it doesn't try to cross-reference with other data sets to see if there are similar data or similar calculations. So in this sense, there will be some duplication on the Materials Cloud, but it is also quite curated: typically it will be a data set that you have for a certain publication, so it wouldn't just be a random collection of lots of calculations. But indeed, as far as I know, and I'm not one of the Materials Cloud developers, I don't think there's a way of checking whether certain calculations have already been run inside another archive that is hosted on the Materials Cloud. There is one thing, though: inside AiiDA, within your own database, we do have caching features. So if you've already run a certain calculation with certain inputs, there are caching features available so you don't have to rerun that calculation. This only applies, of course, if the inputs are exactly the same, and it works inside one database, so not across the whole Materials Cloud.

Okay. And the second question is that you mentioned that you upload some of the metadata, right? When you upload, it doesn't upload the entire calculation, but just some metadata from that calculation?

Well, on the Materials Cloud, you can upload the data that you choose, right?
It doesn't specifically have to be run through AiiDA, but if you have run your calculation through AiiDA, what AiiDA stores in the database and its repository will depend on what's defined for that specific calculation. For Quantum ESPRESSO, for example, it will store metadata in the sense of information about the calculation job and information that's important for AiiDA, but it will also store the input and output files. Not all of them: you can imagine that you don't want to store all of the charge densities inside your repository, because then your repository will grow quite big, and the same goes for wave function files, which will quickly blow up your database to a very large size. But typically the input file and output file will be stored, and AiiDA will also parse the output files and store certain outputs as nodes inside the database. So there will be a lot of information there that you can use at any point later for your analysis. I'm not sure if there is a specific type of metadata or data that you would like to know about.

No, I was asking in general, because whenever you upload the data, the program is choosing which data to upload, and it's a little bit difficult because sometimes I am also not sure which data I should have uploaded. So whatever the answer is, it's perfectly fine.

So if you run through AiiDA, AiiDA automatically creates this provenance that connects all of your data inside the database. If, say, you have a hundred calculations that are stored in a certain group, you can simply export this group and you will get an AiiDA archive file you can then share. I'm not entirely sure which extra metadata you have to provide on the archive; you will see this during the submission process. But I think it should be quite all right if you've run your calculations through AiiDA.

Okay, okay. Thank you, thank you.

Thank you very much for the question and for the answer. We have another question which I think is very interesting: yesterday we spoke about HPC, we introduced MPI, OpenMP, full parallelization, and in general HPC clusters and also GPU computing. So the question from one of the participants is: are we going to see how to use HPC together with AiiDA? Maybe you could spend a few comments on how AiiDA works with HPC facilities?

Oh, definitely. For today's tutorial, to keep things manageable, we have just set up these cloud resources where everything will be running on what's called the localhost machine, so we won't be setting up a computer and a code on a remote machine. But there are tutorials for this in the AiiDA documentation; you can find how-to guides on how to do it. Typically, for a certain computer like the Marconi100 cluster, you would set up this computer in AiiDA, you would configure how to connect to it, and once all this is configured, you would set up the codes that are available on this computer, for example the pw.x code or the hp.x code, which will all be defined on this computer that is stored in the database. Then you can start running AiiDA on this remote resource. The engine will automatically take care of moving your input files to the cluster and submitting them to the queuing system; we support Slurm, SGE, Torque and several other queuing systems in AiiDA.
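For reference, setting up a remote computer and code is done with interactive verdi commands roughly like the following. The label "my-cluster" is a placeholder, and the exact prompts and options depend on your AiiDA version, so consult the documentation how-tos mentioned above.

    # Set up the remote computer (you will be prompted for hostname, transport,
    # scheduler type, work directory, and so on)
    verdi computer setup

    # Configure how AiiDA connects to it, for example over SSH
    verdi computer configure ssh my-cluster

    # Test that AiiDA can connect to it
    verdi computer test my-cluster

    # Set up a code installed on that computer (prompts for the input plugin,
    # e.g. quantumespresso.pw, and the absolute path of the executable)
    verdi code setup

    # Check what is now available
    verdi computer list
    verdi code list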
The engine will then basically keep checking: what status is the job in? Is it still queued, is it running? And when the calculation has finished, the daemon of the engine will retrieve the results and automatically parse the outputs for the data that you're interested in, and this will all be stored automatically in the database. So in today's tutorial we won't be seeing how to do this, but it's definitely in the online documentation. And of course, if you run into any issues in setting up a computer or a code on a computer, you can ask either on the Slack, on the AiiDA mailing list, or on GitHub.

Thank you, Marnik. Are there any other questions? Yes, there is another one: how about the space available in AiiDA? From Cera. The space available... Yeah, maybe Cera, you could raise your hand so I can let you speak and explain the question a bit.

Can you hear me? Yes, yes. The question was about the available space, in the sense that the outputs grow in volume. So how much free space do we have, or do we have to pay for space in the cloud?

No. I mean, right now we're running AiiDAlab in the cloud just for the tutorial, and the amount of space required here is quite limited. But typically, if you were running AiiDA, you would actually install it on your own workstation or on a server that you have set up yourself, and then everything will just be stored on your workstation; you won't be running AiiDA in the cloud in that case. For example, in my case I have my workstation at EPFL with a nice little hard drive of two terabytes, and for most calculations this is sufficient to store quite a bit of data, because again, AiiDA tries to store a lot of information, a lot of outputs and results, but not massive files such as wave functions or charge densities.

Yes, but my output files, I upload them to the Drive cloud because I have a lot of space there. So can I link AiiDA with Drive?

So you mean store your output files, or your repository, directly on a cloud resource? I'm not sure if you can do that. Maybe Chris knows, but I have never done this, and whether you can automatically link it to your cloud service, I would not know how to answer. Maybe, is Chris still here?

Yeah, sorry, I don't know about this. Do you want to repeat the last bit?

So basically the question is: if you want to store AiiDA data on a cloud resource, is it possible, and how would you do that? Say, define your repository on a cloud resource while running from your workstation?

I don't think that's currently implemented. It's quite possible if you want to do it that way, you can connect over the cloud, but there's nothing inherent in AiiDA to do that as such. I mean, you can set up your AiiDA instance in the cloud and connect via the REST API.

Oh yeah, you could indeed. I mean, if you have a cloud resource where you want to run everything from, I guess you can also just install AiiDA on this cloud resource, right? And then simply run from there. I guess that's also an option.
So if I may, alternatively, what you can do is have Drive installed on your computer and designate a folder on your computer that is shared with Drive, that automatically syncs with Drive, and then have your AiiDA repository there. The only problem would be the database, so it's not a complete solution: you won't have all your data there, but you can have the most weighty, the heaviest parts, so to speak. So if your problem is space, that can be a solution.

Okay, thank you.

Okay. Do we have time, Marnik, for another question? Of course, of course. Okay, so here.

Hi, Marnik. Thank you for your talk. I would like to know, in case we have a local HPC resource to which we have unlimited access, could we run AiiDA permanently on one core and submit jobs to that same cluster?

So you don't have any kind of queuing system, you mean? Do you mean a local resource to which you have permanent access, so there's no queuing system because you have to share the resource with others?

Yes. Well, there would be a queuing system, but the idea is that you have one core for you permanently. So instead of running AiiDA on your own computer and submitting to the cluster via SSH, you would just run AiiDA on the cluster and submit to that same cluster.

Well, you can of course install AiiDA on the cluster itself; I think there would be no problem with that as far as I know, but then of course you would also have to have your repository and everything on that cluster, right? This is maybe important also when, for example, you're having issues because of two-factor authentication. I think you've already done this, that's why you're asking. Yeah, no, exactly, I think this is indeed one of the solutions for the problems with two-factor authentication: installing AiiDA on the cluster and running everything there. So I think there should be no problem as far as I can tell; maybe Chris or Francisco know of a problem that I am unaware of, but I believe this is actually already done in practice, installing AiiDA on the remote cluster.

Okay, thank you. I'll check it out.

I don't see other questions. Okay. Well, if there are more questions later, you can of course always ask during the tutorial as well, either on Zoom, on the Slack, or just by raising your hand. So I think it would be good to get started with the first part of the hands-on. The first order of business is making sure everyone is connected to the JupyterHub. I haven't checked the Slack during my presentation, but I will do so shortly. So again, if anyone has any issues connecting to the JupyterHub, let me know, and I'll try to fix it as soon as possible. For now I will simply stop sharing my screen and hand the word over to Francisco, my colleague, who will be presenting the first part of the hands-on session.

Thank you, Marnik. Let me see if I can share my screen. It seems so. So you should all be able to log in; this is the website. I don't know, Marnik, if you want to put the link in the chat, just in case somebody is not in here yet; you just need to sign in. If you already created the account yesterday, this will take only a few seconds, or if not, it may take a couple of minutes. While we wait for that, you can also maybe paste the address, the link for the tutorial material. And so you can see, everything is here.
So, in order not to repeat too much of what you will already be reading in the instructions and what Marnik has said, I just want to make a few clarifications on the key points that you're going to find in this tutorial. Marnik already gave you an overview of what you will be doing here. The objective of the first part is to eventually run a Quantum ESPRESSO calculation. You have already been running Quantum ESPRESSO calculations these whole two weeks; now you're going to learn how to do this through AiiDA, so that everything is automatically tracked. In order to do so, you first need to populate the database with the information you want to use, so the first thing is interacting with this database. Here is this funny drawing, just to illustrate a bit what I'm talking about. The first step is learning how to interact with the database, because you won't be looking at the database directly: you will be interacting through a user interface, the way you've been working these whole two weeks, with the terminal. So you will have to learn how to send information to the database, to create the data nodes that you will want to use in your calculations. And then, not just create the nodes, but use them, and to use them you need to reference them somehow. In order to do this, you will be using a property of each of the nodes: each node that you create in the database, each node that exists in the database, can be identified by one of two possible identifiers. One of these identifiers is the universally unique identifier, the UUID, and the other one is the PK, or simply ID; PK and ID are interchangeable for us. And what is the difference between these two identifiers? It is explained in the tutorial, and I may actually quiz you on that afterwards, so pay attention to it. But in principle, these are the two things. So when you create a node, AiiDA will tell you: okay, this is the identifier for that node. Then, if you want to do anything with this node, even just showing its information, all you need to do is use this identifier: verdi node show and this number. You will be learning how to use this throughout the tutorial; pay attention to which identifier belongs to each node, and, when you want to reference a node, to which is the proper identifier to use. So that is one thing to pay attention to. Then, as Marnik said, you will run this calculation, and once you know how to do that, how to run a single step, a single calculation, we will illustrate how you can leverage the power of AiiDA to automatically run full workflows. We will do this by providing you with one of these workflows, so you can see how easy it is to set up everything you need, run it, and automatically get the result of what would otherwise be a complex process, a series of different steps, performed automatically. Of course there are ways you can check what's happening behind the curtains, you can see what this workflow does, but we will not get into the details of how to build one of these yourself. You will probably get some ideas, because you will see that we will be doing a lot of Python scripting, so if you catch on to that easily, you can start getting some ideas of how to concatenate different calculations into one of these workflows.
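As a small illustration of the two identifiers (the numbers and the UUID below are made-up examples): the PK is an integer that is only unique within your own database, while the UUID is a long hexadecimal string that remains unique even when data is exported and shared between databases. In the verdi shell you can load a node with either one:

    from aiida.orm import load_node

    node = load_node(42)   # by PK: an integer, local to your own database
    same_node = load_node('2b3c4d5e-1234-5678-9abc-def012345678')  # by UUID, or an unambiguous prefix of it

    print(node.pk, node.uuid)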
But of course, to take full advantage of the specific tools we have for building these workflows, you need more specific knowledge. For this you can check the AiiDA documentation, which Marnik can maybe link in the chat; it has a lot of information on AiiDA, including a getting-started guide with instructions on how to install it on your own workstation. This relates to one of the earlier questions about space or cost: this is completely free, you can install it on your workstation and work from there, and everything is stored locally. We also have tutorials and different how-to guides, which include building these workflows. And if you want a more personalized, dedicated introduction, we will also have soon, in July, a one-week tutorial dedicated to AiiDA; we will maybe talk more about this later, or you can ask us if you're interested, and you have all the information on the aiida.net website. The last thing I wanted to tell you is that, after you have done everything I just described, interacting with the database, creating nodes, referencing nodes, running the calculation, running the workflow, we're going to use a tool on the Materials Cloud that gives you another way of exploring your database. (At the very end we will see one more way of exploring the database, with Flaviano.) Before that, though, we will do some exploring using the Materials Cloud, and this requires you to start the REST API service in AiiDA. I want to show you now how to do this, and I will show it again before the break, when it will be closer to the point where most of you will be doing this part; but for those of you who are quick, I want to show you what you have to do. So you open a terminal, and once you have familiarized yourself with AiiDA and you want to explore your data through the Materials Cloud, you need to somehow connect these two things: the service here, where you have been working, and the Materials Cloud website. This is done in two steps. The first step is just to start the REST API, and it's as simple as running verdi restapi, which is the command here. You just start it and it keeps running in your terminal. You won't be able to keep using that terminal, but this already exposes your database. But since we are inside this JupyterHub virtual machine, we have to do a second step of exposing the database, which is the ngrok setup, shown here. Again, this is just so you have a visual idea of what you will have to do; the instructions are all on the tutorial page. I just want to show you what it looks like, and that it blocks the terminal, so this doesn't surprise you. So I run the ngrok command; again, it blocks the terminal, but it is now exposing my database externally. Now this is ready, and finally you have to go to the Materials Cloud website; this link takes you directly to the Explore section, and here you have the option to connect your REST API. Typically, if you were working directly on your own computer, you would use an address like the one that appears here, but since we have to go through ngrok, what you have to do is copy the address shown where it says "forwarding", the one just before "localhost:5000". You need to copy that one.
And then, at the end, you have to add /api/v4, I think. Then you will be able to connect to your database and explore it through this interface. So again, I just wanted to show you what the process looks like, because there are not many images of this in the instructions, so that you have a visual idea of how to do it and what needs to be copied where. I will try to do this again before the break, so that you get these instructions again closer to when you need them. So I think that's it for me. If somebody has any questions, we can look at them now, or else you can start with the tutorial and learn through practice.

Yeah, just for people who are a bit confused: Francisco was already explaining how to connect to the REST API, because this process is a bit more complicated on these JupyterHub clusters, but you should just start at the beginning of the Quantum ESPRESSO section of the material. So just start there, executing the commands that are shown, and then we can repeat this later on if people have issues connecting to the REST API.

Yes, again, just to give a spatial idea, I showed you the very first part of the tutorial, what you will need to know about PKs and UUIDs (pay attention to this), and the very last part of the Quantum ESPRESSO section, where you will need to make this connection, which can be tricky. But the instructions are there; this was just for you to get an image of what it looks like, so you will be able to follow the instructions more easily.

So, since some people are apparently lost, just to reiterate: you just have to start with the first Quantum ESPRESSO part of the tutorial, right? I've already put the link in the chat. You can see here how to get started with using the verdi command line interface; you can import this structure, look at how to import the structure into your database, and just follow the commands executed here, and let us know if you have any questions. Next you'll move on to running a calculation, where first you set up the code that you want to run, which is just pw.x from Quantum ESPRESSO, and simply follow these commands. Some people were also wondering how to copy these commands into their terminal. Here you can use this nice little copy button: you simply click on it, it says "copied", then you go to the terminal and paste, either by right-clicking and selecting paste, or with the usual shortcut for your operating system. In my case it's a Mac, so I do Command-V; if you're running Windows, it's usually Ctrl-V. And then you can execute the commands. At this point I don't have any codes in my database, so this list is still empty, but once you set up the code, it will show you the code that you have set up.

Okay, Marnik, maybe you want to show them again where to find the link to the cloud computing system where they can run this, in case somebody lost the Slack message with the information.

I put the link again in the Slack recently, so it should still be there. I can copy it again if you like; it's in the final part of the Slack, so you can find the link to the JupyterHub cluster and also the link to the material. That should be fine, though; I think it should be okay.
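To recap the two-step REST API connection that Francisco demonstrated as commands (the ngrok URL below is a made-up example, and the port may differ on your setup):

    # Terminal 1: serve your AiiDA database over the REST API (blocks the terminal)
    verdi restapi

    # Terminal 2: expose the local REST API (by default on port 5000) to a public URL
    ngrok http 5000
    # ngrok then prints a "Forwarding" line such as
    #   https://abc123.ngrok.io -> http://localhost:5000
    # Copy the public https address, paste it into the "connect your REST API"
    # field of the Materials Cloud Explore section, and append /api/v4, e.g.
    #   https://abc123.ngrok.io/api/v4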
If anyone has any problems, again, just either raise your hand here so we can help you out, or let us know on the Slack or in the Zoom chat.

There's a question from Amal on how to specify the PK. Well, if you want to show, for example, the node you just imported, you simply specify the number of its PK. If your database is entirely clean, this PK will just be one, so you specify it as a number on the verdi command line.

There are other questions here in the Zoom chat: when doing outputcat, where are the files saved, in which folder? So, once you've run the calculation, AiiDA will have taken the output files that it retrieves from the calculation and put them in the repository. There are two places where information is stored in AiiDA: the database and the repository. The database is usually for smaller pieces of information, such as specific outputs like the energy and so forth, and the repository is for larger outputs, either larger arrays, which are stored as NumPy binary files, or, for example, the output files, which are also stored in the repository. The repository is, in this case, online, because you're running on this JupyterHub cluster, but if you had installed AiiDA on your workstation, you would specify where this repository is when setting up your profile. You can put it, for example, on a hard drive attached to your workstation, and store these typically larger files there. Something like Git? No, I don't think you can compare it to Git; it's basically just a file repository, a collection of files on your workstation. And if you do outputls, you will see, for a specific calculation, which files are stored as output files in the repository. Then with outputcat you can, for example, print the output, and if you want to store it in a different file, you can of course redirect it in Linux to a specific file using the typical Linux or bash syntax.

I also see a question on aiida-pseudo. If you do aiida-pseudo install sssp and then -h to get the help, you will see how to specify different protocols for this pseudopotential family. I think the option for the protocol is -p, and there's also -v, the option for the version, and so forth. So you can install all of these SSSP pseudopotential families quite easily using the aiida-pseudo install command. We also have support for the PseudoDojo pseudopotentials, which come with automated install commands as well. If you want to install your own pseudopotential family, you can also do so, but for that you have to consult the aiida-pseudo documentation; it's a bit more involved.

To answer Christian's question: for the PK, you actually have to fill in the PK of the node in your database. For example, for the structure, after you've imported it, it will tell you what the PK of the structure node is in your database, and you have to fill in that number when doing the verdi node show command. The same result is shown on screen; I'm not sure, okay, this was also answering that. I'm not sure what Bindiya means with the server question.

To answer Amal's question: you have to put in the PK, the primary key, of your code. So after you set up your code, you can do verdi code list and see what the PK of the code you set up is.
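For reference, these are roughly the commands being discussed; the PK 123 is a placeholder, and you should check the exact options with -h, since they can differ between versions:

    # List the retrieved output files of a finished calculation with PK 123
    verdi calcjob outputls 123

    # Print the main output file, optionally redirecting it into a file of your own
    verdi calcjob outputcat 123 > pw_output.txt

    # Show the options of the SSSP installer, then install a specific family
    aiida-pseudo install sssp -h
    aiida-pseudo install sssp -v 1.1 -x PBE -p efficiency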
So, continuing that answer: once you find that PK, you plug it into load_code, and then it should work for you. Yes, exactly, Amal: if the PK of your code is two, then you simply plug in 2 for the code PK.

To answer Alberto's question: if you've already started the daemon previously, which you should have done when running the calculation, the daemon will still be running. You can always check the status of the daemon by doing verdi daemon status, as I write in the chat now. So you can always execute that command to see if your daemon is running; if it's not running, you can restart it with verdi daemon start, and if for whatever reason you want to stop it, you can do so with verdi daemon stop.

To answer Amal's question: you won't get any output, because you load the code and assign it to the code Python variable. So it's normal that you don't see any output there, but the code is of course still stored in that variable.

For Ignatius's question: the verdi command is for the bash terminal. If you're running verdi shell, you're inside the IPython terminal, so there you cannot run verdi commands; those are only for when you're in bash. You can clearly see the difference: commands that have to be executed in bash are shown with a dollar sign and the code snippet has a blue background, while the commands you should execute in the verdi shell have the In/Out prompts of the IPython shell and a more yellowish background. Actually, Manchester has made a very good point in the chat: if you're inside the verdi shell, you can also execute bash commands by simply adding an exclamation point. So in your case, yes, exactly, copy this; you can do this inside the verdi shell as well, in order to quickly see again what the PK of your structure is, in case you've forgotten it. Alternatively, you can of course open a second terminal, so that in one you use the verdi shell and in the other you execute bash commands, whatever workflow works best for you. Thanks Tim, that's nice to hear.

So for the structure PK, you can execute this command: verdi data structure list. Oh, I now notice that I apparently only sent this to Francisco, because he had sent me a message. But this is the command you can run to figure out again what the PK of that structure is in your database. A syntax error? I'll have to see. Ivan, we do have breakout rooms available in case you want to help people one-on-one, right?

Sure, sure. Wait, let me check, Massimo. Yes, we have breakout rooms. I don't see them. Oh, sorry, someone closed them. Be careful, please, because all of you have the rights to close them. Okay, now I can see them.

I see that Amelia's problem has already been solved, that's great, so a breakout room is no longer necessary, but thank you, Massimo; we'll probably need them later.

So to answer Amal's question: the PKs, again, are primary keys, which are identifiers in your database. As I also mentioned in the tutorial material, the exact value will depend on what's already in your database when you are setting up the structure.
So, to figure out the PK that you need to put there, you have to run this command, verdi data structure list; I'll post it again. You can use it in the verdi shell, in the IPython kernel, and there you will get a list of structures, which should contain only one, the silicon structure you just imported, with the PK of your structure in the database; then you can plug that number into the structure PK.

To Florina: no, remember that using the exclamation mark before verdi works when you are inside the verdi shell, so the Python shell, but if you are in the bash shell, which I think you are, you don't use the exclamation mark.

So, Daniel, to get the code PK: you should normally already have set up your code with the verdi code setup command, and if you've done so, you can execute verdi code list to figure out the PK of your code in the database. If you're doing this in bash, you can just run verdi code list directly; if you are doing it inside the verdi shell, you first have to add the exclamation point.

Okay, well, maybe Francisco, you can move into your tutor room and help Alberto. Yeah, I'll be there.

To answer Joseph's question: the builder indicates that, when you try to run it with the AiiDA engine, it doesn't have the required value for the structure. So at some earlier point you should have provided the structure to the builder, and it seems that this step has not been executed.

To answer Amidio's question about the DataFactory: when you're doing the initial calculation, you actually specify the k-point mesh yourself, if I'm not mistaken, so there it will not automatically determine the appropriate k-point mesh. For the workflow that's ready to run with the protocol, we've done a series of tests with different k-point densities, and we found that a certain k-point density gives a reasonable precision for most structures. So if you're using the default protocol, it will simply determine the k-point mesh based on that k-point density value, which is specified by a distance of 0.15 inverse angstroms between the k-points in reciprocal space.

To answer Florida's question: when you're doing verdi node show, you have to plug in the PK number of the structure. I see you've already imported it twice, so now you have two nodes in your database, with PKs one and two, that represent the silicon structure. You can then just do verdi node show 1 or verdi node show 2, and you will get more information on that StructureData node from your database.

Maybe for the last question: you have to make sure that the structure you are providing is actually a StructureData node. So if you've loaded a node into the structure variable, you need to make sure that its PK actually corresponds to the PK of the StructureData node. For example, if you now type "structure" in the verdi shell, does it show you that it actually corresponds to a StructureData node or not?

So to answer Amal: you have to figure out what the process PK is. You should first run verdi process list, and then you can see the list of processes and get the right PK. I'm assuming that the node in your database with PK 2 doesn't correspond to the actual process node; it will most likely be either the structure or the code.
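Putting the last few answers together, a typical sequence in the verdi shell looks roughly like this; the PKs are whatever your own database reports, and the code label is only an example:

    # Inside `verdi shell`: bash commands can be run by prefixing them with "!"
    !verdi data structure list   # find the PK of the imported structure
    !verdi code list             # find the PK (or label) of the code you set up

    from aiida.orm import load_node, load_code
    structure = load_node(1)     # replace 1 with the PK reported above
    code = load_code(2)          # load_code also accepts a label such as 'pw@localhost'

    structure                    # typing the variable shows the node type and its PK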
So we are getting close to the coffee break. Before you all have a rest and then continue with the tutorial afterwards, I just want to say one thing. For the second part of the tutorial, and I'll put the link in the Zoom chat, we have to import a database from an online repository. If you follow this link here, let me quickly share my screen, you will end up on the second section of the tutorial hands-on, and here is the command that actually imports this database. Now, we've noticed that on these JupyterHub clusters it can take quite a bit of time for this import command to complete. So maybe, while we have our coffee break, it would be good if you could copy this command, go to your terminal, paste it there, and start importing this database already. That way, once you've completed the first part of the tutorial, you immediately have this data available in your database. So now it's technically time for the coffee break. Of course, if you want to keep working on the material, feel free to go ahead; I'll be here to answer questions or to look at any of the problems on Zoom.

Now we're seeing a question from Gosley. So again, probably the node you have loaded into the structure variable in Python doesn't correspond to a StructureData node. You should check again: if you just type "structure" in the verdi shell, does it show you that the node you have loaded is actually a StructureData node or not? Maybe I can also move into a breakout room with you to help you out.

And Amal, to open the PDF file, let me quickly share my screen again: you can go to the file manager. If you just click on this in the main tab of the JupyterHub interface, you can open the file manager and navigate to the file that you want to open. Typically, you can just click on the file and it will open in a different tab of the browser, but if there is an issue, you can also click on the file, click download, and then open it on your machine.

And to answer, for now, the last question: yes, you can open as many terminals as you'd like. This may be handy if you're using, for example, one terminal for the bash shell and one terminal for the verdi shell.

To answer the other question: of course you can finish all the parts on your own. If you have any questions, we'll be monitoring the Slack channel quite closely today and tomorrow, so we'll be happy to answer any questions you have there.

Yes, so to answer Amal's question: the PK you have to load here is the PK of the output parameters dictionary, because this actually contains the energy of your system. To see what the PK of the output parameters dictionary node is, you can do verdi process show. If you've been following the tutorial, the PK should be 90; if not, you have to first check with verdi process list which PKs are available. Then, if you do verdi process show, it will show you, among the output nodes, which one is the actual output parameters node, and you need to plug that PK into the load_node command that you showed.

Maybe I can quickly go into the breakout room with you, Amal; it should still be open. Okay, there's no tutor room with my name on it, but we can just move into the tutor room and then I can have a quick look. Hi, Amal. So it will indeed stay like that, because now the verdi REST API is running.
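In code, that answer about the final energy looks roughly like this in the verdi shell. The PKs 87 and 90 are just examples for a calculation node and its output parameters Dict node; use verdi process show to find yours, and note that the exact key names come from the aiida-quantumespresso parser.

    !verdi process show 87        # lists the outputs; look for 'output_parameters'

    from aiida.orm import load_node

    # Either load the Dict node directly by its PK...
    params = load_node(90)
    print(params.get_dict()['energy'])

    # ...or go through the calculation node and its outputs namespace
    calc = load_node(87)
    print(calc.outputs.output_parameters.get_dict()['energy'])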
But in order to get this to work, maybe I can quickly help you out again in the breakout room. So let's move back into the breakout room and then I can help you set this up. All right, so hopefully most people have come back from the coffee break if they have taken one. Since some people have already indicated on Slack, and also in the Zoom chat here, that they had a lot of issues setting up ngrok and everything needed to use the REST API and explore their database, I just want to quickly show this again, because I think at this point most people, or at least a lot of people, are already at this stage in the tutorial. So I'll simply share my screen quickly and go over these commands so everyone can understand better how to actually execute this. This is in sections 1.5 and 1.6 of the first part of the tutorial. First, as it says here, you have to start the REST API. The verdi REST API simply serves the data in your database, so that other services can send GET requests and retrieve information from your database once the REST API is active. Second, because we're working on a remote resource, we're using this JupyterHub cluster on Amazon Web Services, we have to somehow expose this local URL as a public one. And in order to do this, we use a tool called ngrok. So while the REST API is running, you simply open a second terminal, again, you can just do this by going to the main JupyterHub page and clicking on terminal, and then you start ngrok, which will basically connect the local host where the REST API is serving the data to this public URL here. And this public URL needs to be provided, of course, to the Materials Cloud Explore section, because this is actually the interface with which you can then explore your database. So in turn here, there's a link to the Materials Cloud Explore section, so you simply click on that. You plug in this link, but you still have to add, and this is important, forward slash api, forward slash v4. And if you do that, then you'll be able to connect to your database. And then this looks a little, let's do this, there we go, like this. And then on the left you can look for processes. I haven't run that much here, so there isn't that much data available, but there you can start exploring the provenance of your database. So what's actually happening here, and you can also see this if you look at the REST API here, is that as I explore certain parts of this database and click on something, the Materials Cloud Explore section simply asks for information from your database, which is now exposed via ngrok, right? So the database never leaves your system. It's still on the remote server that you're using now, but Materials Cloud simply retrieves this data via GET requests and then shows it to you in the Explore section. And then if you have a certain calculation, you can look at the input files of your calculation, you can look at the output files, et cetera, okay? So hopefully everyone who's reached this point of the tutorial has now been able to set this up. If not, again, ask questions, we can answer on Slack or go into a breakout room so we can go through the steps in case you have difficulties setting this up. So, yeah, moving on to the second part, you can already start, but maybe Slaviano can first give a brief overview of what you'll be doing in the second section of the tutorial.
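To make the idea that the Explore section only sends GET requests a bit more concrete, here is a small sketch of such a request; the ngrok URL is a placeholder for whatever ngrok prints for you, and the exact response layout can differ a little between AIDA versions:

```python
# Terminal 1: verdi restapi        (serves the database, by default on http://127.0.0.1:5000)
# Terminal 2: ./ngrok http 5000    (exposes that local port on a public URL)
import requests

# Placeholder URL: use the one ngrok prints for you, and don't forget the /api/v4 suffix.
base_url = 'https://abc123.ngrok.io/api/v4'

# Ask the REST API for a handful of nodes, just like the Materials Cloud Explore section does.
response = requests.get(f'{base_url}/nodes', params={'limit': 5})
for node in response.json()['data']['nodes']:
    print(node['id'], node['node_type'])
```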
So if you've completed the first one, great, then you can already get started there. If not, no worries, of course, you can still continue on the first part of the tutorial and then continue later. So maybe, Slaviano, I can give you the word and you can give a little overview of what they will be doing in the parts on managing your data and querying. I'm not sure if you're already talking, Slaviano, but you're still muted. Thank you, Marnik. So I would say that what we learned in the first part of the hands-on was how to import structures, for example from external databases, how to set up codes, how to set up a calculation and run it, and then we did some analysis of our results and so on. Furthermore, we learned how to run calculations through workflows, which take a very complex calculation and simplify and automate the process. And now imagine that we have done this a thousand, ten thousand times. All these bits of information are stored as nodes in the database. Structures, codes, everything is a node, and everything is in the same box; we have a flat organization of those nodes. So what we want to do in the second part of the hands-on is to create some organization, maybe to group together some nodes because they share some common features and properties. So we want to organize this data by creating groups. And we also want to search for data: you might want to run a query to find specific nodes. Let's say we want to find all structures that are the result of a relaxation calculation, not the ones that we provided as input, but the ones that came out as results of calculations. This you're also going to learn a little bit in the second part of the tutorial. So I will let you go through it. Please go to the second chapter of the hands-on and read it slowly, bit by bit. It's important to pay attention to every bit of information, because there are new concepts that need some time to absorb. And also, yeah, let us know when you have any questions. So please do it slowly. For those that have already finished the first part, great; for those that are still working on it, don't worry, please continue working on session one, and when you are done, you can start session number two. We will be here to help you with any question. And speaking of doing things slowly, this first import command, as I've already mentioned before, can take a bit of time. So if you haven't done it already during the coffee break, then maybe just start it now and have a little extra coffee break, or read some of the other section's material already, because it can take a few minutes. This is mainly because this is all run on the JupyterHub cluster on Amazon Web Services, and it seems that somehow it can take quite a bit of time there. So give it a go and let us know. What Marnik means is exactly this command over here, the first command of chapter two; let's run it as soon as possible. It takes some time, about 10 minutes or so. All right, so there are about 45 minutes left for the hands-on, but it seems that many people have already finished all the material, which is good to hear. And here I was worried that you might not have enough time to finish everything. But before everyone's done and already leaves the Zoom meeting, I would like to still share, wait, there you go, some details on how you can stay connected with us after the tutorial. Of course, if you're still working on the material, we'll still be online until 12:30 in the Zoom meeting.
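As a small preview of what Slaviano just described, here is roughly what grouping and querying look like in the verdi shell; the group label is just an example, and the query simply asks for structures that are outputs of calculations rather than inputs:

```python
from aiida.orm import Group, QueryBuilder, StructureData, CalcJobNode

# Create (or fetch) a group so that related nodes are no longer in one flat "box".
group, created = Group.objects.get_or_create(label='tutorial/relaxed_structures')  # example label
# group.add_nodes([some_structure_node])  # add whichever nodes belong together

# Query for StructureData nodes that are *outputs* of a calculation,
# i.e. relaxed structures, not the ones we provided as input by hand.
qb = QueryBuilder()
qb.append(CalcJobNode, tag='calculation')
qb.append(StructureData, with_incoming='calculation')
for (structure,) in qb.all():
    print(structure.pk, structure.get_formula())
```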
And in the afternoon, if you still have questions, don't hesitate to ask on the Slack channel. If you are still working on the material later, a quick reminder that the JupyterHub clusters will be shut down tomorrow at noon, Central European Summer Time (CEST). So make sure that you retrieve any data that you're interested in before then. And afterwards, you can always stay in touch via the AIDA mailing lists, or if you have certain issues that you'd like to raise, you can always go to the GitHub issues page, I have to update this link here. And of course, this is only a small introduction to AIDA and some of its features. In the summer, from July 5th until the 9th, we're organizing a longer tutorial on AIDA. So if you're interested in learning more, you can register there. I'll put all of these links in the Slack channel so you can find them easily. Other than that, you can also still continue working on, wait, I have to share the right screen here, on this extra appendix section that we have here. This is basically a more basic version of the tutorial which, without talking about anything related to Quantum ESPRESSO, explains concepts such as provenance and the different types of nodes in more detail. So you can still work through that if you're interested. With that, I would like to already thank everyone for joining. If you have any more questions, of course, don't hesitate to ask in the Zoom chat, raise a hand, or contact us in the Slack channel. As I've already said, you can always still contact us this afternoon, or whatever the time may be in the time zone you're in, and we'll be happy to answer questions. Thank you very much, Marnik. I don't see questions on the YouTube stream here, but I see that, I think, you already answered all the questions on Slack and Zoom. Yeah, I think most of these were related to the tutorial material, so we've answered there. Yeah. And again, I mean, we'll still be here after the tutorial is over. I just wanted to already make a few notes because I saw people were leaving as they finished the material, but we will be here for at least another 45 minutes to help you out, also in Zoom breakout rooms if you want. And again, continue working on the material if you still have time until tomorrow, and the Slack will stay open even afterwards; I'll keep monitoring the Slack channel, I have it as part of my Slack workspace, so ask questions there and we'd be happy to answer. Keep on working on the tutorial at your own pace. Okay. Do you want us to put up the screensaver here, or do you want to stay connected? Just let us know. Yeah, like I said, I'll stay connected until 12:30, of course, until the hands-on is finished. I just wanted to already make some notes because people are starting to leave. Okay, so we leave the Zoom as is for now. And if someone wants to ask you something, either on Slack or here on Zoom, of course, you are available for answering, right? Yes, of course. Okay. So thank you very much, Marnik. Thank you. You're welcome. Thanks for the invitation. And also to all the tutors, of course. Marnik, do you want to mention the main AIDA tutorial that will happen in July? I thought I did, but maybe it was very quick. I'll put all of this information again in the Slack channel, and it's also at the end of the tutorial material, in the "what's next" part, where there's a link straight to the registration form, so people can find it quite easily.
This will be a five-day tutorial where we explain more, because, of course, there's quite a bit more to learn about AIDA than we can fit into one morning. So I definitely encourage everyone, all you Quantum ESPRESSO experts now, to join this larger tutorial as well. I saw, yes, here, there's a question. Yes, we will come back at 2:30 CEST for the special guest lecture from Nicola Spaldin. So yes, that is the next appointment. And I see that maybe Tony raised the hand, maybe, I don't know, is it? No, I just made a reaction of applause. Ah, okay, okay. Yeah. Thank you. Yeah, so this is kind of confusing indeed. Yes, because yesterday we used the raised-hand option to have feedback: when they finished the exercises, we asked them to raise the hand to let us know at which point they were. So maybe it created a little confusion. To answer Christian's question: if you have run the command that actually creates this PDF, you can then go to the file manager. So on the main JupyterHub page, you can open the file manager and there navigate to where this demo query PDF file is and simply open it there. I have noticed that sometimes, if you try to open it directly, it doesn't work or doesn't allow you to interact with it. In that case, you can just select it on the left and then click on download a bit above that. Then it will be downloaded to your workstation and you can open the file that way. Let me know if this works for you. And to expand a bit on Afonso's question: if you delete a node from the database, AIDA will automatically delete certain other nodes, depending on the connections of these nodes in the database. For example, if you delete a certain calculation, it will most likely also delete certain outputs that were created by this calculation. So be careful with deleting nodes, because you won't only be deleting that single node; you'll also be deleting other connected nodes throughout the database, to make sure that the database stays consistent. There is also a short dry-run sketch of this at the very end below. I can actually, wait, maybe forward you to the documentation that explains how exactly this works. Just a sec, I'm about to copy the link... Great, thanks, Francisco. And you can also change the rules of how AIDA does this, right? Provenance consistency, exactly. There you can see nice little charts that explain better how AIDA actually deletes nodes from the database. All right, we're getting close to 12:30, the official end of this hands-on session, for those that are still working. Of course, again, if you still have any questions after we end the Zoom meeting, don't hesitate to ask on the Slack; we'll still be monitoring it until this weekend for sure. And of course, I would be remiss if I forgot to mention that at 2:30 there is still a special guest lecture, so of course everyone should still attend that. And maybe Ivan still has some final words he would like to say now. Yeah, I was just about to remind everyone about the special lecture. Yes, at 2:30 we have Nicola Spaldin, and we meet again here on Zoom. I think that Massimo can now, when we close, put up the screensaver. And of course, the Slack channel is available for questions as usual. So thank you very much again, Marnik and all of the tutors, for this AIDA hands-on and lecture. Thanks again for the invitation. We hope everyone was able to learn quite a lot. And of course, we also thank all the participants for their participation. Thank you. Ciao.
Ciao, thank you.
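One last note on the node-deletion question that came up above: before actually deleting anything, you can do a dry run and see which connected nodes would be removed along with it. A minimal sketch, assuming a recent aiida-core where delete_nodes is importable from aiida.tools (older versions expose it under a different module path), and with a placeholder PK:

```python
# Roughly equivalent on the command line: verdi node delete <PK> --dry-run
from aiida.tools import delete_nodes

pks_to_check = [1234]  # placeholder PK of, say, a calculation node you want to remove

# Dry run: report which nodes would go along with it to keep the provenance graph
# consistent (e.g. the outputs created by a deleted calculation), without deleting anything.
deleted_pks, was_deleted = delete_nodes(pks_to_check, dry_run=True)
print(f'Would delete {len(deleted_pks)} nodes: {sorted(deleted_pks)}')
```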