So, good afternoon, and sorry about the delay. I haven't prepared a detailed talk; I want to present a little of what we've done in changing the code, and also to encourage you to tell us what your needs for the library really are. This is a project I've been doing with my colleagues Leon Petit and Barry Searle at Daresbury, under the CCP9 programme, the UK network supporting electronic structure, and we've done these developments together with Jonathan Yates and Giovanni Pizzi. We've been at this for perhaps 18 months, and at the moment I would say we are starting the final stage. I'm afraid it's not physics, it's just Fortran. So, for a new library: a library has to be easy for a code to use, obviously. Compared to the original, existing library, we also want to let a calling DFT code, or whatever kind of code wants to use the library, access the whole range of functionality in Wannier90 and post-Wannier90. In particular, we were given the task of parallelising the library, and that imposes a lot of requirements on the underlying source, which motivates most of the work. We want it to be callable from Python, because to be honest Wannierisation is often the first step in somebody's Python workflow, so that's a key thing for us. We would like it to be callable from C, but we haven't reached that point yet. The library also needs to be well behaved, in that it can't cause the calling program to crash. As a standalone executable it's completely fine for Wannier90 to protest and die, but if you have 50 GPU nodes and the job is suddenly killed by something trivial, that shouldn't happen. We also need to write a library interface which is not going to change. Obviously the functionality in Wannier90 is going to increase.
But it's very important that we commit now to a library which is not going to force software developers to change their code whenever new functionality is added. That's important. To begin with, we found it necessary to restructure the code quite severely, because essentially until now all of the data in Wannier90 has lived in one large module which is used practically everywhere in the code; it is tantamount to a common block. Unfortunately, the way the data in this large parameters module was assigned and allocated meant that you had to invoke the different subroutines of Wannier90 in a very specific order: it was necessary to run a series of reading routines to fill up this large data structure before you could do any further calculation. We've changed that by breaking the data down into smaller data structures, each relating to some specific task, bundling them into a more manageable module structure, and then passing all of that data explicitly as function arguments. The way we have reconfigured it, Wannier90 now runs more or less exclusively as functions which are more or less pure: given the same arguments, they will always return the same results. For a library that's tremendously useful; I'd say it's indispensable, and we've made it happen. In addition, we now pass the MPI communicator, which essentially embodies the MPI framework, throughout the code, so essentially the whole code is now parallel. That does not mean that the different parts of the code use parallelism effectively (we haven't changed the parallelisation strategy), but all of the different subroutines could now be made parallel if one wanted to do it. So, those things are fine.
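As a rough sketch of this style (all names here are hypothetical illustrations, not the real Wannier90 interfaces): data is grouped into small, task-specific derived types and passed explicitly, together with the communicator, so a routine's result depends only on its arguments:

```fortran
module sketch_types
  implicit none
  ! hypothetical grouping of data into small, task-specific types
  type :: kmesh_t
    integer :: num_kpts = 0
    real(8), allocatable :: kpt_latt(:, :)
  end type kmesh_t
  type :: options_t
    integer :: num_iter = 100
    real(8) :: conv_tol = 1.0d-10
  end type options_t
contains
  ! everything the routine needs arrives as an argument, including
  ! the MPI communicator; nothing is read from module-global state
  subroutine do_work(kmesh, opts, comm, spread)
    type(kmesh_t), intent(in) :: kmesh
    type(options_t), intent(in) :: opts
    integer, intent(in) :: comm
    real(8), intent(out) :: spread
    spread = 0.0d0  ! placeholder for the actual calculation
  end subroutine do_work
end module sketch_types
```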
What is a little more invasive is the requirement to make the error handling in the code more robust, and unfortunately error handling in Fortran is really primitive. The behaviour we want is this: a routine which somehow encounters an error state should not cause the program to crash, but immediately return control to the calling subroutine with some indication of what the error was, such that when Wannier90 ultimately returns to the calling program, the calling program can decide what it wants to do. In particular, there are some conditions which are not errors, but which are also not perfect execution. For example, if a Wannierisation has not converged, that's not an error state, but it is a condition the calling program needs to know about. In that case, for example, we've configured it so that an error code is returned to the program, and the program can then decide whether or not it wants to redo the Wannierisation. Unfortunately, doing that in Fortran is a bit of a pain, and I'll talk about what it involves. With those changes in place, we have essentially made the main executables, both wannier90 and postw90, into wrappers around the library. What the library really means is the set of subroutines with their now rather long argument lists, which (and this is what we're currently working on) can also be shortened with shorthands and other wrappers. The executables are now nothing special; they could be any DFT code calling the machinery underneath. Point seven is where we are now: actually designing which bits of data we can reasonably require a user to pass to the different routines, and what we need to fill in for them.
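On the calling side, the idea can be sketched like this (a toy, with hypothetical names; the sign convention for the code is the one described later in the talk):

```fortran
program caller_sketch
  implicit none
  ! hypothetical error type mirroring the one the library returns
  type :: error_t
    integer :: code = 0
    character(len=128) :: message = ''
  end type error_t
  type(error_t), allocatable :: error

  ! pretend the library signalled "not converged" (negative code)
  allocate (error)
  error%code = -1
  error%message = 'wannierisation not converged'

  if (allocated(error)) then
    if (error%code < 0) then
      ! recoverable condition: the caller could redo the run,
      ! e.g. with more iterations
      print *, 'retrying: ', trim(error%message)
    else
      ! genuine error: report and shut down cleanly
      print *, 'fatal: ', trim(error%message)
    end if
  end if
end program caller_sketch
```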
So, yeah, the first thing is the data types. In the past, large amounts of data of various kinds were stored in this parameters module, which we've broken up into lots of different types; there is now a plethora of types, each containing relatively few data members with some specific application. What we've also done is move the initialisation of these data members from the parameter-reading routines into the type definitions. The advantage is that in the library interface we can create an instance of these variables and they are already initialised. So in the future, when you come to add new variables, this unfortunately means a little more work: you need to find the appropriate place where that variable belongs, which I think is not terrible, and also to give it a meaningful default. For those cases where a meaningful default is not possible, we require the user to specify it somehow. So this is the first change. Here I've simply grepped for the different types we've defined. Some are specific to the wannier90 executable (wannier90 just does the Wannierisation, the disentanglement and some rather simple plotting); some types are shared between wannier90 and postw90; and there is a whole set of essentially properties variables which are only used in postw90. We have essentially separated wannier90 and postw90 entirely: any code specific to one or the other is in the corresponding place, so everything that relates to wannier90 is now definitely in a file with wannier90 appended to its name, those things common to both are not, and those things specific to postw90 are in the postw90 directory. But there are many types now.
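Moving the defaults into the type definitions relies on Fortran's default initialisation of derived-type components: a freshly created instance is already valid. A minimal illustration (the member names are made up):

```fortran
module param_types
  implicit none
  ! defaults live in the type definition, so any instance is born
  ! initialised, with no reading routine required
  type :: wann_options_t
    integer :: num_iter = 100      ! sensible default
    real(8) :: conv_tol = 1.0d-10  ! sensible default
    integer :: num_wann = -1       ! no meaningful default exists:
                                   ! the user must set this explicitly
  end type wann_options_t
end module param_types

program show_defaults
  use param_types
  implicit none
  type(wann_options_t) :: opts  ! already initialised on creation
  print *, opts%num_iter, opts%num_wann
end program show_defaults
```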
Passing these types down the call tree requires a little bit of work. In particular (there wasn't enough space on the slide for this) you need of course to use the modules which define the types themselves, then declare the instances of the specific types you need. Going down from the top-level call, wannier90 or postw90, you start off with many instances of these derived types, and as you get towards the bottom, to the more specific things like computing the gradient and so on, the arguments become simply the underlying trivial values. So in this routine we pass various derived types, and in other cases we don't, and simply pass one variable from them. We've tried to do that in a reasonable way, but unfortunately it does mean that all of the functions have long argument lists, and there's no way back. In other cases a lot of the code becomes simpler. For example, we got rid entirely of the checking of whether or not you're on the root process before printing. If you want to write to standard output: firstly, don't assume that standard output is what you think it is, because we actually hand standard output back to the calling program; more specifically, we ask the calling program to give us a unit number which we write to. So you cannot simply write to *, because your output may then go who knows where. Also, whatever you write, check that the verbosity level is greater than zero, because that is also the flag for whether or not you're on the root process. The error handling is really tricky, in particular the error handling under MPI, which I'll say more about. It is simply the case that Fortran doesn't offer a nice way to do it, and this means you have to change the way you write code to emulate error handling, which is a little unfortunate. We define an error type, and the error type stores a string, the error message, and a number.
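The error type itself can be as simple as a message plus a code; this is a sketch, and the real type in error.f90 may differ in detail. The library-side variable is declared allocatable, so that `allocated(error)` doubles as the error flag:

```fortran
module w90_error_sketch
  implicit none
  ! a code and a human-readable message; unallocated means "no error"
  type :: w90_error_t
    integer :: code = 0
    character(len=256) :: message = ''
  end type w90_error_t
contains
  ! setting an error allocates the variable and fills it in
  subroutine set_error(error, code, message)
    type(w90_error_t), allocatable, intent(inout) :: error
    integer, intent(in) :: code
    character(len=*), intent(in) :: message
    allocate (error)
    error%code = code
    error%message = message
  end subroutine set_error
end module w90_error_sketch
```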
So, in the case of an error: here we check whether or not an allocation succeeds; if it doesn't, the status variable is set to something greater than zero, and we call the error handler, if you like. All this does in reality is allocate a variable called error and give it a message. There is also an MPI operation, because all the processes, all the different ranks, need to be aware that an error has happened on any of them; that imposes some restrictions too. In the event you discover an error, you set the error flag, and then you return. Unfortunately, you have to return, because you can't throw or catch; you have to give up. But you then need to check any function that could set an error, to see whether or not it has set one, and if it has, you must again return. So here, for example, you call wann_phases, which presumably does some allocation that may fail. It may or may not have set the error condition in the error variable, and if it has, you have to also return. As a programmer you now need to add these steps for every function that could generate an error, which is a little laborious. As for the particular way we codify the error, in addition to the string there is the number: the string is given in the argument, and the number is given by which routine you use. We provide five or six different error-setting routines which give different numbers. In reality, the numbers are unimportant: you get a negative number if it's something that could be fixed, for example by iterating further, or a positive number for genuine errors. The kinds of errors you have: here, for example, an allocation failure, but there are also input errors, if the user has given some bad value, or whatever, it doesn't matter. In any event, the routine allocates the error variable. So when the error variable does not contain data, it is unallocated, but it is allocatable, and the purpose of that is rather nice.
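The set-and-return discipline looks roughly like this fragment (hypothetical routine names, assuming an error type and the error-setting helpers just described):

```fortran
subroutine compute_step(n, work, error)
  ! hypothetical routine showing the propagation discipline
  integer, intent(in) :: n
  real(8), allocatable, intent(out) :: work(:)
  type(w90_error_t), allocatable, intent(inout) :: error
  integer :: ierr

  allocate (work(n), stat=ierr)
  if (ierr /= 0) then
    call set_error_alloc(error, 'failed to allocate work array')
    return  ! cannot throw in Fortran: give up and return
  end if

  call inner_routine(work, error)
  if (allocated(error)) return  ! propagate: check after every call
end subroutine compute_step
```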
There is, if you like, a trick: upon entry to a subroutine in Fortran, any allocatable dummy argument which is marked intent(out) will be deallocated. So we can hot-wire that behaviour to cause a real error handler to be called. In this case, if you enter a subroutine with an error which has already been set, then the deallocation of the error variable will cause a catastrophic failure. But the only way that could happen is if a developer has failed to check the error value upon return from a routine. So this actually provides a tool to make sure that a developer hasn't missed an error. Because this would crash the code (you can see that untrapped_error is called if this allocated variable is ever deallocated), and you don't want that in production code, you can comment it out. We think this is of use particularly for development: when things are working, it should never happen, but it's there. The other big headache with the error handling is that all the processes need to know when an error happens. But how are they made aware of it? The reason that's a problem is that in MPI you can of course send a signal that you have discovered an error, but the other processes need to listen for it, which means effectively that you need to poll periodically for an error condition. And that's what we do. The way we do it is with a collective reduction: we reduce the error variable over all ranks, and we do that before every MPI operation. The way that works is that you can arrive at this reduction from two directions: either everything is okay on that rank and you arrive on the way to a normal MPI operation, or something has gone wrong and you've arrived via the error handler. So that works very nicely. The only problem is that, well, there are two problems.
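The intent(out) trick can be sketched as follows (a toy, not the real implementation): a final procedure fires when the allocatable error object is deallocated, which happens automatically on entry to any routine that takes it as an allocatable intent(out) dummy, and that can only occur with the error still set if a caller forgot to test it:

```fortran
module trap_sketch
  implicit none
  type :: error_t
    integer :: code = 0
  contains
    final :: untrapped_error
  end type error_t
contains
  subroutine untrapped_error(err)
    type(error_t) :: err
    ! reaching here means an error object was still allocated when it
    ! was deallocated, e.g. on entry to an intent(out) dummy: some
    ! caller failed to check. in production this stop can be
    ! commented out.
    print *, 'untrapped error, code ', err%code
    stop 1
  end subroutine untrapped_error

  subroutine next_step(error)
    ! intent(out) + allocatable: Fortran deallocates any incoming
    ! allocation on entry, triggering the finaliser above
    type(error_t), allocatable, intent(out) :: error
  end subroutine next_step
end module trap_sketch
```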
The first is that you have an additional collective MPI operation for every MPI operation; but it's only one integer, so it's not likely to be pathological. The second, more serious, thing is that this strategy of reducing the error variable only works if you only do collective operations. If you also, or additionally, do point-to-point communication, then you cannot guarantee that the other ranks are in a position to take part in the collective reduction on the error variable. So in practice that means that you cannot currently, with this system, do point-to-point communication; or you can do it, but it may fail in terms of the error handling: if the point-to-point generates an error, you cannot guarantee that the other ranks will be in a position to detect it. In practice we noticed there was exactly one use of point-to-point in the code, and we've reformulated it, so there are now none. But if somebody can think of a way to do point-to-point with error handling, we would love to know, because it's a puzzle, actually. Again, we don't know in real terms what the reduction of an integer means on a practical job: is it a serious overhead? You can deactivate it if it is; we don't know. Having done those things, we reached making the library interface. What constitutes the library interface are essentially the subroutines which we have written and given argument lists to; alternatively you can invoke essentially the whole of wannier90, and that's essentially what the old library did. The old library would also parse the input file, for example, and set up the same data in the same way. Obviously we want something different: we want a set of interfaces which are as easy to use as possible, but which are also not complicated.
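The polling can be sketched as an allreduce on a single integer, placed before each collective (a hypothetical helper, using the mpi_f08 module):

```fortran
subroutine sync_error(comm, my_code, global_code)
  ! reduce the per-rank error code across all ranks: every rank
  ! arrives here either on its way to a normal collective, or via
  ! the error handler, so an error on any rank is seen by all
  use mpi_f08
  implicit none
  type(MPI_Comm), intent(in) :: comm
  integer, intent(in)  :: my_code      ! 0 means no error on this rank
  integer, intent(out) :: global_code  ! nonzero if any rank failed
  call MPI_Allreduce(my_code, global_code, 1, MPI_INTEGER, MPI_MAX, comm)
end subroutine sync_error
```

The caller would test `global_code` and return (unwinding as in the error-handling pattern) before issuing the real collective; this is exactly why the scheme cannot cover point-to-point messages, where no matching rendezvous on all ranks is guaranteed.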
What we have elected to do is to keep those arrays which are physically meaningful and essentially fundamental to Wannierisation exactly as they are, and we expect the user to pass them; by user I mean a DFT code developer. In addition, we have prepared agglomerations of these derived types with their defaults. In the DFT code you would create an instance (or however many you like) of this composite type, modify it, setting the variables or options that you want, and then pass it to the routines which do the actual numerical work. That's our plan. At the moment, this is how the library interface looks; you can have a look. I think our branch is not public, but we can make it public this week. We have really simple functions: at the moment we can also simply read the input, so if you want to, you can replicate the old behaviour. You define an object of this composite type (called, whatever this instance is called, helper). There are also some options for plotting and transport which wannier90 also takes, and then simple stuff like the M matrix, etc. So this essentially becomes our library interface. What is missing from it is a nice way to set values. And that's a little bit tricky because, firstly, in the parsing machinery that exists in wannier90 there are actually a lot of detailed and important checks on the sanity of inputs, and it is also the case that different inputs affect the allocation of variables which are required at different times. So it is not possible to provide simple setter functions for all of the large number of data members. Instead, what we have in mind is again to go through the reading parser, but no longer acting on an input file; acting instead on a data stream which we set up.
You would specify exactly the keyword string that you would put in an input file, as a string, to set an option, together with some value; the value would not be stored as a string, it would actually be data. There would be a set of different functions to accommodate the different kinds of data and arrays, and those would then be interpreted by essentially the parsing routines which exist now. That's what I would like to do this week. It would be extremely helpful if you can tell us what you would like or require as a program which might call the Wannier90 library, because essentially at the moment we are really defining what these interfaces are. One thing we have noticed: my colleague Barry has been using the Python wrapping software developed by James Kermode, called f90wrap. f90wrap essentially calls f2py; it generates a large set of C interfaces and then interprets them and builds a Python interface to the objects we've defined. And that works essentially out of the box. There are a couple of examples in the source which do exactly what you expect: you set up some numerical arrays; in this case we set up the list of k-points explicitly; we read the input from an input file (one part of the library interface is a routine which will read the input file), so it reads options from the input file and the set of k-mesh k-points needed for this example; and ultimately it reads the overlap matrices and calculates the maximally localised Wannier functions. This all already works, which is quite nice. In this instance, Python of course has nice object orientation and understands our module structure very transparently. Unfortunately, that's not the case for C.
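The setter idea, keyword strings paired with typed values funnelled into the existing parser, might look something like this (entirely hypothetical names; the real set routines are still being written):

```fortran
module set_option_sketch
  implicit none
  type :: settings_t
    ! in the real library this would stage keyword/value pairs for
    ! the existing parsing routines; here just a counter
    integer :: num_set = 0
  end type settings_t

  ! one generic name, one specific routine per value type, so the
  ! value is passed as real data rather than as a string
  interface set_option
    module procedure set_option_int, set_option_real
  end interface set_option
contains
  subroutine set_option_int(s, keyword, value)
    type(settings_t), intent(inout) :: s
    character(len=*), intent(in) :: keyword
    integer, intent(in) :: value
    s%num_set = s%num_set + 1  ! placeholder for staging the pair
  end subroutine set_option_int

  subroutine set_option_real(s, keyword, value)
    type(settings_t), intent(inout) :: s
    character(len=*), intent(in) :: keyword
    real(8), intent(in) :: value
    s%num_set = s%num_set + 1
  end subroutine set_option_real
end module set_option_sketch
```

A DFT code would then write, for example, `call set_option(settings, 'num_iter', 500)`, using exactly the keyword it would have put in the input file.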
So, what do users need to know? Nothing, actually, because all of these changes don't affect the behaviour of the code at all. One thing which is the case is that there's now really no separation between the main executable and the library. In the past, users would compile a parallel executable for wannier90 and postw90 and, in the same directory, partially recompile the sources to give a serial library. That no longer works. In some sense the compilation environment is different: either you compile the MPI version or you don't. That's important; it actually caused an issue for somebody already. What developers need to know: you need to know what all the different types are, because we've taken all of your data and packaged it up differently. Unfortunately, when you change things you need to go and find where everything now lives. But we think (and Jonathan kindly went through this rather laboriously) it should not be insane; still, it is a big change. The definition of the read routines has also changed; if you add new options, for example, that would be affected here. The read routines will change again when I change the behaviour to also allow reading from a stream, so those parts of the code are subject to change. One of the things that's changed is that, because we separated the read routines for wannier90 and postw90, it is no longer the case that each of the codes recognises the keywords of the other, so one has to clear them out explicitly. If, for example, you add a new keyword to postw90, you also have to put it in the function which will get rid of it in wannier90. That's a detail, but it needs to be known. It's really useful to give defaults to new variables, because in the library use, where a DFT code has created an instance of our large variable, our blob of data,
it will then go through, if you like, a reduced form of input parsing; it is not the same parameter routines as were called before, which would do a lot of assignment and initialisation. So if there is a data variable which can be initialised to a sensible value, then it should be, and it should be in the type definition, if that makes sense. For error handling, there are a few routines in error.f90; you can choose one of the five or six that we define, or add to them if they're uncomfortable. Essentially they all do the same thing: they allocate the error variable, do an MPI call if it's an MPI build, and otherwise assign a number. They're really all the same. The bigger thing is to check the status of the error variable after all the function calls. That's a little laborious, but there's no other way; if there is another way, we would be really glad to get rid of the thousand-odd occurrences of return we had to add to the code. Again, also the standard output. Standard error I don't think we touch anywhere: standard error at the moment is only touched by the wannier90 executable, so nothing in the library code, nothing below the main routines, should be touching standard error at all. And yes, there is this restriction on collective MPI operations: we honestly cannot find a nice way to make point-to-point work. It may not really be possible; in some sense, the propagation of an error is an intrinsically collective problem. So, where are we? The Python interface my colleague Barry is working on at the moment. My colleague Leon is working on a use of the library in the pw2wannier90 code. This week I would like to write the set routines, and that's important because the set routines are the way in which you would provide options to this big blob of data, in addition to, obviously, the simple matrices that you pass.
Later, when we have, if you like, a polished library definition, we'd like to see how we can improve the use of MPI; at the moment the parallel decomposition is rather simple. One of the key things we were assigned to do is to investigate cases where the MPI decomposition used by the DFT code is of a very different nature to what would be optimal for Wannier90. So an extension of the library interface might be a way to reconfigure, or perhaps partition, the MPI communicator to do more efficient MPI. So, we've made a lot of changes to your code, but we think they're improvements. On the one hand, passing arguments comes with a small performance overhead, which you can notice and measure. But what we haven't done, and what now becomes possible: in many places in the code, because it was not always clear what the state of the large parameters module was, there are calls to functions just in case, particularly in postw90. There are many places where the function to calculate the real-space Hamiltonian is re-invoked even though it is not necessary, just in case it might be. Now that the real-space Hamiltonian, for example, is passed explicitly to all of the functions that need it, it's not necessary to have this just-in-case strategy. So some simplifications can happen, I think, because you no longer need to rely on the state of the underlying common address space: you have simply the arguments the function receives, and they are guaranteed. I think in the future that will lead to some simplification, even if at the moment the argument lists are a little cumbersome. I think we have some time, so really we're very interested to know what you think, or what you would like in the interface definition. Thank you.
Thank you very much for sharing your work; I think this is of paramount importance and relevant for all of us, users and developers alike. We have time for questions. I have three small questions. First: could you create a public branch as soon as possible on GitHub, and also make it very clear when you intend to merge it into the develop branch, so that we have enough time to test? Because I expect this will take us some time to adjust and also to give you feedback. So, the state of affairs is that the use of types, the argument lists, and the error handling are now in the main development branch; those constitute, if you like, the painful parts, and they are in already. What is different now are the additions of the library interfaces, which sit on top of that. So certainly, yes, we will indeed make it public, because it would be nice also to have other people join in. And if we have an issue, because I think it's difficult for me to tell you exactly what the potential problems would be if I don't actually try it. The situation is that, now that all of the refactoring, if you like, is done, the library interface itself is in one file; there is a file with these subroutines, which is all rather straightforward, since all they do is call other routines. So we would be very happy to have people working on it with us. Yeah, so let us know when you make it public; that was my first question. The work has been in a private repository because it was messy, and there was quite a lot of dirty laundry that needed doing, and we felt that was best done in private. But now,
we could certainly put everything in a branch on the main repo, because essentially everything we're doing now is on top of the existing functionality; nothing we do now breaks backwards compatibility in any way or changes anything anyone would do. The existing changes certainly did, but not anymore; if you like, we're really over the hill. Yes, and if you make, for example, this file available, then we can go through it. So the second question: do you plan to implement or increase support in the test suite specifically for this library? I mean not testing wannier90 calling the wannier90 library, but really maybe writing a small program that would pass parameters to the library, with a set of reference data. That would be easier for other codes, because external codes can reach different states that wannier90 itself would never experience, and if we find some, we could easily add new tests ourselves with those specific parameters. So make it a framework that allows us to easily add a new test with those weird parameters that might be relevant to us. Yes; indeed, we have two ideas along these lines. Firstly, Leon is going to make a pull request in Quantum ESPRESSO for the work he's doing in pw2wannier90: essentially he is replacing their call to the old library with the new one. And we're going to make a minimal driver in the test framework: there is a minimal driver for the old library, and there will be something similar for the new one, shorter, because it's a simpler interface, actually. And we'll add one or two tests to the test suite. Okay, that's fantastic. And finally, and I guess this is a detail, do you plan to support some form of automatic documentation, like FORD? Because I saw that you had double exclamation marks.
So, yeah, we took what comments we found and kept them; we haven't added a lot of documentation. We're a little old-fashioned. Thank you for all this work. I have a question about this functional paradigm. Currently I have little experience of this new interface, but say I want to develop a new functionality inside some inner function, and I need some input variable that was not there already, for example the guiding centres. Previously, what I had to do was just write 'use parameters, only: guiding_centres'. But now I have to add the guiding-centres type to the input arguments, and I have to do that recursively for all the functions in the call stack, to pass the argument down to the innermost function. That becomes quite complicated when developing new functionality. Yeah, so, indeed, it is also the case that in different parts of the code, variables are needed, or used, at the bottom of the call structure in a way which is absolutely unnecessary, but which requires that they are present in argument lists everywhere. Unfortunately, that is slightly true. On adding new variables: we found that in reality it's very unusual to have a new variable in isolation; the new variable you want to add will probably already be grouped with other variables which exist for the kind of task you are doing. So in that case, let's see if I have an example: if you added something to the site-symmetry type, that type is already being passed as such, so anywhere the site-symmetry data is being used, it's already present. So it isn't as bad as it could be. In principle, yes; in practice, normally you put it in one of the predefined types and it's fine, and it will be present where you need it, because the other things you need are already there. That's true.
So my question is: would it be possible to expose some constants, well, not constants, but some input parameters that never change once set, as global constants, instead of passing them as arguments? For example, the k-mesh information, like the size of the k-point grid, which will never change within one Wannier90 run, but changes from calculation to calculation. Okay, this comes later, but I will try to answer the comment, which is the reason to try to avoid global variables, which in a sense is what we had before and what you're suggesting. If you just run one Wannier90 run, that's not a problem, but you can imagine a DFT code wanting to run 20 Wannierisations at the same time in 20 different communicators. If you have a global variable, the k-mesh could be different in every calculation. That's why Jerome and colleagues had to really split it up, and you're right that it's going to be more cumbersome. Hopefully, given that the variables are grouped in types, you might already have what you need; we spent some time together, Jonathan, Jerome, etc., trying to collect them logically, so you should probably already find it there, but if you don't, indeed you will need to add it. Actually, this is a good point: maybe we should think about what we declare as the public interface, in the sense that, if we want it to be used as a library, we don't want people, even if they are Fortran codes, to use the internal routines directly; we want them to use those few functions that Jerome showed before. In this sense, the comment I wanted to give is that it would be good if, today or tomorrow, we got together, with anybody here who is using or wants to use the library, to start collecting use cases, a little working group on this.
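The point about avoiding globals can be illustrated with a toy (hypothetical types): with all state held in instances, two Wannierisations with different k-meshes can coexist, for example in different sub-communicators, without interfering:

```fortran
program two_instances
  implicit none
  ! hypothetical per-calculation state: nothing is global, so two
  ! concurrent runs with different k-meshes cannot clash
  type :: calc_t
    integer :: mp_grid(3) = 0
  end type calc_t
  type(calc_t) :: calc_a, calc_b

  calc_a%mp_grid = [4, 4, 4]  ! one calculation on a 4x4x4 mesh
  calc_b%mp_grid = [8, 8, 1]  ! another, concurrently, on 8x8x1

  print *, calc_a%mp_grid, calc_b%mp_grid
end program two_instances
```

With a module-global k-mesh, the second assignment would silently overwrite the first; that is precisely the failure mode the refactoring removes.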
Yeah, just to add to that: it's a balance between two things. On one side you have a very well-defined state for the calculation, so that you can run, as Giovanni said, multiple instances with clearly defined states, and you don't have these variables floating around that you never quite know what they're set to. On the other side there is an extra overhead in development, which is, as you say, that if at some point in the future you do need to add a new variable, it somehow has to be added to every subroutine call in the entire stack. So potentially every code that interfaces to Wannier90 might have to tweak its interface to include this additional variable, in the worst-case pathological scenario where that variable makes it all the way up to the top. So there's this balance between those two things, I think.

Exactly. That's why I was saying that hopefully we won't have things missing, and that we should really design these few interfaces, ten, maybe twenty of them, very well, so we have everything there, and those should never change, because those are the ones codes should use. That's the plan. We can always add more, but the existing ones should not change over time. There are always ways to do it: we could make a new version two or whatever with more parameters, but it's going to be very painful to maintain, so I think this design discussion is very, very relevant.

So, you mentioned f90wrap; I'm not all that familiar with it. How does it deal with user-defined types, mapping Fortran compound types into native Python data types, in terms of dictionaries or arrays? How it does it internally, I do not know; that it is capable of doing it, I do know. Okay, but I don't know how, in the Python code that you showed.
I don't know if you could navigate to that chart. I can't see it all that well, but it appears that you're calling the input reader routine in Python in this case. Yeah. And what does the return value look like, or how does it get populated?

So, in this case, for example, it affects data: the state of data is changed by this call. Data is an instance of this composite data structure with lots of data structures inside it. Where are we... it's a lib_global type, which is essentially a large container for all of the different data types that we have. Okay. And the class structure here is the same as our module structure. Hopefully I'll talk about how that actually manifests in Python; I'm not a Python programmer.

As I understand it, and I don't know f90wrap in detail, it builds on f2py from NumPy. I think it natively maps basic types into Python: arrays to NumPy arrays, plus strings and integers. And what you can do, which I guess is what happened there, is call through to Fortran to initialize the types and get back a handle to them in Python, which you can then use. I'm not sure there is an easy way to map from dictionaries directly into the types. In the end, the way it was worked around, as was said, is to have setter routines: you pass a string, and there is some parsing which takes care of setting the correct Fortran types. So essentially you pass strings, and the floats and integers come from those. Yeah, it's possible, you're right. But this is not a dictionary, it's an object. Yes.

So, first of all, amazing work. Have you thought about incorporating non-blocking MPI calls to facilitate the error handling? So yes, we thought about this; of course you can write the error state into an MPI window.
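The setter approach just described, pass a string and parse it into the right type, can be sketched roughly like this; the keyword table and names below are invented for illustration and are not the library's real keywords:

```python
# Rough sketch of string-based setters: the caller passes "keyword = value"
# strings, and the parser coerces each value to its declared type before
# storing it on the settings object. The keyword table is hypothetical.

KEYWORD_TYPES = {"num_iter": int, "conv_tol": float, "restart": str}

def set_option(settings: dict, line: str) -> None:
    key, _, raw = (part.strip() for part in line.partition("="))
    if key not in KEYWORD_TYPES:
        raise KeyError(f"unknown input keyword: {key}")
    settings[key] = KEYWORD_TYPES[key](raw)   # string -> typed value

settings = {}
set_option(settings, "num_iter = 200")
set_option(settings, "conv_tol = 1e-10")
```

On the Fortran side the same idea would dispatch on the keyword name and read the value with an internal read; the Python sketch just shows the shape of the interface.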
The problem with that is that there are really two problems. The first is that you need to poll the window occasionally on all of the other ranks. The second is a problem with ordering: consider that one rank is going to fail and the other ranks are fine. It can happen that the other ranks all check the status of the window before the failing rank has had the chance to write that it is in the error state. In that case all the other ranks believe that everything is fine when actually it is not. So the problem is that you need to synchronize, which is exactly what you can't do with non-blocking MPI. Unfortunately, it doesn't help. Unfortunately, yes.

Okay, so one more question here. I guess it's more of a comment: it seems like the way you've done the error handling is very well thought out, and it looked like you referenced errorfx; I guess this is another library, or approach, for doing error handling. Maybe it's general enough that one could separate out the functionality and actually have a Fortran library for doing error handling. Giovanni, yes. Thanks. I think, in a sense, errorfx is doing that. The point is that, because Fortran as a language doesn't have certain features, you cannot really make it a library. You can make a library of little wrappers or helper routines, but in the end it is more a description of a coding pattern: as you know, you have to manually return after setting an error, and that isn't something you can write into a library, like a throw in another language or a raise in Python. So unfortunately you cannot really get to the point of writing a proper library. What errorfx does, since Bálint also developed a custom preprocessor written in Python, is make use of that preprocessor.
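The synchronized alternative that the library's pattern relies on, every rank contributing its local error flag to a collective reduction so that all ranks reach the same verdict together, can be sketched as follows. This is a minimal sketch: max() stands in for an MPI_Allreduce with op=MPI_MAX, and the work function is invented:

```python
# Sketch of synchronized error checking: each rank sets a local flag
# (0 = ok, 1 = error), and a collective max-reduction gives every rank
# the same global verdict, so they all bail out together instead of
# racing to read a one-sided window before the failing rank has written.

def step_on_rank(rank):
    # hypothetical work unit: pretend rank 2 hits a local error
    return 1 if rank == 2 else 0

def allreduce_max(flags):
    # stand-in for MPI_Allreduce(..., op=MPI_MAX) over the communicator
    return max(flags)

local_flags = [step_on_rank(r) for r in range(4)]
global_flag = allreduce_max(local_flags)
# every rank now sees the same global_flag and can return cleanly
```

The price is a synchronization point at each check, which is exactly what the non-blocking scheme was trying to avoid; the benefit is that no rank can observe a stale "everything is fine".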
Some of these things can then be written much more easily, say as a one-line raise operation, because the preprocessor converts it into a few more lines, including the return statement, for instance. Yeah, exactly. So if you want to use such a preprocessor, errorfx would also help you. In Wannier90, since you're changing a lot and not using the preprocessor, I don't know whether the way errors are dealt with is different from, or really the same as, errorfx; I just wanted to explain why it's not a library, or why it's not so easy to make it one. Oh, the spirit is the same; the difference is that we also stick in this MPI reduction in the middle. But yeah, it's essentially the same.

Just a technical comment, or I guess we can discuss it later. I think the most important discussion is what we said before: getting the users and developers who want to use the library together to discuss. One thing, maybe, that we can chat about if you have time: as you mentioned, we have a number of places in the code where we have to know what the inputs are, maybe two or three files in which you have to reference them, because of the various executables. What we might try to do to simplify our lives is to have a Python function that generates the Fortran code. So we would have a single central reference, I don't know if it's possible, we should probably try to see: a central reference YAML of all the input variables with some metadata, the type, let's say, and the default. From that we would generate these two or three Fortran files automatically. Maybe it's not possible, but if it is, it would avoid problems, and since we have GitHub Actions, if a person forgets to regenerate them at commit time, they would get an error. Just a thought; maybe it's reasonable. Anyone?
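The generation idea floated here might look roughly like the following sketch; the metadata record is a plain Python list standing in for the proposed central YAML file, and the variable names and emitted Fortran are invented for illustration:

```python
# Sketch: one central metadata record per input variable, from which the
# repetitive Fortran declarations in the two or three files that mention
# each input could be generated, keeping them in sync automatically.

VARIABLES = [   # invented entries; in practice this would be the YAML file
    {"name": "num_iter", "type": "integer", "default": "100"},
    {"name": "conv_tol", "type": "real(kind=dp)", "default": "1.0e-10_dp"},
]

def emit_declarations(variables):
    """Emit Fortran declaration lines with initializers from metadata."""
    return "\n".join(
        f"{v['type']} :: {v['name']} = {v['default']}" for v in variables
    )

print(emit_declarations(VARIABLES))
```

A CI job (e.g. in GitHub Actions) would rerun the generator and fail if the committed Fortran files differ from the freshly generated ones.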
So, you asked for suggestions for something that would be useful for a user. For example, this last function, wannierise: inside that function the code iteratively finds a gauge that is maximally localised. Is there a Python interface where I could go inside that loop and modify things between iterations? For example, say we have eight bands and I want to treat the first four separately from the next four; then every time the code wants to mix the first four with the next four, I just set that mixing to zero, or something like that. So while it is iterating I'm modifying it, and it finds a different gauge than the one it would find by itself. Is that possible, or is that something different?

So, of course it's absolutely possible, in that the process of Wannierisation happens in a couple of subroutines. Unfortunately there is currently a slightly monolithic main routine, so you would need to make some changes to extract the individual iteration parts into a smaller function; but then of course you could interface to that and manipulate things between calls. So it's not currently possible, because the program flow runs uninterrupted, but you could do it without massive hassle. Yeah. Thank you very much.

If there are no other questions, I think we can thank the speaker. This was the last talk for today. Now we have a small coffee break upstairs, and after that there are supposed to be group discussions; later in the week this will turn more into coding. So I think we can just form groups from some of the discussions that have started, and if not, we just meet at seven p.m. right here. And of course all the participants are invited to join.