video. All right. Well, I think all of you here know that Rolf is visiting from TU Delft in the Netherlands. I'll let you introduce yourself, Rolf, but there's one thing I wanted to say. Oh shoot. Yeah. It turns out you're the MacGyver chair or something, and I wasn't sure if that's the Richard Dean Anderson MacGyver. I don't know, so I'd like to hear about that. And what I know from you is: if I were trapped in a room, you're the guy I'd want. Anyway, welcome. I'd help you escape. Thank you for the introduction. Yeah. So it's not a chair, but I think we should now institute one, a chair of science and escaping from rooms. We were just talking about my PhD defense. My PhD was on sensor design, and in that phase of my career I tinkered a lot with household appliances and how you can use them to monitor things that we as hydrologists and environmental scientists are interested in. And that kind of snowballed out of control. We've been running the MacGyver session at the EGU and AGU meetings for 12 years now, and it's been tremendous fun ever since. So if any of you does fieldwork where you make your own stuff, or make stuff work that doesn't work, submit something to our MacGyver sessions. But that is one leg of my career. The other leg of my career is what I'm here to talk about today. I'm an associate professor back in Delft, and today I want to talk about the eWaterCycle platform. In the introduction you could already see that we're going to look at how it takes away the headache of computer science, to let hydrologists just do hydrology. And since within the CSDMS community and among the people online not everybody is a hydrologist: what is a hydrologist? A hydrologist is someone that looks at rainfall and then tries to figure out how much of that rain ends up in streams, and when. And yes, I took this video yesterday while biking back from here to downtown. This is Waller Creek. I was over the moon that it rained. I've been traveling the US for two months now, and apart from two days in Crescent City on the coast of Oregon, it's been 90 degrees or higher ever since. So yeah, I'm happy that I could see some actual hydrology in action. I'm less happy that, after two months, I'm very comfortable talking about degrees Fahrenheit with you. So I said that a hydrologist is interested in understanding how that rainfall gets into the river. And the tools that we use for that, and this is very common in most of the geosciences, are models: hydrological models that contain our knowledge of how these processes work. Sometimes they contain real physical descriptions of these processes; sometimes they're just statistical models that have been fitted between rainfall and streamflow observations. But in general, you have some kind of forcing data set, usually rainfall, but it also requires temperature. You feed that into a model and the model gives you a hydrograph, which is what we call these simulated streamflow time series. And there are more parts to this model, usually some kind of soil moisture state that you can get out, which tells you something about: are we heading into a drought? Do we need more irrigation? This is your typical hydrological model setup. And talking about this forcing, there are a lot of choices we can make in what to use as a forcing product.
There are observations: sometimes you can run observed rainfall through a model and compare it to observations of streamflow, but we don't have rain gauges sitting everywhere. And there are, as most of you know, reanalysis products and climate projection products that also include precipitation, which we can use as forcing for these models. Especially the climate projections and weather forecasts: we can run them through these models and then forecast, will we get a flood somewhere? Will we get a drought? But then, which of these forcing products do you use, because there are many of them? If you were to do some kind of experiment like this, the first thing you basically need to decide is, for example, am I going to use ERA5? Am I going to use ERA-Interim? Which already gives you two hydrographs. And if you're an ethical researcher, you should also question: is the model that I've been using the right model to use? Because there are many hydrological models available. So you would also want to check: okay, I've done some research with this model and I checked the different forcings, but maybe I should check whether the model that this other research group uses performs similarly. Which leads to an explosion of different combinations of experiments that you can do with different models and different forcings. Then, talking about the models of these other groups: why don't we do that? Because we don't. Well, basically the reason for that: this is what you get if you do an image search on "hydrologist". And our image is pretty good; we are the ones that get to go outside and stand in streams and do nice measurements. But as most earth scientists know, if you're lucky, this is maybe 5% of your time. The other 95% of your time is sitting in a cubicle behind a desk, processing the data you got from the field, or often processing the data someone else got from the field, or that some satellite observed from the field. And when we're processing that, we're building these models. We're doing that on computers, with programming languages. And I can assure you that the language you choose depends on two things: which institute you did your grad school at, and how old you are. It does not depend on whether it is the right programming language for the type of application you are writing. And to a degree, that's okay. All of these languages are Turing complete; you can do the programming in all of them. And if you're comfortable... okay, MATLAB is not okay. Other than that, if you're comfortable in R or Fortran, you should totally use that. Not everybody has to use Python. But it does mean that our knowledge of all these different processes and all these different regions gets encoded in different programming languages that not every member of our community can understand, or wants to understand. And that's the main headache we have when dealing with different programs, different models, within hydrology. And I think it's fair to say that this is a widely recognized problem in the earth sciences in general. What we want to do with eWaterCycle is provide a platform and a set of tools that takes away this headache and allows hydrologists to focus on running each other's models, working together, coupling each other's models, and focusing the discussion on the hydrology instead of the computer science.
And to do that, we have a team of scientists from Delft University of Technology and research software engineers from the Netherlands eScience Center who work together on building this platform. So basically: we're dealing with the computer science headaches so that you don't have to. It sounds like a sales pitch. The way we solved that, and this is where BMI comes in, is by looking at what already exists; we're not going to reinvent the wheel completely. And BMI is a really good interface in our opinion, because it's so generic, which means that it can support most models. The problem we encountered with BMI is that if a model has a BMI interface but is not in the language you work in, you would still need to install its dependencies, the right Fortran version, or the right CDO library that it's calling. We want to take that headache away, and we solve it by packaging the models in containers: either Docker containers, or Singularity containers if you're working on high-performance infrastructure. And then you communicate from this container to a central place, a central Jupyter Hub where you do your experiment, through what we call grpc4bmi. Basically we took BMI and smashed it together with gRPC, Google's remote procedure call, which is a piece of technology that allows translating function calls between different programming languages. In that way you can have an experiment written down in a Jupyter notebook in Python, talking to a model running in a container that just runs Fortran. And as the one designing the experiment, you do not need to look at the Fortran code. And the other way around: if you're the modeler that wants to contribute a model, you only need to work in Fortran, add BMI to your Fortran model, wrap it in a container, and it's usable by people on the platform. That also means it becomes fairly straightforward to couple or compare models, even if they're in different programming languages, because you just start up two containers; you don't know what language you're talking to. The philosophy we have for how we want to get this out to the community is basically this diagram. This is how we think most scientists work. We want to give you a way to explore the different data sets and models that are available, and then, based on that, make a choice: which model am I going to use with which data set? Then you want to do an experiment with it: run the model a couple of times, couple it to another model. You probably know what kind of experiment you have in mind. Then you want to analyze whatever output is generated, publish it, get academic credit, become a professor. But to do that final step you need to share your results. And if you improved the model in some way, or made a data set that is of interest to the community, we also want to make sure there's a curate step where that gets back into our platform. The choice we made here: if you build a platform, you usually have two ends of the spectrum. Either people start building a GUI, and the GUI will have four buttons; you give it to your users and every user will say, I'm going to need a fifth button, and every user will want a different fifth button. The other end is that you just build a software stack and say, oh, it's simple, just conda or pip install the software and then it works. And then your user will be looking at a blinking cursor, thinking: where do I start? So what we build here sits in between that.
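[Editor's note: to make the grpc4bmi idea above a bit more concrete, here is a minimal sketch of what driving a containerized model from Python could look like. It uses grpc4bmi's BmiClientDocker and standard BMI calls; the image name, config file name, and variable name are illustrative, and the exact constructor arguments should be checked against the grpc4bmi documentation.]

```python
import numpy as np
from grpc4bmi.bmi_client_docker import BmiClientDocker

# Start the containerized model; grpc4bmi launches the image and connects to
# the BMI server running inside it. Image tag and paths are placeholders.
model = BmiClientDocker(
    image="ewatercycle/wflow-grpc4bmi:latest",  # hypothetical image tag
    work_dir="/tmp/wflow_case",                 # shared working directory
)

model.initialize("wflow_sbm.ini")  # illustrative config file inside work_dir

# Run the model to the end of its configured period.
while model.get_current_time() < model.get_end_time():
    model.update()

# Pull a variable out of the model running in the container, whatever
# language it happens to be written in.
grid = model.get_var_grid("RiverRunoff")        # illustrative variable name
discharge = np.zeros(model.get_grid_size(grid))
model.get_value("RiverRunoff", discharge)

model.finalize()
```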
We have an explorer where you can browse all the data and models that are available on the platform, and I'll show you that in a minute. If you then start an experiment from the explorer, it will generate a notebook where your choices are filled in, and that notebook will always give you your first hydrograph. So you have a working notebook that already works with the model of your choice, and then you can work in the Jupyter environment to change it into whatever experiment you want to do. If you've done it five times, you probably don't need that step anymore; you can just start from scratch. But this way we can help people get started on the platform. I keep saying platform; basically it's a software stack, and you can run it on different infrastructures. In the Netherlands we're running it on SURF Research Cloud, which means that anyone with a Dutch research account can just log in and start up their own machine. They get a URL, go to the URL, and some server in Amsterdam is running it. But if you don't have that and you have your own infrastructure, you can install the entire stack on it. That is something a system administrator would usually do, and then they would give access to it to the hydrologists and earth scientists in their group. We've made sure that the sub-components you need for these kinds of things are clearly identifiable within our package. And I want to show this slide for two reasons. One, to show how it all interconnects: we have a sub-package for forcing, we have a sub-package for models. But two, I want to show all these logos, because the design philosophy we have, and this is also why we use BMI, is to first look for existing things that are 80 to 90% of what we want, and use those, instead of starting from scratch and building something new. So in our model part we use gRPC, BMI, and containers. In our forcing part we use ESMValTool to make standardized forcing for all of the models that we support. ESMValTool is a piece of software that was already developed in the atmospheric sciences community and turned out to be 80% of what we need. This also means that if we improve something in it, we just put that back in their repos and in that way help the entire community. Okay, well, this is all nice images and diagrams, but of course you want to see what it looks like for real. So that is my background; those are the climate stripes of the Netherlands. This is a browser. Let me double check, Lynn, I'm still sharing this, you can still see this, right? So what I have here, I'll close this one, is where we normally start. And here we can explore all the different models that we have. I have two very extreme cases within the modeling community. MARRMoT treats the entire catchment as just one bucket: rainfall comes in, evaporation goes out, streamflow goes out, that's it. And it's always nice to have that as a kind of benchmark. If your model is outperformed by that, you should probably think about what's going on in your model. And it happens: people make very complicated models where every grain of sand is modeled, they run it, and MARRMoT predicts streamflow better. We should think about why that is. Then PCR-GLOBWB is developed by Utrecht University, and wflow is developed by Deltares. And just as a quick example, I've added the 30-arcminute PCR-GLOBWB to the map. I could do the 5-arcminute one, which is a higher resolution, but this one is good because it's fast.
So here you can see the model name and which variety we're looking at, and by starting an experiment it will generate the notebook I talked about, which will get you your first hydrograph. You can also see here, in the Jupyter Hub, for those that are experienced with it, that we create a unique string for a directory in your home directory, to make sure we don't mess anything up. If I run this from top to bottom, it imports our library and then does the setup. And the thing I like about BMI is that we just have a call here creating a model object, and as soon as we have that, we can ask which parameters are available, etc. People familiar with BMI will recognize all of this. And I love the part where you just say: while the model time is not the end time, update and get a variable. And these coordinates are the ones for the gauge we're interested in. So this is now running; I'll come back to it once it's done. I'm going to show you a few other cases, because just a hydrograph makes a nice tutorial, but it's not research. So I have a few example notebooks, case studies as we call them. I think, yes. A very simple example would be: two models, same forcing, and I want to know which one performs best at predicting streamflow. I'm using LISFLOOD, which is an operational model, and wflow. And what I want to stress here is that when I'm running, this is wflow, and it has the very simple loop: while time is not the end time, update, get discharge. And then here we have LISFLOOD; I've set up a LISFLOOD model, and the running of it is exactly the same. I think that's beautiful. And then you can go and build functionality and tools where plotting a hydrograph is just a function; you just give it different output data sets, because this is something hydrologists need so often. That's part of the analysis sub-package of our stack. So you can see that both models actually perform pretty well. It gives you the standard metrics that hydrologists like to look at: Kling-Gupta efficiency, Nash-Sutcliffe efficiency. Why use metrics from other fields if you can invent your own? So that's a, well, I would say fairly straightforward, simple use case. Somewhat more involved: what I like about BMI is how you can interfere with the state of a model during runtime. You can use this for coupling, and you can use it for experiments: what if I do this? This example was done as a thesis project by a bachelor student. And in this example we thought: okay, normally our models take in temperature and precipitation, and sometimes wind, and they use those to calculate evaporation as part of the water cycle. What if, instead of that, we use an observation and force the model with that observed evaporation? But the model is not ready-made for that. So what the student did is start a model, and that setup is fairly standard. And then he writes this function: apply flux net correction. FLUXNET is the evaporation observation product we're using. He basically takes a model object and an observation as inputs. And then what he says here is: okay, model, what is the evaporation if you calculate it? And then: okay, but the real evaporation is this, so I'm going to adjust that. But to adjust that, I need to adjust more than just the evaporation, because that's a flux; I need to adjust the soil storage, the amount of water in the top soil.
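[Editor's note: a rough sketch of the kind of correction function being described, not the student's actual code. The variable names "Evaporation" and "SoilMoisture", the observations.at lookup, the .time and .end_time properties, and the assumption that get_value returns an array and set_value accepts one are all illustrative.]

```python
def apply_fluxnet_correction(model, observed_evaporation):
    """Overwrite the model's computed evaporation with an observed value and
    push the water-balance consequences back into the model state.

    Variable names are illustrative; the real names depend on what the
    model exposes through its BMI.
    """
    # What the model thinks evaporated this time step.
    modelled = model.get_value("Evaporation")

    # Water that the model evaporated but, according to the observation,
    # did not actually leave: it should stay behind in the top soil.
    delta = modelled - observed_evaporation

    soil = model.get_value("SoilMoisture")
    model.set_value("SoilMoisture", soil + delta)
    # (depending on the balance, channel storage may need the same treatment)


def run_experiment(reference, experiment, observations):
    """Run a reference model untouched and an experiment model with the
    correction applied after every update."""
    while experiment.time < experiment.end_time:
        reference.update()
        experiment.update()
        # observations.at(...) is a hypothetical lookup of the FLUXNET value
        apply_fluxnet_correction(experiment, observations.at(experiment.time))
```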
And depending on what that soil storage was and what my observation was, I may also need to adjust the amount of water in the channel. So he can write all that logic in a function that's really clear to read for other hydrologists. That makes the experiment really visible. And at the end of this function he sets the value of his newly calculated channel storage and his newly calculated soil storage. And then you can just run the experiment. Here we have a reference model that he runs without interfering, and below that we have an experiment model where he just runs the experiment with updates; but after each update he calls apply flux net correction on the experiment model. I think this really clearly shows that using BMI and this platform, you can interfere with the state and do experiments, while keeping a very clear separation between the model, whose code base you're basically not touching, and the experiment. The results are funny. He does it for the Merrimack, so we're looking at an observation close to Boston, if my geography is correct. And your predictions of streamflow actually become worse if you force the model with observations. That's partially because we're using just one FLUXNET tower for the entire catchment, which is probably not right. And it's maybe also partially because of how we build our models: there's either explicit or sometimes implicit calibration, where you correct for one process because you know another is not perfectly right, etc. These kinds of studies bring that to light, and that helps us have a discussion about the actual science, which is what I like about using this as a platform. So those are two examples. I have more for afterwards if there's time, but I'm going to go back to my presentation to finish up. There we are. These are the two slides for if the demo doesn't work, but it worked perfectly, with these results. Of course, we're platform builders, but a platform is only useful if it's used. So we want other people to use it, but we're also using it for our own science within the group that we have in Delft, and I have two examples of what we do. This one, I think, is nearly accepted; we're in the final stage of review with minor revisions. It's by a master student; he's now a PhD student in Lausanne. What he looked at is: can you take PCR-GLOBWB, the Utrecht model, cut out the glacier component, and replace it with a glacier model from the glaciology community, who have way better glacier components than the one-parameter thing that we do as hydrologists? We spend a whole lot of time on getting the soil right and getting the trees right, and glaciers, how hard can it be? And people that study glaciers their entire lives would beg to differ. So what he did is exchange that, and then he looks at the difference when he does and does not do that, for a whole lot of catchments. The more blue the color, the bigger the fraction of glacier melt in the streamflow of the catchment he's studying. And the graph basically shows the difference between not using glacier knowledge and using glacier knowledge. You can see it makes a huge difference during melt season, of course. And he does that for a whole lot of catchments spread out over the world. And one, he was a really good student; he went on to do a PhD in Lausanne for a reason.
But two, having a master's student be able to do this amount of work within a master thesis project is facilitated by the fact that this platform does what it does. And then another one: some of you might know Jerom Aerts, a PhD student with us who has done tremendous work helping build the platform, but he's a PhD student in hydrology, so he's also using it at the same time to do his own studies. What he's interested in is: for distributed models, the ones that use grid cells, what is the optimal grid size? What we see a lot in our field, and I'm not sure if it's happening in other fields of earth science as well, is a push to go to hyper-resolution, or hyper-hyper-resolution, but without changing the mathematical formulation of the processes we study. And at some point you're going to shift into a different physics regime. So he just said: look, if I average everything out on a 50-kilometer raster, it's going to be different from when I'm looking at a 200-meter raster. He took wflow and ran it for the entire CAMELS data set, about 600 catchments in the continental US, and he used the wflow model because you can very easily say: okay, now run it at a different resolution. You should read his paper, but what he basically showed is that for a lot of catchments it doesn't get better if you go to a higher resolution. Whereas intrinsically we assume higher resolution is better, so it must be closer to the truth. Apparently it's not. So that's the kind of research that we like to do. And if this is the kind of research you want to do, or if you're thinking, I want to get rid of the headache of having to install dependencies every time I want to use someone else's model, get in touch. I think we've made this platform in such a way that it's easily extendable. It's currently really geared towards the hydrological community, but I think the basic underlying principle extends to the wider earth sciences community. So that was my talk. Are there questions from the room before we start with the online ones? Thank you. Yeah, I don't know how well the microphone picks this up, so I'm going to repeat the question for the people listening online. The question is: there was a sharing step in our vision, and what does eWaterCycle do to facilitate that, or do you use another platform? It's a really good question. What we currently do is provide best practices, where we say: if you have a data set, export it to this repository in that way, and depending on how big your data set is we suggest different repositories. If you have figures, put them on figshare. But mainly, I think the most important thing we do is that the Jupyter Hub supports Git integration. So as soon as you're working in the Jupyter Hub, you can say: make a repo out of this, or push this to that repo. And then the standard way of sharing research results through repositories is what we advocate. So once you're happy with your analysis, your notebook runs and creates nice graphs, you push it to GitHub. On GitHub you create a release of your repo, so it has a proper version and timestamp, and then you make sure you get a DOI for it on Zenodo, which you can automatically link to your GitHub repo. That's the workflow we advocate. I hope in the future to do that with one or two buttons, or commands, on our own platform. Thank you.
I'm curious, when you're packaging up those containerized models, how much work goes into getting errors back into the Jupyter Hub environment? I was just thinking, if you are, for example, interfering with the evaporation using observations and you run into an error in the model, how does that get communicated to the Jupyter interface? Okay. So the question is: if you're working with a piece of software that runs in a container and for some reason you encounter an error, how do you get that information back, right? Yeah. Okay. grpc4bmi pipes errors through. However, it adds its own information back and forth, which is not always helpful. And Python is really bad at error handling, which is a fact of life. But basically, if something throws an exception in the model, that exception gets piped back through grpc4bmi, and it will throw that exception in your face. I've seen a lot of those while developing this. I think that Python in general, and our platform in particular, can do a better job at producing human-readable exceptions. And I think that especially at the BMI layer, you should push developers to use best practices in coding and throw proper exceptions. And of course, you'd still have users that misuse a model in such a way that, if you can set some variable and you give it a completely wrong type, it will break at some point. But what you want is that the model is programmed well enough that it throws an error saying: hey, I'm expecting this type, I'm getting that one. What usually happens instead is that you get a segmentation fault and a pointer, and you have no idea what went wrong. So I would put the onus on the model developer, but we are providing best practices, and I think BMI helps with enforcing those. There is this bmi-tester, I don't know who made it, maybe someone in the room, which is basically a piece of software that says: yes, this is BMI compliant. That's a good first step. And I was talking with people at NCAR: we want to build a library of tests which not only do technical tests, is this function implemented, does it return the correct type, but also more warning-level tests: you're putting so much water in this variable, this is unlikely to be right, stuff like that. But yes, there will still be painful debugging. What you basically want is to have models that are good enough to be trusted. Well, what if the error is actually a bug in the model they're running, what would be the workflow for fixing that? Can the user go in and modify a few lines of code because they see what the bug is? In the model. In the model. Our models sometimes, this is a secret maybe for a lot of people, but they have bugs in them. What? You have bugs? Yeah. Don't tell anyone. So what would be the workflow? Is it easy to get a new version of the model in, or whatever? Yeah. So the question from Eric is: what would be the workflow if you discover a bug in a model that you're using? We require, and it's more from a philosophical and ethical side than from a technical side, we require that all models are open source and shared, which means that we're just pointing to a repo. And if you find a bug because you really started digging into their repo, like: oh yeah, this goes wrong because this is just silly, and you should have seen this one coming, you can do two things. You can either fork their repo, fix it, and then make sure that whenever you're calling the model, you override which container it points to.
It does require you to do a docker build for that container, but then you can just start your own, fixed container, even if you only built it locally. You need to be tech-savvy enough to do these kinds of things, but if you're tech-savvy enough to really dive into that model, you probably are. The proper workflow would then be, once you've done that, to of course open a pull request on their model, to make sure the fix gets into a new release. And then when we take a new version of their model into our platform, we bake it in; that's the curate step we talked about. Of course, you should also shout at the one that originally made the bug, and let them know what you've done. I can show you a little bit, I guess, now, here. This has run. Nice. So this has run, and that would be the simulated discharge for 10 years of the Rhine, at the point where it enters the Netherlands, using a very coarse 30-arcminute model. I think I can misuse this a bit, let me see. So it works like this: you can ask which version of your model you're running. "setters" is a version label; we should use semantic versioning here. But "setters" is a version of PCR-GLOBWB in which we can set some of the states. Normally this is not supported: the person that implemented BMI on it the first time thought, shoot, I only have to make this work for comparison, and they only implemented update and getting things out; they didn't implement setting things back in. So we did that, and that's this version. But anyway, you can get different versions of PCR-GLOBWB. Maybe we have something like "available versions", but I'd have to look up the syntax for that. So at the top, you requested a particular version? Yeah, so here you request "setters" when you start up. Oh, that's good. So for reproducibility, you can always request an older version. Yes, we need that. But you can also request which versions are available. Yeah. And in our forcing module, you can see which models and which versions of models are compatible with which forcing. I have a question in the chat. Mm hmm. Brock Nels Frazier asks: in your example on replacing ET, how do you know as a modeler that the model you're trying to modify actually advertises the variable you're trying to set_value on? And then there's another comment: how do you throw exceptions from Fortran or C? With a smiley. Yes, I understand that. As soon as C starts throwing pointers around, we just go get coffee and cry in a corner. If well programmed, Fortran and C can throw exceptions, or they will just return something you're not expecting, say a string instead of a number, and the string reads: you did something wrong. But that only works for the kinds of exceptions you can anticipate. If it really crashes, then it doesn't throw exceptions, and that's hard; that's a hard problem. Then the question about how you would know whether it advertises ET: that is in BMI, and it's mandatory to implement. Implementing BMI on a model is between 40 and 200 lines of code, and you just add to it. So you would do model dot get input var names, get output var names, or something like that. Output... why doesn't my autocomplete work? Get output... it's hard to do this live, so I'd have to look it up. But there's a BMI call, something like this, that I could look up in the documentation.
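[Editor's note: a minimal sketch of the check implied by the chat question, using the standard BMI introspection calls; "Evaporation" is an illustrative variable name and `model` stands for an already-created BMI model instance.]

```python
def check_variable(model, name):
    """Report whether a BMI model exposes `name` as an input (settable)
    and/or output (gettable) variable."""
    settable = name in model.get_input_var_names()
    gettable = name in model.get_output_var_names()
    return settable, gettable


# Usage (with `model` an existing BMI model object):
#   settable, gettable = check_variable(model, "Evaporation")
#   if not settable:
#       raise ValueError("Model does not expose 'Evaporation' for set_value")
```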
And it would return a list of the variables that you have exposed as callable, and the same for set value. One point on set value: your example is changing soil states, which are typically not exposed as input variables. Yeah, that's right. Normally it's a variable that's just used internally in a calculation; the inputs are just precipitation and the outputs are just streamflow. But if you're implementing BMI, you can simply say: these variables are available in memory, you can set them. When implementing BMI, you would then say: if the variable with the label ET gets called, change the internal evaporation variable and put in whatever the argument was as the new value. And I can show you a pointer to an example where that actually works. But anyway, it's get output var... ah, get_output_var_names. Yeah, I'll look it up. But anyway, you can get a list of output var names and a list of input var names; or Edwin just forgot to implement this, which is bad. I think I can do that. So I think we need to establish best practices in model development; we talked about this yesterday with Eric. If people want to add an existing model, it involves some work getting BMI on it. If people want to add a new model: please start with the empty container that we have that already has BMI. You just start putting in your update time loop and make sure that all of your state is settable. Anything that you would consider state from a state-space point of view should, I think, be settable and gettable, because that makes the range of applications that people can do with your model, and that's why you're making it, so much bigger. If you shut that off, it just doesn't work. Anything else from the room? Yeah, maybe this is a naive question, but I'm wondering: if you get to the level where you have a nice web browser interface to run the models, is there a risk of users becoming disconnected from the physical models, and maybe putting in weird parameters without knowing what they do in the different models? And do you have a strategy to deal with that? So the question is: do people get disconnected from the model, start seeing it as a black box, and start doing things it was never intended to do? That is a risk, but that risk lies with the people themselves, and I think one of the main ways to deal with it is through education. We have been given a grant in the Netherlands where, over the next two years, we are supposed to take all the graduate education on hydrological modeling, port it into eWaterCycle, make it available, and also add features to eWaterCycle to allow its use in education: different logins for a teacher and a student, connections to learning environments like Canvas, Blackboard, and Brightspace. In that way we can educate the next generation: this is how you deal with models. They could still misuse it; they can do that now as well. Think of the number of papers that just say: I took a tutorial in machine learning and blindly applied it, here is the output. Yeah, we had a very big discussion that led to a paper retraction, basically misuse of a machine learning model. It's not pretty, but I don't think that should stop you from sharing the models in the first place. The models should come with good documentation that basically says: this is what it's meant to do.
If you go outside of those bounds, your mileage may vary. But I would still facilitate it; I think new science can emerge when people push things over the edge. Is there a requirement for writing documentation for the model, rather than just pointing at a manual, PDF version 13? With the unification of the model interface, is there also a unification of the documentation? That's a really good question. So the question is whether there's unification of the documentation and of the documentation standards per model, and I think the basic answer is no, not yet, but we should have that. Let's write a grant. No, I think that's a really good point. We're building empty containers where you can drop your model in; you could also build empty documentation where you say: these elements need to be part of your documentation. I think it's a really good point. The idea is not difficult; in the execution you just have to start thinking, okay, where do we put things? But I still think, the thing I was considering is just having a repo that holds these empty containers; you fork the repo, add your own stuff, it becomes its own thing, and then you publish that. Have that repo also contain an empty markdown file, but one that does have the required sections. And I think you can even write checks on repos now, where you say: if you want to do a push or a pull request or whatever, we first run these and these and these checks, and then you could run a check on the documentation as well. I've had a pull request denied because I didn't do the commenting on my Arduino code properly. So yeah, that's a good thing. We should make sure that adding documentation becomes part of the standardization, and credit to you for that, because it's a good idea. Oh, hi, Mark. So, you know, at CSDMS we love Delft, we love the eScience Center. Are there ways you can think of, off the top of your head, that we can collaborate more closely? So, for the people online, because the sound came from my laptop and there's an Owl: Mark's question was, are there ways to collaborate more closely with both the eScience Center and Delft as a university? I honestly think that financially it's easier to collaborate with Delft, because the eScience Center works like this: you put in a grant, and if it gets granted, you don't get money, you get hours from their research software engineers. And I love that way of working, because you get so much expertise on your team. But those grants are only open to Dutch research institutes, because it's funded by the Dutch national science foundation, NWO. And I know for a fact that they're hiring a lot of people, because they have more grants to give out than people to execute them, which I think is a good sign; it means we need this kind of setup. But it does make it harder to directly collaborate with the eScience Center. However, I also know for a fact that they want their international connections, connections to centers like this, etc. And I'm now speaking for another institute, so make sure you ask the same question to Niels, because I might be overstepping here. Working together with Delft: as in the discussion with the people from Alabama this morning, there are not a lot of funding opportunities across the Atlantic. And that kind of sucks.
And I think the best way to do that is to have one of the two partners get funding for something and then attach the others as supervisors for PhDs, because that is usually something your university approves of, and they can even make a little bit of money from it. So if you have a PhD student or another grad student here working on something, and you say the supervision team includes people from Delft, that is something we're more than willing to support. But I also think we should push for policies that make cross-Atlantic funding easier. And when I say cross-Atlantic, we're reinforcing the dominance of the US and Europe in the global science community; I think we should make it wider than just cross-Atlantic. It would be nice if there were grants for global collaborations between different institutions. But let's all lobby for that with our politicians. I mean, I don't see anything in the chat. I have a question about model coupling. You said that you could do that, and you showed a beautiful example of how a user could insert themselves into the model. Do you really do model coupling in the platform? And what would you do, say, about grids, when two different models operate on different grids? So the question is: what about model coupling, especially grids? I have an example of a downstream coupling. This is one-way and doesn't involve grids, but it shows how we would approach it. The experiment we're doing is: we take PCR-GLOBWB again, for the Rhine again, but what would happen if we cut out the Moselle sub-basin? The Moselle is a tributary that enters the Rhine at Koblenz. We cut that out and replace it by a one-bucket MARRMoT model. And the reason we do this as an example is to show that you can couple a MARRMoT model in MATLAB to a Python model. So what we do is some setting up, of course, which we always have; then we set up a MARRMoT model, and we set up a PCR-GLOBWB model. And then when we start running it, where is that... okay, running the experiments. Yeah, so we have an experiment. We have a reference model, which is the non-adapted PCR-GLOBWB. The way we adapt PCR-GLOBWB: it has a clone map, which is basically a map of pixels, on its own grid, that it should and shouldn't calculate. And here we set all the pixels that are in the Moselle basin to zero. So that's the experimental one. Then what we say here is: MARRMoT, get value, so just get the discharge from MARRMoT. And then, and this reminds me of a real question: how much water should we add to PCR-GLOBWB? MARRMoT gives output in millimeters; PCR-GLOBWB wants channel storage in cubic meters. So we need to convert that: we do a conversion here from millimeters to an amount of water. Then we get from PCR-GLOBWB the current amount of water in the channel at Koblenz; that's the current value, and the value to set is basically the current value plus the water to add. Then we do a model set value, and then we update PCR-GLOBWB. And I think that if you're working with grids you get a similar thing: you have one grid and another grid, so you need functionality that translates one to the other. We don't necessarily provide the tooling for that, but there are tons of libraries that do.
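[Editor's note: a rough sketch, not the actual notebook, of the downstream coupling loop just described, written against plain BMI-style calls. The variable names, the Koblenz grid index, and the basin area are illustrative placeholders.]

```python
import numpy as np

MM_TO_M = 1e-3
MOSELLE_AREA_M2 = 28_000e6      # rough basin area, for illustration only
KOBLENZ_CELL = np.array([0])    # hypothetical flattened index of the Koblenz cell


def couple_one_step(marrmot, pcrglob):
    """Advance both models one step, routing MARRMoT's Moselle discharge
    into PCR-GLOBWB's channel storage at Koblenz (names are illustrative)."""
    marrmot.update()

    # Lumped MARRMoT discharge, in mm over the basin for this time step.
    q_mm = np.zeros(1)
    marrmot.get_value("flux_out_Q", q_mm)       # illustrative variable name

    # Convert millimetres over the basin to a volume in cubic metres.
    water_to_add = q_mm[0] * MM_TO_M * MOSELLE_AREA_M2

    # Current channel storage in the Koblenz cell, plus the extra water.
    current = np.zeros(1)
    pcrglob.get_value_at_indices("channel_storage", current, KOBLENZ_CELL)
    pcrglob.set_value_at_indices("channel_storage", KOBLENZ_CELL,
                                 current + water_to_add)

    pcrglob.update()
```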
And I think with get grid information and those kinds of functions in BMI, you can fairly automatically say: okay, get the grid information, what kind of grid are you? Okay, if you're this kind of grid, I need to do this kind of transformation to be able to put it into the other model. But that is something left to do. The funny thing, if you take a look at the results: this is the distributed result of PCR-GLOBWB running, and here you see what happens if you model the Moselle as one bucket, that's the orange line, and the normal run is the green one. You can see they're very close to each other, and both not as close to the observations as you would want. So there's a lot of hydrology improvement we can still do with these models, if replacing a whole sub-basin with one bucket doesn't make much of a difference. It also shows, and I think this is funny, what makes hydrology different from the atmospheric sciences from a data assimilation and chaos point of view. The atmosphere as a system is naturally chaotic and unstable, so you would get divergence at some point. A hydrological model is naturally convergent, so differences that happen upstream will smooth out in the end. That's why we have a lot of problems with ensemble collapse when we're looking at our data assimilation problems: all our ensemble members drift towards the same solution, whereas in the atmospheric sciences the ensemble members go all over the place. Anyway, that's hydrology. This is downstream coupling. If you want to do tight coupling, within-time-step, two-way coupling, you need to think really carefully about whether you can just exchange this from you and that from you and then do another time step. If you need an iteration there, and we talked about this yesterday one-on-one, what you would need is the ability to set time back in a BMI model. And for that, what you basically want is a set state and get state, which is currently not supported in BMI, and which I'm going to push for really soon, because that would open up a whole lot of research possibilities. But it would require modelers to think about their models from a state-space perspective: what are the variables that are part of the state we need to calculate the next step? And that's not necessarily how hydrologists and environmental engineers are brought up; it requires quite a different mindset. Cool. Thank you, Rolf. Are there any more questions? No, I think that's it. I think we're at the top of the hour. We are. Thank you, Rolf. Thank you so much for being here. Yeah, so for everyone still online, I'm going to be at least in the Boulder area until Saturday morning, so just shoot me an email if you want to follow up on anything that we discussed. But thank you for being here today.