Okay, hello everyone. I'm Mustafa. Again, I will introduce myself: I am Mustafa Farrag from Egypt, and I am now working as an early-stage researcher on a European project in Germany, at the University of Potsdam and the Helmholtz Centre (GFZ) in Potsdam, where we do large-scale flood risk modeling. What I am doing specifically is building hydrodynamic models to simulate floods across all of Germany. What I am going to present now was part of my master's thesis with Dr. Gerald Corzo and Professor Dimitri Solomatine at IHE two years ago. It is not directly machine learning, but it resembles machine learning in the way we obtain the parameters for the hydrological model, just as we obtain them when training a neural network.

So at the beginning, I will start by talking about the conceptual models that we use. A conceptual model provides a simplification of reality, and because of that it only simulates a small range of hydrological responses. A hydrograph consists of many different responses of the catchment, and calibrating the model with one objective function is going to focus mainly on one part of the hydrograph. If you want to simulate the whole hydrograph with high accuracy, calibrating with a single objective function is a limitation: with different objective functions you will get different hydrographs. To address this limitation of hydrological models, we proposed a fuzzy committee concept to calibrate the model based not on a single objective function but on multiple objective functions. As Gerald mentioned before, instead of treating the hydrograph as one whole, we can divide or cluster it into two flow regimes: high flow and low flow.
Here, in our method, we did not make a hard partitioning, so we did not split the hydrograph into two parts; instead, the whole hydrograph is simulated with two models. We built those two models by multi-objective optimization, in this case using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with two objective functions. Those two objective functions are root mean square errors with two different weightings, and the weights are based on certain weighting functions. By doing this multi-objective calibration, we obtain two models: one simulates the peaks, which is what we are interested in when modeling floods, and the other simulates the low-flow values, in order to simulate droughts. Depending on how many objectives we use, we could have more than two models, but here we classify the flow into two regimes only, so we have two models. After that, the problem becomes how to combine them: we now have two hydrographs, and we need one hydrograph that is as close as possible to the observations. That is the second part of the method: combining these two hydrographs based on membership functions.

For the first part, the multi-objective optimization, we have two objective functions, each a root mean square error with a weight applied. These weights come from weighting schemes that assign a weight to every value of the simulated hydrograph based on the relative value of the discharge to the maximum value. After calibrating the model on these two objectives, we come to the problem of combining both models into one hydrograph using a membership function. Here it is a trapezoidal function: this is the low-flow model and this is the high-flow model, so if the flow is lower than a certain discharge value, the low-flow model gets the whole weight.
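The two weighted objectives can be sketched in Python. The exact weighting curves used in the study are not given in the talk, so the power-law weighting on relative discharge below (and the exponent `p`) is an illustrative assumption, not the authors' scheme.

```python
import numpy as np

def weighted_rmse(q_obs, q_sim, high_flow=True, p=2.0):
    """Weighted RMSE objective.

    Each residual is weighted by the discharge relative to the maximum:
    weights grow with relative discharge for the high-flow objective and
    shrink for the low-flow objective. The power-law shape is an
    illustrative assumption.
    """
    rel = q_obs / q_obs.max()                  # relative discharge in [0, 1]
    w = rel**p if high_flow else (1.0 - rel) ** p
    return np.sqrt(np.sum(w * (q_obs - q_sim) ** 2) / np.sum(w))
```

With these two objectives, an error at the peak dominates the high-flow objective, while an error at low discharge dominates the low-flow objective, which is what pushes NSGA-II toward one peak-oriented model and one low-flow-oriented model.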
And if the discharge is higher than a certain value, the high-flow model gets the whole weight. So for a peak, or any discharge higher than this value, we only consider the flow from the high-flow model, and if the discharge is lower than this value, we consider the flow from the low-flow model. In between, both models are considered with a certain weight, and that is this area in the medium-flow regime.

The model structure that we used is the HBV model. It is a lumped conceptual model, based on Lindström et al. (1997). It is a very simple model that we scripted in Python, and to make it distributed we implemented a raster-based distributed model where each cell is a lumped conceptual model and the discharge is routed directly to the outlet with the MAXBAS function. The total hydrograph at the outlet is the summation of the discharge from all of these cells.

The case study we used is the Jiboa River in El Salvador. It is 432 square kilometers, and it has a big lake that influences the base flow of the hydrograph; we simulated the lake separately as a subcatchment in the HBV model. The data we used were hourly: hourly precipitation and temperature, and hourly discharge at the outlet of the catchment. To test our fuzzy committee concept and how much it improves the model: here, as you see, we have the lake, which is simulated as one model, and for the rest of the catchment we explored how much the resolution of the representation affects the performance. First we considered this whole area as one model, and then we used the distributed model with different resolutions: once with each cell being four kilometers, then two kilometers, one kilometer, and 500 meters. By doing that, we were increasing the number of models.
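The trapezoidal combination can be sketched as follows. The two discharge thresholds, and the choice of which series drives the membership (here the mean of the two simulations), are not specified in the talk, so both are assumptions for illustration.

```python
import numpy as np

def trapezoidal_weight(q, q_low, q_high):
    """Weight of the high-flow model: 0 below q_low, 1 above q_high,
    linear in between (the rising edge of a trapezoidal membership)."""
    return np.clip((q - q_low) / (q_high - q_low), 0.0, 1.0)

def committee(q_sim_low, q_sim_high, q_low, q_high):
    """Combine low-flow and high-flow simulations into one hydrograph.

    The membership is evaluated on the average of the two simulations;
    which series should drive the membership is an assumption here.
    """
    q_ref = 0.5 * (q_sim_low + q_sim_high)     # reference discharge for membership
    w_hi = trapezoidal_weight(q_ref, q_low, q_high)
    return (1.0 - w_hi) * q_sim_low + w_hi * q_sim_high
```

Below the lower threshold the output is exactly the low-flow model, above the upper threshold it is exactly the high-flow model, and in the medium-flow regime it is a weighted blend of both, matching the three zones described above.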
For a single model, we have one model for the whole catchment, and the complexity of the system increases a lot when we consider every single cell as a separate model. That is the case at the finest resolution, where this part of the catchment is represented by 32 models.

Now, for the results: the figure shows the different models on the x-axis, with complexity increasing from the single model, which is the one model I showed here, up to the configuration with 32 models. As you can see, the error was very high for the single model, and it improves when you move to the committee model, which uses the fuzzy committee to improve the simulation of the hydrograph. The performance improved a lot, and every committee model performs much better than using only one model. You can also see that performance did not increase as much as we wanted when we made the system very complex by using 32 models; the complexities here were 2, 16, and 32 models. So by increasing the complexity of the system, we did not gain that much. And that is the model with the best performance.

Also, to assess how good all of these models are, we used different metrics: the root mean square error with the special weightings to focus on low flow and high flow, and the Nash-Sutcliffe efficiency. For the RMSE metrics, the lower the better. On this axis are the values of the high-flow-weighted RMSE, and on this one the values of the low-flow-weighted RMSE; everything is normalized in order to present all the metrics in one graph, and that is the maximum value. For Nash-Sutcliffe, the higher the better, so to put all the good models on one side, these are the good models.
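The two unweighted metrics mentioned here, RMSE and the Nash-Sutcliffe efficiency, are standard and can be written as:

```python
import numpy as np

def rmse(q_obs, q_sim):
    """Root mean square error: 0 for a perfect fit, lower is better."""
    return np.sqrt(np.mean((q_obs - q_sim) ** 2))

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations, higher is better."""
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
```

Because RMSE is "lower is better" and NSE is "higher is better", normalizing both (e.g. to [0, 1]) is what allows plotting them on one graph with all the good models on one side, as described above.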
So a good model has a low value of RMSE and a high value of Nash-Sutcliffe. By grouping all of these models, we noticed that all of the good models are the committee models, the ones developed with the fuzzy committee concept; those models are here. All the committee models are much, much better than all the single models, which are these ones. Also, using multi-objective optimization results in a variety of models: every single dot here is a result of the multi-objective optimization, and all the models on this front are good. By taking the best model for low flow and the best model for high flow and combining them with the committee approach, under these different scenarios of committee models, we obtain the committee models here that are better than every single model.

So, in the end, we concluded that using the committee model to combine these two models results in a better model than any single model. The best model was this one, which is lumped, not distributed, using membership function A and this weighting scheme. The other models, as you see here, all have very good performance, but the observation we made is that increasing the complexity and using 32 models, one per cell, is not as good as we thought it would be. So using a distributed model might be beneficial, but do not increase the complexity of your system too much. And yes, that is the end of my presentation; I would be happy to take any questions.

Thanks a lot, Mustafa. Thanks for your presentation. We have a few questions for you that I will just put up on your screen. People, you can continue sending your questions and comments to Mustafa as he answers this first one.
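The "front" of good models produced by the multi-objective optimization is the Pareto (non-dominated) set. A minimal sketch of extracting it from pairs of objective values, assuming both objectives are minimized:

```python
def pareto_front(objectives):
    """Return indices of non-dominated points, both objectives minimized.

    A point is dominated if some other point is <= in both objectives
    and strictly < in at least one. NSGA-II uses this same dominance
    test internally; this brute-force version is only for illustration.
    """
    n = len(objectives)
    front = []
    for i in range(n):
        dominated = any(
            all(objectives[j][k] <= objectives[i][k] for k in range(2))
            and any(objectives[j][k] < objectives[i][k] for k in range(2))
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

From this front, the member best on the low-flow objective and the member best on the high-flow objective are the two models that the fuzzy committee then combines.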
The question from Nabil Khurchani is: how did you combine the HBV light model with Python? Yeah, so at the beginning I had it in MATLAB, and I rewrote all of it in Python; and with Gerald, we developed it further as a distributed HBV model. It is now on GitHub, so it is publicly available for free access for anyone. I will write down my GitHub link so you can access it; it is freely available in Python and also in MATLAB on GitHub.

Okay, fantastic. As I put up the next question for you, could you please type the link to your GitHub profile, so that people who might be interested can access it? Okay. All right. The next question is from Seymour Shavalking, who says: very interesting, could you please provide some examples of practical applications of the model, as you visualize it, apart from the one you started out with?

So that is a practical application, what I showed about the Jiboa River. The main purpose was that they wanted an operational model, and the catchment is very small, so calibration with a single model did not give the desired results; that is why we were exploring which methods help to improve the simulation of the hydrograph. So yes, this case study is a practical application.

There was also another part of the question: as you visualize it, do you see some other practical applications of the model, apart from the one you started out with? Do you think it would be relevant in some other regions as well?

This is not the only application that has been implemented using this method. It was originally started by Dimitri, my professor, and it has been implemented in other catchments in Luxembourg and Italy, in Europe.
So yeah, if you just search for the publications on the method itself, you will find two or three papers discussing other catchments.

We have received a question on Twitter, which is not about your particular presentation per se but is a more fundamental question, and I am really hoping you could respond to it: what are the advantages of artificial intelligence or machine learning over simple data modeling or analysis; why make the machine learn?

Yeah, so that is a hard question, and it mainly depends on the purpose of the model, so you cannot say that one is better than the other. Usually people go for machine learning models because they are faster, but they require more data, and in some applications you cannot use them because they have no physical meaning: you just obtain them from the calibration or the optimization. It is a black box with no physical meaning in the background. For example, one of the questions asked about flood mitigation or scenarios of land use change, and for such applications you are forced to use physically based models. So it mainly depends on the purpose for which you are building the model.

Daniel, if you are still connected with us, could I please ask you to come in and respond to this question, because it is a fundamental question and not only specific to the last presentation? If you are still with us, Daniel.

Okay, so I will try to answer. I agree with Mustafa: it depends on the specific practical case. Also, models based on artificial intelligence can be suitable for very complex processes that cannot be modeled with other kinds of models. As was said before, an AI model only analyzes inputs and outputs, so the complex process is hidden inside a black box, and that might be suitable for very complex processes.
Thank you so much, Daniel, and also Mustafa. I am afraid we will have to end there, because we are already 17 minutes over time. Thanks everyone, especially Daniel and Mustafa, and of course Jen, who has now logged out. Thanks to all of you in the audience, first of all for turning up in good numbers. The recording of today's webinar will be available later today on TheWaterChannel, at thewaterchannel.tv/webinars; let me put up the link over here. It will also be shared on the website, in newsletters, and through the other channels of communication that you are well familiar with; let me put up one of them over here. For now, I would just like to say thanks again, goodbye, and see you at the next webinar. Thanks a lot, guys. Thank you.