OK, thank you very much, Abraham. I understand that the topic of this webinar is artificial intelligence in water resources. My suggestion would be to add one word at the front and call it ICT and artificial intelligence in water resources, because the presentation I will be showing is not about artificial intelligence; it is about using information and communication technologies to bring together several tools that support the management of water resources in general.

So, from the top: I am Daniel Vasquez, as I said before. I work at Itaipu, which is a major hydropower plant. Inside Itaipu we have a research center called the International Center on Hydroinformatics, and at the moment I am coordinating this center.

Today I will show you one of our previous projects, an information system for flow forecasting. The project started about two years ago, and its first phase is already finished. During this first phase we aimed to make use of available resources. By this I mean we wanted to use available models held by various institutions and universities, available data, and available technical resources, in order to build a tool that supports the decision-making process on flood-related issues for our country.

At that time we had identified the following problems, which are common to many countries. The availability of hydrological data and its accessibility are important limitations at the national level. The little information that can be found is dispersed among many institutions, and, at least in our country, there is no tool that provides efficient access to this data and information. And of course the main problem is the floods themselves, so we wanted to provide a tool that helps reduce the impact of floods in our country.
To tackle these problems, we set this objective: to develop an information system for flow forecasting for the entire country, with a view to its future implementation in the corresponding institutions. The specific objectives were, first, to centralize and facilitate access to hydrological data and information for its various uses; second, to cooperate with universities on the construction and provision of modeling tools for the main rivers of the country; and third, to make available a platform offering the services of a flood early warning system.

During the planning of the project we came up with this operating diagram. As you can see, the entire process has four main components. The first is related to data in general. Data sits in many institutions, and it comes in different formats, different time steps, and different sizes, and in different types, such as geospatial, hydrological, and meteorological, so there is a large amount of data being collected from all these sources.

The second component is related to processing. Here we used the Python programming language to extract data from all these sources, to develop the scripts that convert the different formats into convenient ones, to manipulate geospatial data, and to perform many types of calculations. We also used Python to drive the different models provided for this project: a semi-distributed hydrological model and a hydrodynamic model that simulates the flow of our main river. In this same component we used free GIS software to inspect all the information being collected. So, as you can see, in the first component a big volume of data is being captured.
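As a hedged illustration of the format-conversion scripts just described, the sketch below normalizes station records that arrive with different timestamp conventions into daily values. The formats and sample values are invented, not taken from the actual project.

```python
from datetime import datetime

# Hypothetical sketch: institutions deliver station records with different
# timestamp formats; normalize them all into daily mean values.
KNOWN_FORMATS = ["%Y-%m-%d %H:%M", "%d/%m/%Y %H:%M", "%Y%m%d%H%M"]

def parse_timestamp(raw: str) -> datetime:
    """Try each known source format until one matches."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

def to_daily_means(records):
    """Aggregate (timestamp string, value) pairs into daily mean values."""
    buckets = {}
    for raw_ts, value in records:
        day = parse_timestamp(raw_ts).date().isoformat()
        buckets.setdefault(day, []).append(value)
    return {day: sum(vals) / len(vals) for day, vals in buckets.items()}

mixed = [
    ("2020-03-01 06:00", 1.0),   # one institution's format
    ("01/03/2020 18:00", 2.0),   # another institution's format
    ("202003020600", 0.5),       # a compact machine format
]
print(to_daily_means(mixed))     # per-day means on a common ISO key
```

In a real pipeline the same idea extends to column orders, units, and time steps; the point is that every source is reduced to one common representation before it reaches the models.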
And in the second component, a big volume of information is generated by the models, so we needed infrastructure to host all this data and information. We set up a dedicated server to store the information, another dedicated server to run the models in an operational fashion, and a third dedicated server to handle the map services and the applications shown on our web platform.

The last component is where we show the front end of the product. We built a web page showing all the components you see here: you can see the data, you can follow the processing of the models, and you can download all this information from the platform.

Very quickly, I have listed here some of the functionalities the application provides. First, there is an interface that centralizes the hydrological data of the country coming from all the institutions I showed you before. Another interface shows, in real time, remote-sensing precipitation from the GPM project. Another interface shows the forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF); the variables we capture are precipitation, soil moisture, temperature, and wind fields. Finally, we connected to the OpenStreetMap project to access cartographic data and feed our basemaps.

All this information is managed by a spatial data infrastructure called GeoNode. This web-based software is in charge of merging all the cartographic and geospatial information, and it also provides the services to access the data and information being generated.
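The access services mentioned here are standard OGC endpoints. As a rough sketch (the server URL and layer name below are invented examples, not the project's real endpoint), a WMS GetMap request against such a map server can be assembled like this; the parameter names come from the WMS 1.1.1 specification:

```python
from urllib.parse import urlencode

# Hypothetical sketch: build a standard OGC WMS GetMap URL for a map layer.
# The base URL and layer name are invented; the query parameters are the
# standard ones defined by the WMS 1.1.1 specification.
def getmap_url(base_url, layer, bbox, width=800, height=600):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(c) for c in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

url = getmap_url("https://example.org/geoserver/wms",
                 "floods:inundation_extent",
                 (-62.6, -27.6, -54.2, -19.3))
print(url)
```

Because the request is just a URL, any GIS client that speaks WMS can render the same layer without direct database access, which is what makes this kind of service a clean publication channel.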
Then, as I told you before, we used Python to write the scripts that drive the models, the hydrological one and the hydrodynamic one, and also the scripts that couple them together and run them on a daily basis. A lot of scripting was also needed to perform Monte Carlo simulation, in order to provide a probabilistic forecast of flows and water levels.

Another feature we included is on-the-fly statistics. Taking the real-time precipitation, the system calculates statistics by district, city, and basin for the entire country. Each time we receive new information, these are computed automatically and you can see them through the interface. Finally, we tried to encapsulate all this information in a very simple form, so that decisions can be taken with it: control panels were developed, and these control panels can trigger notifications when needed.

This, very quickly, is how our web platform looks. This is one of the interfaces; we focused on usability and tried to make the interface very friendly. Here, for instance, you can access one specific station and see the records of water level, precipitation, and other relevant information. This other interface, the one I told you about before, shows real-time precipitation from the GPM project, updated every 30 minutes with a delay of six hours. And this is the interface where we capture the forecast from the European model.

In the second and third components, we make use of this information together with the available models. We do not have an artificial-intelligence-type model here; we have conceptual models. On one hand we have a semi-distributed hydrological model, and on the other a hydrodynamic model that covers the entire river. So we didn't build these models ourselves.
What we did was contact our partners at the Catholic University of Asunción and bring these models into our web platform infrastructure. We developed the scripts to couple them, as I told you before, and to run them on a daily basis. In this interface you can ask for many types of results: you can move around with your mouse, go to each basin, each node, each reach, and each cross-section, and request the results the model provides. As you can see, the interface is very similar to the native software used to model these processes, which were HEC-HMS and HEC-RAS. When you access any of these points, you get the model results in a chart: the green line shows the historic data, the historic simulation, and the blue line shows the forecast. You can get surface flow, precipitation loss, base flow, infiltration, and all the variables the hydrological model provides. The same goes for the hydrodynamic model: you can navigate along the entire river, request the information at each station, and get the water level and flow.

There are other ways to show the same information and results, and we tried to do it with flood maps. The results from the hydrodynamic model are plotted in the application, and then we contrast this information with available cartographic data coming from sensors or from the OpenStreetMap project. Here we can generate on-the-fly calculations showing how many buildings, houses, schools, and parks are affected at each time step of the model simulation. We also explored Monte Carlo simulation a little, and were able to generate probabilistic forecasts, both in flow charts and in maps. And this is the control panel I told you about before. Until now we have seen the technical information, but we also need to present this kind of information in a simpler way.
So for this we have these control panels, where the red dots show the places expected to have problems; the control panel gives just general descriptions and the sites where these floods are expected. When this happens, you can trigger notifications via SMS or email to the relevant local authority. This is the same thing shown as an animation; as you can see, we focused a lot on usability, so you can go through all the information being generated, consult it on the maps, or go anywhere in the country with your mouse, click, and get the results generated for that particular place.

I think I am running out of time, so, very quickly: we also performed data quality assessments, again with automatic scripts, and all the information we generate is managed by the spatial data infrastructure I told you about before, GeoNode. As you know, these tools are web based, so it is a very clean way to access all the shapefiles, rasters, and even the vector values being generated, with all the metadata and all the standards enforced by these tools. The purpose was to make the information available to the public as well, and to facilitate its acquisition. For this we use the WMS service, which lets you connect from any GIS software to our database, so you can very quickly access the information I showed you before.

I guess this is my last slide: some ideas and next steps. Of course we could not cover everything we wanted during this project, so in a second phase we could work with data standards such as WaterML, in order to facilitate intercommunication among the databases of many institutions.
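To make that interchange idea concrete, here is a very simplified sketch of serializing a time series as XML so another institution's system could ingest it. The tag names below are illustrative only; they are not the official WaterML 2.0 schema, which is considerably richer.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative XML serialization of a water-level series.
# Tag and attribute names are invented stand-ins for a WaterML-style
# exchange format; station code and values are made up.
def series_to_xml(station, variable, values):
    root = ET.Element("timeSeries", station=station, variable=variable)
    for iso_date, value in values:
        point = ET.SubElement(root, "point", dateTime=iso_date)
        point.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = series_to_xml("ASU-01", "waterLevel",
                        [("2020-03-01", 3.2), ("2020-03-02", 3.5)])
print(xml_doc)
```

The benefit of agreeing on such a schema is exactly the one mentioned above: each institution can keep its own database while exchanging observations in one shared, machine-readable format.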
Also, during the project we found out about these kinds of technologies, I mean business intelligence and reporting tools. They really facilitate the management of data: you can generate and customize your plots, tables, and charts in a very easy way. There is free software such as Metabase; Tableau, I guess, is not free. And, very importantly, the next step is to work on cooperation agreements with universities and institutions in the water sector for the operational implementation of these tools. So that's it. I hope this project is of interest to you, and if you have any questions I am here to answer them. Thank you very much.

Thanks a lot, Daniel. We do have a lot of questions. Some are directed specifically at you, and some are more general questions that I would like to put to both you and Mustafa. First, some questions directed specifically at you. The first one, from Surafel: what resolutions are the ECMWF precipitation and temperature forecast datasets? The second question: how long is the forecast length, five, seven, ten, or fourteen days? And the third question: if they are freely available, how can one access them as raw data to use in flood forecasting? Would you like to respond to that?

Yes, sure. For the ECMWF forecast, the spatial resolution I believe is 10 kilometers, and we are receiving the data that is freely available, with a horizon of seven days. And yes, they are freely available to some extent: if you pay you get real-time data, while the free data comes with some delay. If you pay, you can also get ensembles and other kinds of products.

OK, the next question, directed specifically at you, is from LaSanta, who asks: what is the software used for geospatial data inspection? OK, this is an interesting question, because in the center we develop these tools and we try to implement them in the institutions.
So we focus a lot on free software. For this particular case we used QGIS, and in Python you also have the ability to manipulate this kind of information, but in general terms we use QGIS.

OK, the next and last question before we move on to the next presentation is from Gopah Kumar, who asks: is the role of the different associated institutions to provide data, or are they also involved in running the models and providing results to the interface? In our particular case our partners were the academia, that is, one university, plus the national hydrological and weather service, and us. The universities already had these models, so they provided the models; the national service provided the data; other types of data are freely available on the internet; and we developed the platform and all the processes you saw in the diagram I showed you before.

OK, sorry, there is one more question specifically directed at you, if you could quickly respond to it. Sorry about that, Mustafa. Tatiana asks: how has the inter-institutional cooperation around this project been? How did you achieve homogenization of all the inputs and variables? You mentioned something about conveniences; can you expand a bit more on this?

OK, so the inter-institutional collaboration around this project was, in this first phase, a little bit informal, and this is because it is very hard to propose a project to some authorities if you do not have a prototype, you know? It is very hard to convince them without one. So we got together with some technicians from these institutions and built this prototype in an informal way, so as to have the product in a short period of time. With that, we are trying to trigger a second phase, where we get funds to make all the processes much more robust and to include even more information and more institutions. That is the way we dealt with this project.
And I don't remember talking about conveniences; I am not sure what they mean by that.