It is an interesting topic. What we saw in the morning was a very simple equation, just f(x) = x squared minus x, with only one constraint, which was not really a constraint, only an availability condition. Here we have applied the same technique, with a little modification, to reservoir operation. First I will talk about single reservoir operation, then I will explain how we extended it to multi-reservoir operation. The inflow into the reservoir is the fundamental requirement for developing optimal operating policies. The inflow is treated as a stochastic variable, through either implicit stochastic dynamic programming or explicit stochastic dynamic programming. In this study we modeled reservoir operation, single reservoir as well as multi-reservoir, using explicit stochastic dynamic programming, and then compared the results with a simple randomized robust search technique, namely the genetic algorithm. So there are four different pieces of work, and we will see each of them. The first objective was to develop optimal operating strategies for a single multi-purpose reservoir using stochastic dynamic programming, then to develop a genetic algorithm model for the same reservoir and compare its performance with the SDP. Until then we had not seen any literature stating that a GA can also capture the stochasticity. The second objective was the extension of the single reservoir system to a multi-reservoir system, with both SDP and the genetic algorithm. For the study area, I have switched to a different basin. It is the Kodayar river basin, in Tamil Nadu on the border of Kerala, with a command area of 30,000 hectares. It is a very complicated reservoir system even though the number of reservoirs is small: there are four reservoirs and one diversion weir. So this is the location of the study area, the Kodayar river basin; I will quickly go through it.
So, this is the schematic representation of my study area. This reservoir, Pechiparai, is the major reservoir. The second reservoir is Perunchani, then Chittar 1 and Chittar 2. Pechiparai, Chittar and Perunchani are in parallel, whereas Chittar 1 and Chittar 2 are in series, so the system is a little complicated. The distribution is also complicated: the first demand point can be met only by these two reservoirs, while the other demands, demand 2, demand 3 and demand 4, are met from the Puthen weir. Demand 4 is for industrial and municipal supply; demands 1, 2 and 3 are for irrigation. There is no power production here; it is only irrigation and industrial-municipal demand. So this is the schematic representation of the study area. My kind advice is: if you want to apply modeling techniques to any water resources system, first understand the system. These are some of the figures and system characteristics, the gross capacity and net requirement, and these are the statistical parameters. The only thing I want to impress upon you is the coefficient of variation. For the major reservoir, Pechiparai, the coefficient of variation is very low, which means there is not much variation in the inflow over the past 30 years for any month; this is its time series plot. This is another reservoir, Perunchani. Here also there is not much variation in the inflow except during summer, that is, only in the month of May; otherwise the variation is very small. In the other two reservoirs, which are in series, the variation is very high: see the coefficient of variation, it is more than 1 or even 2. Similarly, in Chittar 2 the coefficient of variation is very high. So a simple LP or DP will not be sufficient to capture these inflow characteristics; in such a situation it is compulsory to consider the stochastic variation of the inflow, because the variation is very high.
So, first I will spend about ten minutes explaining the single reservoir operation. It has two parts: one is the SDP model and the other is the GA model. In stochastic dynamic programming, the inflow and the initial storage are discretized into 3 to 9 states. Since it is only surface water, the curse of dimensionality does not come into the picture for a small number of states, but if the number goes beyond 10 we end up with the curse of dimensionality. So we went up to 9 discretizations of both inflow and initial storage. The transition probabilities required to incorporate the stochasticity were estimated as a lag-one Markov process from 32 years of monthly historical data. The inflow and initial storage are discretized into 3 to 9 states, I mean 3 to 9 ranges, and each range results in a different operating policy. We run the SDP model until steady-state policies are arrived at, and these steady-state policies, derived for the various discretizations, are evaluated through a simulation model. For the methodology, we used the same backward-moving dynamic programming; the objective function is minimizing the squared deviation from the target release and target storage. This is the fundamental objective function for any DP or SDP model: if you take the book by D. P. Loucks, Water Resource Systems Planning and Management, this is the standard objective function. Here the stages are time periods, as we saw this morning. This is a clearer picture of how my dynamic programming moves from one stage to the next: in each time period I find out the inflow, the state (the state is storage) and the release benefit; then a transformation function carries it to the next stage. And this is my recursive relationship at the end of the 12th month.
This is the general recursive equation; this morning you might have seen that I added one more term for the stochasticity of the groundwater variation. Here there is no groundwater, we have considered only surface water, so only one state variable is there, the storage; earlier it was release and groundwater, here it is only storage. This general recursive equation gives the relationship between one time period and the next, including the stochasticity. And this is the termination criterion: I run the model a number of times, and when the net benefit of the current time period and that after a further cycle of time steps in my planning period differ by an essentially constant amount, the model has converged. Mathematically, the objective function is B(k, i, l, t), where k is the discretization of the inflow, i is the discretization of the initial storage, l is the discretization of the final storage (or release) and t is the time period. So I have a four-dimensional matrix here; strictly it is wrong to call it a four-dimensional matrix, I should say a fourth-order array. It equals the squared deviation of the release made for that combination from the target release, that is, the release required versus the release the model makes, plus the squared deviation of the actual storage resulting from the model from the target storage; both deviations should be small. My objective is the minimization of this value over a period of 12 months, through the recursive equation, subject to the continuity constraint and then the release, storage and surplus constraints; these are nonlinear constraints. This is the discretization we were talking about: how I divide my storage into n ranges. It is a monthly model; for example, for three ranges, for each month this is the maximum value measured and this is the minimum value measured.
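The backward recursion with the squared-deviation objective just described can be sketched in a few lines of Python. This is only a minimal illustration: the storage grid, inflow grid, transition matrices and target values are invented toy numbers, not the Kodayar data, and evaporation losses are ignored in the continuity relation.

```python
import numpy as np

# Toy discretization: 3 storage states, 3 inflow states, 12 monthly stages.
S = np.array([10.0, 20.0, 30.0])        # storage state values (Mm3), assumed
Q = np.array([5.0, 15.0, 25.0])         # inflow state values (Mm3), assumed
T = 12
rng = np.random.default_rng(0)

# Lag-one transition matrices P[t][k, k2]: P(inflow state k in month t -> k2 next month).
P = rng.random((T, 3, 3))
P /= P.sum(axis=2, keepdims=True)       # every row sums to 1

TR = np.full(T, 12.0)                   # target release per month (toy)
TS = np.full(T, 20.0)                   # target storage per month (toy)

# F[k, i]: expected cost-to-go given inflow state k and initial storage state i.
F = np.zeros((3, 3))
policy = np.zeros((T, 3, 3), dtype=int)  # best final-storage state l
for sweep in range(50):                  # repeat annual cycles toward a steady state
    for t in reversed(range(T)):
        F_new = np.full((3, 3), np.inf)
        for k in range(3):
            for i in range(3):
                for l in range(3):
                    release = S[i] + Q[k] - S[l]   # continuity, losses ignored
                    if release < 0:
                        continue                   # infeasible transition
                    cost = ((release - TR[t]) ** 2 + (S[l] - TS[t]) ** 2
                            + P[t, k] @ F[:, l])   # expected future cost
                    if cost < F_new[k, i]:
                        F_new[k, i] = cost
                        policy[t, k, i] = l
        F = F_new
# policy[t, k, i] is the end-of-month storage state to maintain: the operating table.
```

The table `policy[t]` plays the role of the monthly operating policy discussed later: given the inflow state and initial storage state, it names the final storage range to maintain.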
So, I divide this into three parts such that my minimum and maximum fall within the class intervals, and I have to divide the class intervals such that every interval contains at least one measured value, so that I have a transition probability everywhere; otherwise my transition probabilities will look like 1 0 0, 0 0 1, and so on. I will show what the transition probabilities are. Like that, each month can be divided; this is for four ranges, and you can see the same values divided into four, five and six ranges. And this is an example of a transition probability: the probability of the inflow moving from this range to that range from June to July is 0.46. That means, out of the 32 measured years, 0.46 of the time an inflow in this range in June was followed by an inflow in that range in July. It is a simple Markovian process; you have to estimate this for every month, so one such table comes for each of the 12 months, and I have given it for all the ranges. Another important point here: when I add up a row of this table, it leads to a probability of 1. The probabilities from one range to every range, when summed, equal 1 in all cases, whatever the number of discretizations and whatever the time period. Then this is the demand. Since we consider irrigation and industrial-municipal supply, we treat it as a multi-objective reservoir; there is no power production, and incidentally there is no power house in this dam either. Since it is industrial, that demand is constant; this is the irrigation demand, which has been estimated using the modified Penman method. So this is the total demand in the command area of this reservoir. Once we give the input, this is the output: given the initial storage state and the inflow state, what is the expected final storage.
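Coming back to the transition table for a moment: the counting behind it can be sketched as follows. The function is my own construction, and the June/July inflows here are random stand-ins for the 32 years of records, with the class-interval edges assumed.

```python
import numpy as np

def transition_matrix(this_month, next_month, edges_t, edges_t1):
    """Lag-one Markov transition probabilities between discretized inflow ranges."""
    a = np.clip(np.digitize(this_month, edges_t) - 1, 0, len(edges_t) - 2)
    b = np.clip(np.digitize(next_month, edges_t1) - 1, 0, len(edges_t1) - 2)
    counts = np.zeros((len(edges_t) - 1, len(edges_t1) - 1))
    for i, j in zip(a, b):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Each row: fraction of years in which range i this month led to range j next month.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Illustrative stand-in for 32 years of June and July inflows (Mm3); the class
# intervals are chosen so that every range holds at least one measured value.
rng = np.random.default_rng(1)
june = rng.uniform(0.0, 30.0, 32)
july = rng.uniform(0.0, 30.0, 32)
edges = np.array([0.0, 10.0, 20.0, 30.0])   # three ranges
P_june_july = transition_matrix(june, july, edges, edges)
```

Every row with at least one observation sums to 1, which is exactly the row-sum property mentioned above.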
That means, if your inflow is this much and your storage is this much, then you have to maintain a final storage of this much; that is the operating policy. It is in the form of a table, which we could convert into a picture, but as modelers we keep it in tabular form because it is very useful for further evaluation. This is for 4 state discretizations: for each inflow state and each storage state, these are the final storage numbers. The entry is not 1 million cubic metres; it is the first discretization, the range I have to maintain. For the steady-state policy, the developed SDP model searches for the reservoir storage level to be maintained by the end of the month, for any given initial storage and inflow state, such that the expected value of the objective function is minimum. The steady-state policy has been reached when the difference between F(n+12) and F(n), as I explained, is essentially constant for all discretizations, not just for one. For the current study, the expected annual sum of squared deviations converged in 48 iterations; that means it was solving, and keeping in RAM, a matrix of at most 9 by 9, 48 times. For the ninth range, since it is a wider range, the convergence occurred at a faster rate, within 30 iterations.
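The termination test just described, that F(n+12) minus F(n) must be essentially constant for every discretization, can be written as a small check. The function name and tolerance here are my own choices, not from the lecture.

```python
import numpy as np

def steady_state_reached(F_year_n, F_year_n_plus_1, tol=1e-6):
    """SDP termination criterion: the annual increment of the cost-to-go,
    F(n+12) - F(n), must be essentially the same constant for ALL
    (inflow, storage) discretizations, not just one of them."""
    diff = np.asarray(F_year_n_plus_1) - np.asarray(F_year_n)
    return float(diff.max() - diff.min()) < tol

# Increment is 5 everywhere -> steady state; one cell off -> not yet converged.
print(steady_state_reached([[1, 2], [3, 4]], [[6, 7], [8, 9]]))    # True
print(steady_state_reached([[1, 2], [3, 4]], [[6, 7], [8, 10]]))   # False
```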
Then comes the performance evaluation. Similar to the earlier work, we again used the Thomas-Fiering model and studied the performance of the policies. These are the four performance indicators we considered. The first one is the monthly frequency of irrigation deficit: how many months of deficit occurred over the simulated period. Then the annual frequency: the same year may have deficits in two different months, but annually that is counted as one year, so it tells you in how many years a deficit occurred. Then, in million cubic metres, the quantity of deficit, and the last one is a ratio. These are the different cases: case 1 is three discretizations, case 2 is four, then five, six, seven, eight, and case 7 is nine discretizations. With the increase in discretization the frequency of irrigation deficit has not reduced much; if merely increasing the discretization resulted in zero deficit, something would be wrong in the model. It does not give a very large jump in performance, but there is a slight improvement: with three discretizations there are 348 months of deficit out of 1200, whereas with nine discretizations there are 331 months out of 1200. That is, out of 100 years, deficit occurred in about 27 years even when operating with the model; and this is the volume, and this is the ratio, about 10%. This graph shows the annual result, and this one the monthly distribution of the volume of deficit for each case; the deficit occurs mainly during November, December and January, when there is no rainfall in that region. That is the conventional modeling.

Then we applied a genetic algorithm, with the same methodology we explained in the morning. Here, for a single reservoir system, the chromosome is a single string of 12 substrings representing the decision variables, the 12 monthly releases; each substring represents one release, and the number of substrings varies from one reservoir system to another. The fitness function is the same: the squared deviation between the target and the actual release. Then comes selection, deciding which members of the population to pick for mating. Since a large population is involved, we cannot do it by hand with a simple calculator, so we used the very fundamental method called the roulette wheel method. Many people claim that the roulette wheel gives a biased picture and that tournament selection would be better, but for a simple reservoir operation we found it works well enough. Then crossover: although three types of crossover exist, instead of one-point we used uniform crossover, so that the exchange happens between the variables, not within a variable. For mutation we used a modified mutation: a mutation is kept only when the performance improves; otherwise the child chromosome is reverted to the old one. The GA model is formulated with the same objective function, but we have no discretizations, no ranges and no transition probabilities; even deterministic dynamic programming needs ranges, whereas here it is a simple calculation. These are the constraints; in fact this is unconstrained optimization, in the sense that I do not put the constraints into the objective function, they are only rules fixing my boundaries. As for the result: even though the decision variables are releases, we had only 53 binary bits in our string, 53 zeros and ones.

Initially we considered the releases as whole numbers, because decoding the string then gives only whole numbers. It is possible to consider decimals as well, but then my string length increases. So first we worked without decimals and then with decimals. For example, if I consider a variable without decimals, as a whole number, my substring length will be 5 or 6 bits; if I consider decimals, each substring length varies from 12 to 13 bits depending on the variable's maximum value, so the number of binary bits keeps increasing. Then, for the initial search, we assumed a probability of crossover of 0.7, a probability of mutation of 0.02 and a population size of 25. We ran the program with these values and obtained the objective function value; this is the initial value we received for a population size of 25 and a crossover probability of 0.7, and this is my system performance. Now I do the sensitivity analysis: I have to find out at which population size my objective function stabilizes. So the next step is to keep increasing the population size while the number of generations remains the same. When I increase the population size to 50 there is a drastic change in the system performance, but when the population crosses 150 the system performance stops improving; beyond that, whatever the population, the chromosomes are already well mixed at 150. So I take the optimal population as 150 for this problem, and this varies from problem to problem. With the optimal population of 150, everything so far was run with a crossover probability of 0.7; now I varied the probability of crossover from 0.6 to 0.88 with an increment of 0.02. This is the objective function for a population of 150 with a crossover probability of 0.6; as I increase
the probability of crossover, the system performance improves, and we reach the minimum value at a crossover probability of 0.76; beyond that it again starts giving poorer results. So from the crossover sensitivity analysis we found the optimal value to be 0.76. Then, with the optimal population of 150 and the optimal crossover probability of 0.76, we varied the number of generations; it was initially 25, and we varied it from 25 to 250. This is how the system performance behaves: the number of generations also produced a drastic improvement, and the performance stabilizes after 175. So we can say that the GA model performs best in this case with a population of 150, a crossover probability of 0.76 and 175 generations. The resulting system performance, that is, my objective function, the sum of (Rt minus Dt) squared plus the storage term, is 600, whereas for my SDP model it is 760. Then I compared the releases: the actual demand estimated in the command area, the actual release made from the reservoir to meet that demand, the release from my SDP model and the release from my GA model. In most years even the actual release is not equal to the actual demand; the operators used to release less than the required demand. And it is very interesting that the SDP model release is more or less equal to the actual release, though not to the actual demand; only the GA release equals the actual demand. So this is the performance of SDP and GA, and it is very clear that the GA can take the stochasticity into consideration without it being stated explicitly: we have not incorporated the stochasticity anywhere in the GA model, yet its performance is better than the stochastic dynamic programming model for the single reservoir. Now we want to extend this to a multi-reservoir system, for the same study area.
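The tuning procedure we followed above, fixing all but one parameter and sweeping it, can be sketched as a one-factor-at-a-time search. Here `run_ga` is a placeholder for a full GA run returning the objective value (lower is better); the sweep ranges mirror the ones used in the lecture, but the function itself is my own sketch.

```python
def one_at_a_time_search(run_ga):
    """One-factor-at-a-time tuning: population size first, then crossover
    probability, then number of generations. run_ga(pop, pc, gens) must
    return the objective value to be minimized."""
    best = {"pop": 25, "pc": 0.7, "gens": 25}
    # 1) population size, crossover held at 0.7
    scores = {n: run_ga(n, best["pc"], best["gens"]) for n in range(25, 301, 25)}
    best["pop"] = min(scores, key=scores.get)
    # 2) crossover probability 0.60 .. 0.88, step 0.02
    pcs = [round(0.60 + 0.02 * i, 2) for i in range(15)]
    scores = {p: run_ga(best["pop"], p, best["gens"]) for p in pcs}
    best["pc"] = min(scores, key=scores.get)
    # 3) generations 25 .. 250
    scores = {g: run_ga(best["pop"], best["pc"], g) for g in range(25, 251, 25)}
    best["gens"] = min(scores, key=scores.get)
    return best
```

With a toy objective whose minimum sits at (150, 0.76, 175), the search recovers exactly the values reported above; note that one-at-a-time search can miss interactions between parameters, which is why the result is problem-specific.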
I have shown you the study area already. First we carried out the single reservoir work only for Pechiparai; now we consider all four reservoirs together and extend the SDP and GA models. What modifications do we have to make? It is not just four times the single reservoir system simply because there are four reservoirs; the state space is the product of the combinations, since the initial storage and inflow states of reservoir 1 must be combined with those of the other reservoirs. In the end we could not handle fine discretizations; even five discretizations was straining the computation, so for the four-reservoir system we worked mainly with three and four discretizations, because the matrix size increases, and on top of that the transition probabilities of every individual reservoir must be incorporated. This is the objective function over the initial storage and inflow states of the first, second, third and fourth reservoirs: the expected net benefit of the current time period for the various combinations, plus the previous decisions weighted by their transition probabilities. This is the general continuity equation for the reservoirs, except for the downstream reservoir of the series pair, which receives the release and surplus from the upstream reservoir; its continuity equation is modified accordingly. For the various state levels it has its own initial storage, its own inflow, its own release, evaporation loss and surplus, plus the additional input of release and surplus from the upstream reservoir; for the other reservoirs those components are absent. These are the release constraints, and this is the special release constraint incorporating all those variables; it is essentially a mass balance application. Similarly, the reservoir storage capacity and surplus constraints all remain the same, except for the storage and inflow
discretizations. Here we could go up to a maximum of six: the storages are discretized into six state variables, and likewise the inflows are discretized into six state variables, in all four reservoirs. These are the inflows of the different reservoirs: this is for Perunchani, this is for Chittar. We should discretize these values carefully, so that the transition probability matrix does not contain rows full of zeros; I will tell you why, and what the problem is. This is for the fourth reservoir, then Perunchani, Chittar 1 and Chittar 2. Suppose in a transition probability row, out of six ranges, four are zero while the first range is 0.5 and the second is 0.5. Mathematically the total probability is one, but the other ranges are zero, and zero is a very funny number here: when we carry it through the recursion, my solution space falls only within the non-zero ranges, which means it never comes out of that lower or upper portion of the space. That problem is called trapping; not the reservoir trap efficiency, but stochastic dynamic programming trapping, meaning my matrix has got trapped. To overcome this trap there is a simple thumb rule: at least 70% of my discretized ranges should have entries, that is, 70% of the ranges should carry some inflow probability. That is why we should be very careful with the discretization. Then the demands: since we have four different demands in the system, how is each demand met? Demand 1 has to be met from reservoirs 1, 3 and 4; the Perunchani reservoir, being downstream, can never meet the first demand. Of the remaining demand, 75% is met by this reservoir; these distributions are our assumptions from simple calculation, and the total is 100. If this distribution varies, then my entire policy varies, because I am fixing the target demand based on this distribution. So this is how the target release is fixed: of the irrigation demand in this command area, 45% is met by reservoir 1, and the remaining shares add up so that the total is the same. This is how the distribution among the reservoirs works, and it is in line with the PWD release policy. And this is my optimal operating policy: given the inflow state and the storage state, what the final storage state should be. If I know the inflow and I know the storage, I can tell what release is to be made from the reservoir; I need not bother about any other parameters. For this multi-reservoir SDP, the time period is 12 months, the inflow is discretized into six ranges, the initial storage volume is also discretized into six ranges, and the operating policy identifies the final storage. The evaluation is the same: we used the same performance indicators as for the single reservoir, and these are the cases with three, four, five and six ranges, six being the maximum. When I increase the number of ranges there is a considerable improvement in some of the reservoirs, particularly the reservoirs in series: when the reservoirs are in parallel there is not much improvement with finer discretization, but for the reservoirs in series there is a great increase in performance. That is the finding from the SDP model, and still my system runs with deficits, mainly during the months of January and February, with a deficit quantity of 33 to 36 million cubic metres. Then we extended the same GA model to the multi-reservoir system. Since there are four reservoirs, there will be 4 times 12 releases as decision variables, so 48 decision variables overall. When we decode, the maximum value of a decision variable is the demand of that time period. The total number of binary bits is 240, whereas for the single reservoir system
it was only 53; it is not just four times that, since it depends on the combined releases. So we had 240 binary bits, and each population member carries 240 zeros and ones. We followed the same methods: roulette wheel selection, uniform crossover and modified mutation. The objective function is the same, but the constraints are different; the main differences are in the mass balance and the release constraints, because for the reservoirs in series you have to account for the release and surplus from the upstream reservoir. With this setup the initial search was carried out. Since we had the experience of the optimal crossover probability from the single reservoir system, here we started the initial run itself with a crossover probability of 0.78 and a population of 25, and this is how the system performance varies: there is a drastic reduction from about 3,500 downwards. Another difference between the single reservoir and multi-reservoir models is that in the multi-reservoir system we have not accounted for the variation in the storages, because that would give an unrealistic picture of the system performance; we omitted the difference between the expected storage and the actual storage, and considered only minimizing the difference between the target release and the actual release. So this is how my system worked. With the optimal population of 150 we varied the probability of crossover from 0.6 to 0.9 with an increment of 0.02, and we found that as the number of substrings increases we need a higher and higher probability of crossover: for the single reservoir system it was 0.76, while here we need at least 0.84 to get a better result. Then, with 150 generations and a crossover probability of 0.87, we varied the number of generations, and we found, surprisingly, that the system performance, that is, the difference between the target release and the actual release, became zero. This can never occur in reality, so again we went back to the stochastic dynamic programming to see what the problem is. When one model gives a far better result, we have to check our conventional model again and ask why it cannot match. The main problem in the SDP model is the discretization. With a finer and finer discretization, my SDP model would also approach this zero performance, zero performance meaning zero deviation; but to reach that I would need at least 32 discretizations of the inflow and the storage, which is not possible even for a single reservoir system. That is why the soft computing technique was so helpful in determining a policy that performs better: we can operate the reservoir up to that fine-tuned level. This is the comparison of the actual demand, the actual release, the SDP model release and the GA model release for each reservoir. My GA model release works very well for both high-inflow and low-inflow months; it does not differentiate between them, whereas my SDP model differentiates through the stochasticity, the transition probabilities. The GA release is almost equal to my demand because of the zero deviation, whereas my SDP model release is more or less equal to the actual release. These are for three reservoirs; the fourth reservoir is a stabilizing reservoir. And these are the operating rule curves, the curves to be kept at the dam site; they give the final storages to be maintained in each month. The SDP system performance of 1,375 is not surprising, because six discretizations for a multi-reservoir system is very coarse; even seven discretizations I am unable to do, so how can I go to 13 or 14 to obtain a nil deviation, nil meaning the best performance? The GA model performs better, and we do not need any discretization, any transition probabilities, any of those things. With the SDP model, the annual average irrigation deficit decreased from 131 to 104 as the discretization improved from three to six, and the other indicators also reduced. But the notable result is that, on comparing the SDP model releases, the GA model releases and the actual releases with the actual demand, the GA performed better than the SDP. This is the main advantage of using the GA in this analysis: I have had no problem with the curse of dimensionality, no problem of estimating the transition probabilities, and no problem of modeling the complicated discretization of storages. That discretization of storages and inflows needs experience: even for the same minimum and maximum, different people will discretize the range into different values after looking at the historical inflows, so as to avoid any trapping in the transition probabilities. So this is about the GA.
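As a closing illustration of the trapping problem mentioned for the transition matrices, the 70% thumb rule can be checked mechanically. The function and the example rows below are my own constructions, not from the lecture data.

```python
import numpy as np

def trapping_risk(P, min_nonzero_frac=0.7):
    """Flag transition-matrix rows violating the thumb rule that at least
    70% of the discretized ranges should carry some inflow probability."""
    frac_nonzero = (P > 0).sum(axis=1) / P.shape[1]
    return frac_nonzero < min_nonzero_frac      # True -> row risks trapping

# A row like [0.5, 0.5, 0, 0, 0, 0] sums to 1 but keeps the recursion
# locked inside two ranges; the second row spreads over all six ranges.
P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
              [0.1, 0.2, 0.2, 0.2, 0.2, 0.1]])
print(trapping_risk(P))   # [ True False]
```

In practice such a check would be run on every monthly transition matrix of every reservoir before starting the SDP recursion, and flagged rows fixed by re-choosing the class intervals.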