If you look at the literature, linear programming is mainly used to determine the optimal cropping pattern in an irrigation system, that is, how much area should be irrigated under each crop. That is my objective function. So in an LP model, the irrigation intensity is an output: we do not have to specify it, the model tells us the area under each crop in each season, and the various irrigation intensities come out as output variables. Whereas in an FLP model, yes, we have to supply these various intensities as input. But nobody has treated the irrigation intensity itself as a fuzzy variable, because the area under irrigation depends upon our input. Only the objective function value, say the monetary benefit, is treated as a fuzzy variable; the irrigation intensity is not. I think it could be considered and incorporated as one of the fuzzy variables, but then the whole formulation would change. The second question is: from a given data series, how can one decide whether the objective function should be linear or non-linear? It is a tricky question. Various people have tried to determine what type of model suits the available data. As I have explained earlier, if you get the selection of the appropriate model right, 50% of our problem is solved. But that selection depends upon many factors: first, the data in hand; second, the solution methodology available; and third, the statistical properties of the data in hand. Suppose I have a long time series; then any model, whether deterministic, stochastic, fuzzy, or a genetic algorithm, will give a fairly good result, because whatever natural stochasticity is present in the time series will be captured when the model is run over a long record.
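To make the LP idea concrete, here is a minimal sketch of a cropping-pattern LP, where the area under each crop is the decision variable and the irrigation intensity comes out as a result. All figures (benefits, water use, resource limits) are invented for illustration and are not the Sriram Sagar data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 crops, net benefit per hectare and water use per hectare.
benefit = np.array([30000.0, 25000.0, 18000.0])   # assumed Rs/ha
water   = np.array([12.0, 8.0, 5.0])              # assumed water units/ha
land_available  = 1000.0                          # ha
water_available = 9000.0                          # same units as `water` * area

# linprog minimizes, so we negate the benefit coefficients to maximize.
res = linprog(
    c=-benefit,
    A_ub=np.array([np.ones(3), water]),           # total land, total water
    b_ub=np.array([land_available, water_available]),
    bounds=[(0, None)] * 3,
    method="highs",
)
area = res.x                                       # optimal area under each crop
# Irrigation intensity (%) is an OUTPUT of the LP, not an input:
intensity = area.sum() / land_available * 100.0
```

For these assumed numbers the water constraint binds, so the model mixes the two most valuable crops rather than planting only the highest-benefit one.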
It is when you have a very short length of data that we try a number of different models. Moreover, in hydrology we have seen so many models. In hydrology, we first have to do a data analysis: determine the conventional statistical properties, mean, standard deviation, skewness, kurtosis, and the correlation, then determine which distribution your data follows, whether your data is stationary or non-stationary, and whether it follows any trend. If your data does not follow any trend, the conventional AR or ARMA models will not work well; if you get an R-squared of even 0.5 or 0.6 there, that is very good. As far as linearity and non-linearity are concerned, AR, ARMA, and the stochastic linear programming models are all linear. People have tried stochastic linear programming, and if you get even 0.6 or 0.7 as your R-squared, that is considered a very good model. Whereas the newer soft computing techniques like ANN and GP are highly non-linear models; that is why with them we try to get R-squared values of 0.9, 0.95, 0.99. So how do you determine the relationship between the various variables within a data series? By trial and error. First try to fit a linear relationship, y = mx + c if it is a single variable, and then find out the R-squared against the observed values. If your R-squared is very good, around 0.9, then a linear model is sufficient; if not, you have to go in for non-linear models. So this is the procedure: for a given data series, whether on the hydrological side or the water resources side, you have to do a data analysis first. As for the selection between linear and non-linear, once you start working, with experience you will come to know what type of model to select. Even nowadays, the optimal cropping pattern is treated as a linear programming model.
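The trial-and-error test described above can be sketched in a few lines: fit y = mx + c, compute R-squared against the observed series, and apply the lecture's rough threshold of about 0.9. The data here is synthetic, generated only to exercise the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # synthetic, nearly linear series

# Step 1: fit the linear relationship y = m*x + c.
m, c = np.polyfit(x, y, 1)
y_hat = m * x + c

# Step 2: coefficient of determination R^2 against the observed values.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Decision rule from the lecture: R^2 around 0.9 or above -> linear is enough,
# otherwise go in for a non-linear model (ANN, GP, ...).
model_type = "linear" if r2 >= 0.9 else "non-linear"
```

On real hydrological data you would first compute the conventional statistics (mean, standard deviation, skewness, kurtosis, correlation) before attempting any fit; only the fitting step is shown here.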
Whereas reservoir operation and time series analysis are treated as non-linear models, because of the longer length of data available there. Now let me return to the lecture. In the previous class we stopped at linear programming and fuzzy linear programming. People have compared not only the fuzzy linear programming model with the simple deterministic linear programming model; as I have explained, there are various other forms of linear programming, starting from chance constrained linear programming, through stochastic linear programming, to hierarchical linear programming, depending upon the complexity of the problem. In most water resources problems we treat the inflow as the stochastic variable, meaning its probability varies with time. Many modellers have tried to incorporate that stochasticity into the deterministic form, in a formulation called chance constrained linear programming. Sometimes we even call a deterministic linear programming model a chance constrained one, because we may run the model for the 75% dependable inflow values for an average condition, the 90% dependable inflow value for a drought condition, and the 50% dependable inflow value for an excess water availability condition. Suppose I run my deterministic linear programming model for various probabilities of inflow occurrence, starting from 50% and going on increasing through 55%, 60%, 70%; I get various scenarios, and this type of model is called a chance constrained linear programming model. Since I have many things to cover, I will just quickly go through this. The same linear programming model which I explained as LP and FLP, we have also modelled as a chance constrained linear programming model; I will quickly go through that and show you only the results.
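The core substitution in this style of chance constrained LP can be sketched as follows: a probabilistic constraint such as P(release ≤ inflow) ≥ α is replaced by its deterministic equivalent, release ≤ I_α, where I_α is the α-dependable inflow (the value equalled or exceeded with probability α). Each α level then feeds one deterministic LP run. The inflow record below is synthetic; only the scenario-generation step is shown, not the full LP.

```python
import numpy as np

rng = np.random.default_rng(1)
inflows = rng.gamma(shape=3.0, scale=100.0, size=40)  # synthetic annual inflows

def dependable_inflow(series, alpha):
    # alpha = 0.75 -> value exceeded 75% of the time = 25th percentile.
    return np.percentile(series, 100.0 * (1.0 - alpha))

# One inflow scenario per dependability level; each would drive one LP run.
scenarios = {a: dependable_inflow(inflows, a) for a in (0.50, 0.60, 0.75, 0.90)}
```

Note the monotonicity: a higher dependability level (drought planning) always gives a smaller usable inflow, which is why the family of runs traces out a curve of net benefit versus dependability.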
So here the objective is to derive the same optimal cropping pattern for the large scale irrigation system, and, very importantly, to compare the effect of annual versus monthly probability distributions of the inflow pattern. In the previous LP and FLP models, we never considered the probability distribution of inflow: in the deterministic LP I considered only one set of inflow values, and in the FLP I considered a range of inflow values. In the chance constrained program, I have to find out the distribution: what the 50%, 60%, 70% dependable inflows are. You might have heard how to determine the chance of occurrence, or probability, of an inflow: we use the conventional Weibull method, the same one used in rainfall-runoff analysis. For the annual inflow, find out the probability, or return period, and locate the 50%, 60%, and 75% probabilities of occurrence. Once we know in which year that probability of occurrence is realised, we take that year's inflows as the input to the deterministic LP model. But there is another way also: instead of taking the annual data to determine the probability of occurrence, you can take the monthly data. That is not quite fair, though, and many researchers have not distinguished between these two, that is, whether the probability of occurrence should be computed from annual data or from monthly data. So here we considered both, and applied the chance constrained linear programming model taking first the annual inflow and then the monthly inflow as the chance constrained probability. We applied it to the same Sriram Sagar project, because keeping the same system gives you a fair basis of comparison with the earlier linear programming model. So this is the same study area we have been seeing, and this is the time series plot of the inflow into the reservoir. You can see 32 years of data starting from 1970; each peak corresponds to one year. So nature has a trend.
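The conventional Weibull method mentioned here can be sketched directly: rank the annual inflows in descending order, assign each rank m the exceedance probability P = m/(n+1), and pick the year whose probability is closest to the target level. The 8-value record below is purely illustrative; the actual study used the 32-year Sriram Sagar record.

```python
import numpy as np

def weibull_dependable_year(annual_inflows, target_prob):
    """Weibull plotting position P = m / (n + 1), with m = 1 for the
    wettest year. Returns the index of the year whose exceedance
    probability is closest to target_prob, plus its inflow value."""
    flows = np.asarray(annual_inflows, dtype=float)
    n = flows.size
    order = np.argsort(-flows)                # indices in descending flow order
    p = np.arange(1, n + 1) / (n + 1.0)       # exceedance probability per rank
    k = np.argmin(np.abs(p - target_prob))    # rank closest to e.g. 0.75
    year_index = order[k]
    return year_index, flows[year_index]

# Hypothetical 8-year record (units arbitrary).
record = [820, 640, 910, 450, 700, 530, 880, 610]
idx, q75 = weibull_dependable_year(record, 0.75)
```

Once the 75%-dependable year is identified this way, that year's monthly inflows become the inflow input to the deterministic LP, as described above.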
The magnitude may vary, but the trend is there. These are some of the statistical properties. The cropping pattern we have already seen: there are three distinct seasons, Kharif and Rabi, plus annual crops, and we also have bi-seasonal crops, which start in one season and end in the other. This is the input data for our crop water requirement constraint, the net irrigation requirement. The LP model itself is the same one we have seen in detail. Where the difference comes in is the mass balance, the place where we incorporate the chance constraint on inflow: instead of taking the inflow I_t as a single set of monthly values, I consider various probabilities of inflow, so for each probability level I have a scenario. These are the overflow constraints. We solved this using the same revised simplex method. There are 99 variables, 156 parameters, and 141 constraints; since the unknowns are only 99 against 141 constraints, we can easily solve the problem mathematically. As I explained, we considered two probability distributions: one based on annual inflow and one based on monthly inflow. For each of the two, we developed 9 dependable inflow levels from 50% to 90%, so we have 18 models in total, 18 chance constrained linear programming models. Now, here is the key point: if I take the annual inflow probability, identify the year of, say, 75% dependability, and take that year's total, it will never equal the sum obtained by taking the 75% dependable value of each month separately and adding them up. That is the nature of the data. Yet many researchers have worked both ways without distinguishing which one should be considered.
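The mismatch between the two routes is easy to demonstrate numerically: take a record of monthly inflows, compute the 75%-dependable value of the annual totals, and compare it with the sum of the twelve monthly 75%-dependable values. The record below is synthetic, and the sign and size of the gap depend on the correlation structure of the data; the only general claim, as in the lecture, is that the two totals do not coincide.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic record: 32 years x 12 months of inflow (units arbitrary).
monthly = rng.gamma(shape=2.0, scale=50.0, size=(32, 12))

def dependable(series, alpha=0.75):
    # 75%-dependable value = exceeded in 75% of the record = 25th percentile.
    return np.percentile(series, 100.0 * (1.0 - alpha))

# Route 1: annual probability -- dependable value of the yearly totals.
annual_dependable = dependable(monthly.sum(axis=1))

# Route 2: monthly probability -- dependable value of each month, then summed.
monthly_dependable_sum = sum(dependable(monthly[:, m]) for m in range(12))

# The two routes give different water availabilities; the modeller must
# choose one, and the lecture argues for the annual route.
gap = annual_dependable - monthly_dependable_sum
```

Because wet and dry months do not line up year after year, treating every month at its own dependability level builds a composite year that never actually occurs in the record.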
So, if I consider the annual probability, this is my quantity of water available for solving the problem; if I consider the monthly probability distribution, this is the quantity. In nature we have to follow only the annual probability, not the monthly, because the stochasticity from one month to the next is linked: if you have more rainfall in the Kharif season you will have less rainfall in the Rabi season, and if you have more in Rabi you will have less in Kharif. I cannot assume that every season will simultaneously have its major rainfall or its minor rainfall. That means, if I wanted to show the authorities that the system is working better, I could present the monthly probability distribution, but that is not correct; as modellers we should not do that, even though the monthly results look very good for display. This is the statistical relationship for the evaporation constraint, which we have seen earlier. If I run those models, 9 plus 9, 18 models segregated into monthly and annual, this is the net benefit for the various inflow levels: as the dependable inflow percentage increases, the available quantity of water decreases. The trend is the same for both, but there is a variation in the net benefit: the monthly model shows more benefit because its quantity of water is larger. Similarly for the irrigation intensities: apart from one outlier the trend is the same, and the irrigation intensity is lower when the annual inflow probability is used. This is the cropping pattern under each canal for the various monthly models; everywhere the irrigation intensity is 150% with the monthly model. These are the optimal cropping patterns for canal 1, for the various probability levels, almost the same throughout; only maize and groundnut vary.
This is for the second canal; of the 99 variables there are 33 in each of the three canals. This is the optimal cropping pattern for the third canal. And these are the releases. Looking at them, the monthly-model releases appear better than the annual ones, particularly during the Rabi season; that is why more area comes under that season. But this command area actually lies under, and enjoys, the south-west monsoon, that is, the Kharif rainfall. When I use the monthly probability distribution, the model says that in Rabi also you have higher inflow, and that gives a wrong picture in the modelling. These are the releases for each canal, the second canal and the third canal, and this is the evaporation loss. If I consider the annual model, this is what the reality is: in reality there is no smooth relationship between the evaporation of one month and the next. That can be captured only when the annual inflow is used as the probability distribution. Whereas if I consider the monthly model, it looks like a smooth paintbrush curve, which never occurs in reality. Similarly for the rule curves. A rule curve gives the volume of water to be released under the various monthly or annual inflow probabilities. If I took the monthly inflow probabilities, I would not even need to operate my reservoir; it would automatically take care of itself. Whereas in reality we have to look into operational rule curves to decide the quantity to release in each month. So I have compared the results obtained with the monthly probability distribution against those with the annual one. With the annual distribution the net benefit is 1.9 crores; with the monthly it is 1.72 crores. For the same probability level, the irrigation intensity is higher for annual and lower for monthly. The annual releases do not show any particular trend, whereas the monthly curve shows a beautiful trend; a policy maker or a politician will say this curve is better, implement it.
But it is not possible to implement. So, as a conclusion from this study, the monthly probability distribution results in a better-looking presentation, but in reality it will never occur. Suppose the 75% dependable inflow has occurred in the Kharif season; then in the Rabi season you will have much less, perhaps only the 25% level, and even within Kharif you will never have the 75% dependable inflow in every month. The stochasticity of inflow from one month to another occurs within the planning period. Hence it can be concluded that the optimization should be solved with the various inflow levels taken from the probability distribution of the planning time period. For us the year is the planning period, so I have to consider the probability of the planning year rather than of the months within it, even though the monthly version gives results that look good. And when we compared this with the FLP model, we saw that to get this result I had to run my model at least 18 times, whereas a better result was obtained by running the FLP model only once. That is the advantage of the FLP model: I need not do the mundane job of running my software n number of times, collecting a handful of results, and then selecting which result to implement. Instead, the soft computing technique gives me a better result with the same type of input, by slightly modifying the method of solution.
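The "one FLP run instead of 18 LP runs" idea can be sketched with Zimmermann's max-lambda formulation, a standard way of posing a fuzzy LP (the lecture's exact formulation may differ). The inflow range spans the dependable levels that the 18 CCLP runs enumerated one by one, and a single LP over (x, lambda) finds the best compromise. All numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

benefit = np.array([30000.0, 25000.0])   # assumed Rs/ha for two crops
water   = np.array([12.0, 8.0])          # assumed water units/ha
land    = 1000.0                         # ha

# Fuzzy resource: water availability ranging from a drought-level to a
# wet-level value (stands in for the 90%- to 50%-dependable inflows).
w_lo, w_hi = 7000.0, 10000.0
# Fuzzy goal: benefit aspiration between pessimistic and optimistic levels.
z_lo, z_hi = 20_000_000.0, 28_000_000.0

# Decision variables: x1, x2, lam. Maximize lam (overall satisfaction).
c = np.array([0.0, 0.0, -1.0])
A_ub = np.array([
    [-benefit[0], -benefit[1], z_hi - z_lo],  # benefit >= z_lo + lam*(z_hi - z_lo)
    [ water[0],    water[1],   w_hi - w_lo],  # water   <= w_hi - lam*(w_hi - w_lo)
    [ 1.0,         1.0,        0.0        ],  # land    <= land
])
b_ub = np.array([-z_lo, w_hi, land])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0.0, 1.0)], method="highs")
lam = res.x[2]   # degree to which goal and resource memberships are jointly met
```

The single optimal lambda replaces the sweep over dependability levels: instead of picking one of 18 scenario solutions by hand, the membership functions encode the whole range and the solver locates the compromise directly.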