Good afternoon to all of you. Today my presentation is an introduction to evolutionary computing: the latest evolutionary computing techniques that are available, and one or two applications to water resources problems. This is a brief outline of today's presentation: a brief introduction to mathematical modelling and the conventional tools that are available, then evolutionary computing techniques — genetic algorithms, particle swarm optimization and ant colony optimization. I will briefly touch on all these algorithms, then present one or two case studies, and finally conclusions. Coming to mathematical models: we represent whatever problem we have in the real world in terms of a set of equations. It may be a simulation model, and we often also need to solve it through optimization. In simulation, we represent the physical system taking place in the real world as a set of equations, and then for a given input we find what the output will be. In optimization, we try to find the best alternative: a wide number of alternatives may be available, and through optimization we find the best possible solutions. It may be a minimization or a maximization problem, single-objective or multi-objective, or a combination of these.
The most traditional and conventional techniques are calculus-based techniques. These are mainly analytical: given a function y = f(x), we find its derivative, equate it to zero, and find the stationary point. We then check the second derivative at that point: if it is negative, a maximum occurs at that solution; if it is positive, we get a minimum. These conventional analytical methods, which we call calculus-based techniques, work perfectly for simple problems where the derivative exists and can be obtained directly from the objective function, but most of the time it may not be possible to simplify the problem to that extent.
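The calculus-based procedure just described can be sketched in a few lines. The objective f(x) = x² − 4x + 5 is my own illustrative example, not from the lecture; its derivative and second derivative are written out by hand, exactly as the analytical method requires.

```python
# Calculus-based (analytical) optimization sketch for the hypothetical
# objective f(x) = x**2 - 4*x + 5.
# f'(x) = 2*x - 4, so the stationary point is x = 2;
# f''(x) = 2 > 0, so the second-derivative test says it is a minimum.

def f(x):
    return x**2 - 4*x + 5

def f_prime(x):
    return 2*x - 4          # derivative obtained analytically

def f_double_prime(x):
    return 2.0              # constant second derivative

x_star = 2.0                              # root of f'(x) = 0
assert abs(f_prime(x_star)) < 1e-12       # stationary point check
assert f_double_prime(x_star) > 0         # positive => minimum
print(x_star, f(x_star))                  # minimum at x = 2, f(2) = 1
```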
Another conventional technique is the gradient-based technique, which we can also call hill climbing. Given the function y = f(x), we pick an initial point x0 — it may be at a specified location, or whatever starting point we choose. (We are talking here about single-objective functions without constraints; if we have constraints, we will see later how to deal with them.) After computing the gradient of the objective function, at each iteration we move the search in that direction: x1 = x0 + α·f'(x0), where f'(x0) is the gradient of the objective function. In this way the search moves towards the optimal point — a minimum point for a minimization problem, a peak point for maximization. If the function has only one global minimum or maximum — only one peak — it is possible to find that point; but the function may have multiple peaks, and then we may not get the global optimum.
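The hill-climbing update x1 = x0 + α·f'(x0) can be sketched as below, maximizing a single-peak function of my own choosing (the lecture gives no specific function here); α is an assumed step size.

```python
# Gradient-ascent (hill-climbing) sketch: repeatedly apply
# x1 = x0 + alpha * f'(x0) to maximize f(x) = -(x - 3)**2,
# a hypothetical single-peak objective with its peak at x = 3.

def f_prime(x):
    return -2.0 * (x - 3.0)          # gradient of f(x) = -(x - 3)**2

x = 0.0        # initial point x0
alpha = 0.1    # step size (illustrative choice)
for _ in range(200):
    x = x + alpha * f_prime(x)       # move in the gradient direction

print(round(x, 6))                   # converges to the peak at x = 3
```

With a single peak this always finds the optimum; with multiple peaks the same loop would stall at whichever local peak the starting point leads to, which is exactly the limitation noted above.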
So these kinds of techniques have such problems. The other category is enumerative techniques, where we evaluate each and every possible alternative — as in dynamic programming, where we discretize each state variable into a number of discrete states, evaluate every point and combination of points, and finally identify a particular solution that we call the global optimum. But discretization brings its own problems, and for a large-scale problem — where we may have more than 7 or 10 state variables — it may not be possible to evaluate all the alternatives. So these techniques also have problems from a computational point of view. Another class is random search techniques, which lead to a number of recent evolutionary computation techniques. There, we start from randomly chosen points in the decision space and proceed over a number of iterations — or generations, as we call them in evolutionary computation. These techniques do not require the derivative of the objective function; the objective function itself is enough, and we do not need to compute its derivative at each step. Basically, evolutionary computation techniques use a set of points — a population of points.
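The enumerative idea — discretize the variable and evaluate every discrete state — can be sketched as follows; the objective and the grid resolution are my own illustrative choices.

```python
# Enumerative (exhaustive grid-search) sketch: discretize the decision
# variable into discrete states and evaluate every one, as in dynamic
# programming. The objective f(x) = -(x - 1.5)**2 is illustrative.

def f(x):
    return -(x - 1.5)**2

# discretize x in [0, 3] into 301 states of width 0.01
states = [i * 0.01 for i in range(301)]
best_x = max(states, key=f)      # evaluate every alternative
print(round(best_x, 2))          # 1.5, the discrete global optimum
```

The cost here is one evaluation per state; with many state variables the number of combinations grows exponentially, which is the computational problem noted above.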
So in the initialization itself we choose the set of solutions such that it is widely distributed: we select points over a wide spectrum of the decision space, or we initialize them randomly, and then we search for the optimal solution. That way it is possible to find it even if the function has multiple peaks — there is a chance to locate the global minimum or global maximum. This slide gives a brief classification of the search techniques. As I said, the first are the calculus-based techniques, then the enumerative techniques such as dynamic programming, and in the middle portion we have the guided random search techniques, where we incorporate whatever knowledge or heuristics we have about the problem or the model into the random search — that knowledge is what guides it. These can again be classified into evolutionary algorithms, swarm intelligence, and so on. Evolutionary algorithms are popularly known as evolutionary techniques, but in a broad sense they started in the early 1960s, around 1959-60, with evolution strategies and evolutionary programming; these and genetic algorithms are the basic algorithms developed in the 60s and 70s. Later on, many variations of genetic algorithms were developed, and a lot of research is still going on to find the best possible heuristics.
Another field of recent search techniques is swarm intelligence, which is also inspired by what we observe in the real world: the cooperative group intelligence of a swarm is modelled through a set of equations or suitable heuristics and then used as an optimization algorithm. Selecting an optimization technique to suit a particular problem is not straightforward; it needs expertise and depends on the type of problem. The choice of optimization strategy is crucial for a successful solution, both in the context of solution quality and in the context of how much time it takes, and we have to consider both while selecting the technique. Here I have shown many of the factors that may influence this choice: the type of decision variables; the type of objective function — concave or convex, differentiable or non-differentiable; the type of constraints — whether it is a constrained or unconstrained problem; the shape of the decision space; the number of decision variables; the cost (in time) of each simulation; whether the constraints and objective functions are linear or non-linear; the availability of derivatives; and whether the function has a single peak or multiple peaks. All these things we can take care of while selecting a suitable optimization technique.
Coming to the topic of evolutionary computing techniques: the motivation is to learn from nature — natural behavior is modelled and used as a computing tool. The famous principle is what Darwin proposed in 1859 in his seminal work, On the Origin of Species: natural selection, the survival-of-the-fittest concept. That is the basic principle used in all these evolutionary computing techniques, though later on it was modified or adapted in whatever way was convenient for solving a particular problem. Evolutionary computing developed in various stages starting from the 1960s; in the 70s the genetic algorithm was proposed by John Holland and his students, De Jong and other researchers. As I said, in the 1960s and 70s the field was basically classified into evolution strategies, evolutionary programming and genetic algorithms. The evolution strategy is a simplified version of evolutionary computing: it uses only a mutation operator and a population of only two points at a time, which are manipulated over a number of generations until they result in an optimal solution; it is basically used for real-valued parameter optimization. The other technique, evolutionary programming, is also used in a machine-learning context and was proposed in 1966 by Fogel, Owens and Walsh. It also uses only a mutation operator, whereas genetic algorithms have both crossover and mutation operators, as we will see in the next few slides, and various types of parameters depending on whether we use binary coding, real coding, or some other coding.
This is a standard version of an evolutionary algorithm, a procedure with only four or five steps. We start with random initialization of the population — however many members we want. Then we evaluate the population: that is the fitness function evaluation, based on whatever objective we have. Then we check the termination criterion; if it is not satisfied, we continue step by step until we reach the specified number of iterations or whatever termination criterion we have. In each step we apply the genetic operations — crossover and mutation — and the selection operator, which decides which members to carry to the next generation. All these things are done in this loop. This procedure is the same for all the algorithms I mentioned — evolution strategies, evolutionary programming, genetic algorithms and their variations, whether differential evolution, genetic programming or any other. As I said, genetic algorithms are basically search and optimization techniques inspired by natural selection, with survival of the fittest as the governing principle. The first step we need to take in a genetic algorithm is the representation of the problem.
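The initialize–evaluate–select–vary loop above can be sketched as below. Note this is a generic illustration, not the lecture's exact operators: it uses real-valued individuals, truncation-style selection and Gaussian mutation on a fitness function of my own choosing.

```python
# Standard evolutionary-algorithm loop sketch:
# initialize -> evaluate -> check termination -> select -> vary -> repeat.
import random

random.seed(0)

def fitness(x):
    return -x * x                      # maximize: optimum at x = 0

# random initialization of the population
pop = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):          # termination: fixed generation count
    scored = sorted(pop, key=fitness, reverse=True)   # evaluate population
    parents = scored[:10]                             # selection of fittest
    # variation: keep parents and add mutated copies (Gaussian mutation)
    pop = parents + [p + random.gauss(0, 0.5) for p in parents]

best = max(pop, key=fitness)
print(abs(best) < 1.0)                 # best member ends up near the optimum
```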
The representation may be binary coding, for which we have to select a number of parameters; we also have to select the mutation operator and the crossover operator. There are several variations of these operators, and we do not know exactly which one gives the best performance, but people evaluate them over a number of test functions or practical problems and then recommend the best possible alternatives for a given kind of problem. Let us first see the simplest type of representation, binary coding. We have a string: whatever decision variables we have — 1, 2, 10, 20 or however many — each decision variable is coded as a substring whose length we can select based on the accuracy required, and the total string is the combination of all these substrings. For example, 1 0 0 0 is one decision variable; here we have 4 decision variables, so 4 substrings, and together they form one string — we are encoding 4 parameters. We use only the digits 1 and 0, which is binary coding, as most of you know, and decoding is simply positional: each bit is multiplied by 2 raised to the power of its position in the substring.
For the substring 1 0 0 0 with n = 4 bits, the leading 1 contributes 1 × 2^(n−1) = 1 × 2³ = 8, and the remaining bits contribute 0 × 2² + 0 × 2¹ + 0 × 2⁰ = 0, so the decoded value is 8. Similarly we can decode parameter 2 and the rest. For initializing the population, as I said, we have to select the initial population in such a way that all its members are widely distributed over the entire search space. This may require knowing the decision space — the range of each decision variable — and based on that we can generate the initial population. The fitness function is the objective function. If we are converting a constrained problem into an unconstrained one, we may add penalty terms to the fitness function; for an unconstrained problem, whatever objective function we have is the fitness function. As I said, the function takes as input an individual — the chromosome, which for binary coding is the binary representation. After decoding, we give the input to the objective function, which returns a numerical value, and we use that value as the fitness to evaluate whether a solution is good or bad, and whether it is fit to move to the next iteration or generation. These values are used in the selection operation. Selection follows the survival-of-the-fittest concept of evolutionary computing: we try to select the best individuals while also trying to keep a wide spectrum of solutions.
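The positional decoding just described — each bit times 2 to the power of its place — can be sketched directly; the second example string is my own.

```python
# Binary decoding sketch: a substring of bits is decoded by multiplying
# each bit by 2 raised to the power of its position.

def decode(bits):
    """Decode a binary substring (list of 0/1, most significant first)."""
    n = len(bits)
    return sum(b * 2**(n - 1 - i) for i, b in enumerate(bits))

print(decode([1, 0, 0, 0]))   # 1 * 2**3 = 8, as in the slide's example
print(decode([1, 1, 0, 1]))   # 8 + 4 + 1 = 13
```

A chromosome of several decision variables is just the concatenation of such substrings, decoded one substring at a time before being passed to the fitness function.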
That means we do not always select only the best solutions — that may lead to convergence to local optima — so we also incorporate mechanisms to avoid it. Based on the fitness function, we choose a selection criterion for moving members to the next generation. There are several variations of the selection operator. In proportional selection, a member with better (higher) fitness has a higher chance of being selected for the next generation, while a member with a worse fitness has a lower chance. Then there is rank-based selection, where we rank the population: in the proportional case the fitness magnitude itself determines the chance of selecting a member, but in rank-based selection we use only the rank as the numerical value — the best member is ranked 1, the next 2, 3, and so on, down to the worst member, whose rank equals the population size. Rank-based selection has a problem similar to proportional selection: the fittest members have higher chances of being selected, and sometimes most of them are carried into the next generation, so there is a chance of losing a wide spectrum of solutions. The other method is tournament selection, where we divide the total population into a number of subgroups — each subgroup is a tournament — and within each subgroup we select the best solution for the next iteration or generation.
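The two main schemes above can be sketched as follows, assuming maximization of a positive fitness; the population values and fitness function are taken from the Goldberg-style x² example used later in the lecture, and the tournament size k is an illustrative parameter.

```python
# Selection-operator sketches: proportional (roulette-wheel) selection
# and tournament selection.
import random

random.seed(1)

def roulette(pop, fitness):
    """Pick one member with probability proportional to its fitness."""
    total = sum(fitness(p) for p in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for p in pop:
        acc += fitness(p)
        if acc >= r:
            return p
    return pop[-1]

def tournament(pop, fitness, k=3):
    """Pick the best member of a random subgroup (tournament) of size k."""
    return max(random.sample(pop, k), key=fitness)

pop = [13, 24, 8, 19]              # decoded members (Goldberg's example)
fit = lambda x: x * x              # fitness f(x) = x**2
picked = roulette(pop, fit)        # stochastic: fitter members more likely
print(tournament(pop, fit, k=4))   # k = len(pop), so this is simply 24
```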
This is the basic one-point crossover operator. The slide shows two solutions — two parent chromosomes — and what the children look like if we cross at a chosen point: in the zeros and ones, shown in black and white, we can see that each child carries some traits of one parent and some of the other. That means there is a possibility of locating the optimal solution at a location different from the solutions of the current iteration — in the next iteration it may be different. After performing the crossover operator we perform the mutation operator. Crossover takes place according to a crossover probability — if we set it to 0.7 or 0.8, there is a chance that some members do not change at all, and that is where mutation has its role. Mutation alters the solution: in this simple bitwise mutation operator, for each gene we effectively toss a coin to decide whether to mutate it or not. Here the parent is all ones; wherever mutation is performed, we flip the bit — 1 to 0 or 0 to 1 — and wherever it is not, the bit remains the same. Again, mutation is performed according to a mutation probability.
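The two operators just described can be sketched as below; the parent strings and crossover point follow the Goldberg example used later in the lecture (01101 and 11000 crossed after bit 4), and the mutation probabilities are illustrative.

```python
# One-point crossover and bitwise (bit-flip) mutation sketches.
import random

def one_point_crossover(p1, p2, point):
    """Swap the tails of the two parents after the crossover point."""
    c1 = p1[:point] + p2[point:]
    c2 = p2[:point] + p1[point:]
    return c1, c2

def bitwise_mutation(bits, pm):
    """Flip each bit independently with mutation probability pm."""
    return [1 - b if random.random() < pm else b for b in bits]

c1, c2 = one_point_crossover([0, 1, 1, 0, 1], [1, 1, 0, 0, 0], point=4)
print(c1)                                  # [0, 1, 1, 0, 0]  (decodes to 12)
print(c2)                                  # [1, 1, 0, 0, 1]  (decodes to 25)
print(bitwise_mutation([1, 1, 1, 1, 1], pm=1.0))   # every bit flips to 0
```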
Coming back to the selection operator, as already discussed: basically the first is proportional selection, the roulette wheel. As we see here, a member with a higher fitness value occupies a larger region of the wheel and so has a higher chance of being selected, while the one with the lowest fitness obviously has the lowest probability of selection. Next is a simple GA flow chart showing all these operations: initialize the population, evaluate the members, check the termination criterion, then selection, crossover, mutation, and move to the next generation — the same standard evolutionary computing procedure we have seen, represented as a flow chart. The same operators are used: one-point crossover, bitwise mutation, roulette-wheel selection and random initialization. This example is taken from Goldberg (1989): maximization of f(x) = x² as the fitness function over a given decision range, and it shows how the selection operator works. First we evaluate the randomly initialized population: since we are using binary coding, we decode each string — based on the representation we have seen — to get the numerical values, substitute them into the fitness function, and obtain f(x) = x²; for example, 13² = 169. We compute this fitness for each member of the population. Then we find the selection probability of each member — its fitness divided by the total fitness of all members, which is its chance of being selected — and also the expected count.
The expected count is the member's fitness divided by the average fitness of all members. It is a numerical value; if it is greater than 0.5 we take one copy, if it is around 2 we take two — we round to whole numbers. You can see that the third member, which is the worst solution, has the lowest chance of being selected. Similarly for the crossover operator: the same one-point crossover we saw in the earlier slides takes place at a particular location, and based on it the offspring chromosome values change; we decode the new x values and evaluate the fitness in the same way. In the mutation operation we can see the bit values changing — here a 0 has mutated to a 1, while elsewhere mutation has not taken place and the bits remain the same; again we decode the values and find the fitness. Finally we have to check the termination criterion — whether the solution is acceptable or not. This criterion depends on the problem and on the way we want to find the optimal solution. If we know the global optimum, we can check directly whether it has been reached; otherwise we can select a maximum number of generations — a hundred or a thousand, based on the complexity of the problem — and if that number of generations is exceeded, we report the best solution found so far. Sometimes we can also use a stagnation criterion.
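The fitness, probability and expected-count computations from the Goldberg example can be sketched as below, using the book's four initial strings; note the probabilities printed here are rounded by Python, so they may differ in the last digit from a hand-rounded table.

```python
# Goldberg (1989) selection-table sketch for maximizing f(x) = x**2:
# decode each binary string, compute fitness, selection probability
# (f_i / sum f), and expected count (f_i / average f).

pop = ["01101", "11000", "01000", "10011"]   # Goldberg's initial strings

xs = [int(s, 2) for s in pop]                # decoded values
fits = [x * x for x in xs]                   # fitness f(x) = x**2
total = sum(fits)
avg = total / len(fits)

probs = [f / total for f in fits]            # chance of selection
expected = [f / avg for f in fits]           # expected copies

print(xs)                                    # [13, 24, 8, 19]
print(fits)                                  # [169, 576, 64, 361]
print([round(p, 2) for p in probs])          # [0.14, 0.49, 0.05, 0.31]
print([round(e, 2) for e in expected])       # [0.58, 1.97, 0.22, 1.23]
```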
With a stagnation criterion, we check whether the solution is improving: if it does not improve over a certain number of generations — say 50 — we terminate, and whatever best solution we had before those 50 generations is taken as the optimal solution. Most of the time it is possible to find near-global-optimal solutions, but it is also possible to be caught by a local optimum, so the global optimum is not guaranteed. There are alternative crossover operators: multi-point crossover, where crossover takes place at several positions, and uniform crossover, a similar kind of operation where at each and every bit position we choose whether to cross over. Again, evolution strategies and evolutionary programming, as we have seen, have no crossover operator, but genetic algorithms have both crossover and mutation. Basically, the crossover operator is useful for exploring a wide spectrum of solutions: where exploration is required — if the population is converging at one location or at a local optimum — crossover makes it possible to explore other, previously unexplored regions of the decision space. The mutation operator, on the other hand, tries to find solutions near the current ones in the decision space — it tries to fine-tune towards the optimum and thereby speed up the algorithm's performance. So exploitation is possible with the mutation operator and exploration with the crossover operator. Sometimes we also use elitism, because we do not want to lose the best
possible solutions found at each and every generation. So sometimes we simply copy the best solution of the current generation to the next generation — we do not disturb it, we carry it over as it is. This helps: while crossover and mutation are taking place, the best solution is preserved and the search for the global optimum continues in the next generation, and this idea can significantly improve the performance of genetic algorithms. The next topic of this presentation is swarm intelligence. Swarm intelligence algorithms are inspired by the cooperative group intelligence of a swarm. If we want to define it precisely, swarm intelligence is a term used to describe algorithms and distributed problem solvers inspired by the collective behavior of insect colonies and other animals. Here also we have information sharing among the swarm — that is the basic idea used in these algorithms, which are also basically optimization algorithms. First we will see the particle swarm optimization technique. As I said, it is inspired by studies of the social behavior of birds, insects and animals. In genetic algorithms the basic principle is survival of the fittest; here it is cooperative group intelligence — it has some flavour of natural evolution, but the basic principles are slightly different. So what do we mean by particle swarm optimization? PSO is a model of social information sharing which combines
a group — what we call a population — of solutions. At each iteration we find the best solution among this population; we also keep, for each member, the best solution that member has found in previous iterations. The latter is what we call private knowledge — what each individual member has gained — while public knowledge is what the entire group has gained over the past iterations. Both kinds of knowledge are combined and modelled as the PSO algorithm. It tries to find the global optimal solution, but since it is a population-based search technique, there is also the possibility of finding many suboptimal solutions. Basically, a particle in particle swarm optimization consists of a position — where it is located in the decision space — and a velocity, the direction in which it moves. It also tracks the best position with respect to the individual member and with respect to the global best — the private and public knowledge of the last slide. The velocity- and position-updating rules are what define this algorithm. The basic idea is that the particles move in the search space, refine their knowledge by interacting with the other members, and try to find the best possible solutions. In the velocity- and position-updating rules there are three components. The first is the inertial part: the direction in which the particle was moving in the last iteration, which maintains the direction. The second component is the
private (cognitive) part, based on the best solution the individual member has found, which turns the velocity towards that member's own best position. The third component is the social part, which turns the velocity towards the public (global) best. So the velocity update is v_new = ω·v_old + c1·r1·(p_best − x) + c2·r2·(g_best − x). Here v_old is the velocity at the previous iteration, and ω is the inertia parameter, which controls how much weight we give to the previous iteration's velocity direction; c1 and c2 are constant parameters — the cognition parameter and the social parameter — which control the weights given to the two components; r1 and r2 are random numbers, generated in the range 0 to 1; p_best is the best position of the respective particle, and g_best is the global best position. The position update is x_new = x_old + v_new: to the position from the previous iteration we add the new velocity to get the new position. Strictly speaking, position has units of distance while velocity is distance per time, but since we take the time step as one, the dimensions work out. So a good balance of exploration and exploitation is possible by properly tuning these parameters.
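The update rules above can be sketched as a complete PSO loop on the example function used later in the lecture, f(x, y) = x·exp(−x² − y²). The parameter values (ω, c1, c2, swarm size, iteration count) are my own illustrative choices, not the lecture's.

```python
# Particle swarm optimization sketch:
#   v_new = w*v_old + c1*r1*(p_best - x) + c2*r2*(g_best - x)
#   x_new = x_old + v_new
import math
import random

random.seed(4)

def f(x, y):
    return x * math.exp(-x * x - y * y)      # maximize

n, iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognition, social

pos = [[random.uniform(-2, 2), random.uniform(-2, 3)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                  # private knowledge
gbest = max(pbest, key=lambda p: f(*p))[:]   # public knowledge

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]           # position update
        if f(*pos[i]) > f(*pbest[i]):
            pbest[i] = pos[i][:]
            if f(*pbest[i]) > f(*gbest):
                gbest = pbest[i][:]

# the analytic maximum of this function is about 0.4289 at (1/sqrt(2), 0)
print(round(f(*gbest), 2))
```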
The parameters ω, c1 and c2 thus have their role in properly exploring and exploiting the search space. This is the pseudo code of the algorithm. We generate an initial swarm of positions, the same as we have done for the other evolutionary computing techniques, and we also initialize the velocity directions, which we can choose as random quantities. Starting from that generation, we repeat, for a number of iterations, the PSO operations, that is, the velocity and position updating rules. In each iteration we also evaluate the fitness function, based on which we judge whether a solution is better or worse. We repeat this loop until a termination criterion is satisfied. The termination criteria are similar to what we have seen earlier: the maximum number of iterations has been reached, or the solution is not improving over a specified number of iterations; we can select any such termination criterion. Here is a simple example application of the algorithm: maximization of the function f(x, y) = x · exp(−x² − y²) over a specified feasible decision space, taken as −2 to 2 for the x variable and −2 to 3 for the y variable correspondingly. We initialize the population all over this decision space, and after 10 iterations we can see that most of the particles are moving towards the region that has the best solution. On the scale of the fitness function shown here, values grow from small towards the maximum shown in red. At iteration 20 you can see the swarm just converging towards the global optimum.
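Putting the pseudo code and the example together, a minimal PSO sketch for maximizing f(x, y) = x · exp(−x² − y²) might look as follows; the swarm size, iteration count and parameter values are assumptions for illustration, and a symmetric box from −2 to 2 is used here for both variables:

```python
import math
import random

def pso_maximize(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Basic PSO for maximization over box bounds [(lo, hi), ...]."""
    dim = len(bounds)
    # initialize the swarm all over the decision space, with random velocities
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[random.uniform(-1.0, 1.0) for _ in bounds] for _ in range(n_particles)]
    pbest = [list(x) for x in xs]          # private knowledge: own best positions
    pval = [f(x) for x in xs]
    g_i = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g_i]), pval[g_i]   # public knowledge: global best
    for _ in range(n_iter):                # repeat until termination criterion
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                # move, then clamp the position back into the decision space
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            fit = f(xs[i])                 # fitness evaluation
            if fit > pval[i]:              # update private knowledge
                pval[i], pbest[i] = fit, list(xs[i])
                if fit > gval:             # update public knowledge
                    gval, gbest = fit, list(xs[i])
    return gbest, gval

# the example function; its maximum lies near (1/sqrt(2), 0)
f = lambda p: p[0] * math.exp(-p[0] ** 2 - p[1] ** 2)
best, val = pso_maximize(f, [(-2.0, 2.0), (-2.0, 2.0)])
```

Run repeatedly, the swarm behaves as described on the slides: spread out at first, clustered near the peak after a few tens of iterations.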
Finally, all the particles converge towards the global optimal solution. Now, a case study. This is a classic application to a reservoir operation problem, a state-of-the-art benchmark: most studies have applied linear programming, dynamic programming and several variations of dynamic programming to it, and used it to test whether the techniques they developed are working properly or not. Here we have a hypothetical case study with four reservoirs. We have power production from power houses at all four reservoirs, and apart from power production we also have irrigation as another purpose of the system. We want to maximize the overall benefit from this four-reservoir system. The system dynamic equations are mass balance equations for all four reservoirs: the storage at period t + 1 equals the storage at time period t plus the inflows minus the releases, and we have this system dynamic equation, basically a mass balance equation, for each of the reservoirs. We also have the initial storages for all the reservoirs, which is what the initial storage vector represents, and we also impose that the end-of-horizon storage should reach specified values for the first, second, third and fourth reservoirs. The problem is subject to a number of constraints: the first four constraints bound the reservoir storages and the next four bound the reservoir releases, which should be within these limits.
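The mass balance equation S(t+1) = S(t) + I(t) − R(t) for a single reservoir can be simulated directly; the inflow and release numbers below are made up purely for illustration:

```python
def simulate_storage(s0, inflows, releases):
    """Mass balance: S(t+1) = S(t) + I(t) - R(t) for one reservoir."""
    storages = [s0]
    for i, r in zip(inflows, releases):
        storages.append(storages[-1] + i - r)
    return storages

# illustrative numbers: initial storage 100, four periods
traj = simulate_storage(100.0, inflows=[20, 30, 25, 15], releases=[10, 40, 20, 30])
# traj[-1] is the end-of-horizon storage that must meet the target constraint
```

In the four-reservoir system this bookkeeping is simply repeated per reservoir, with the releases as the decision variables.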
Within these limits we want to maximize the benefit from the releases made for power production and irrigation from all four reservoirs; the objective function measures which release pattern gives the best performance. It consists of the hydropower production benefit for the four reservoirs over 12 time periods, plus the benefit from the releases made for irrigation, which applies to a single reservoir here. Since it is a constrained problem, we can use a simple penalty function approach to convert it into an unconstrained problem and then use the resulting fitness function directly, although several other variations of constraint handling methods are also possible. The solution procedure is the same as shown in the pseudo code, but additionally a reservoir operation model is used for the fitness function evaluation: the code of the PSO algorithm is the same, but we also run a simulation model of the reservoir operation problem, and then we try to find the best possible solution using the particle swarm optimization algorithm. In this application of PSO there are 48 decision variables in total: each of the four reservoirs has a release for each of the 12 time periods. The solutions are compared among particle swarm optimization, the genetic algorithm, and a variation of the basic PSO algorithm that was developed by Nagesh Kumar and myself; we compared the performance of all these algorithms, and these are the release policies that were obtained, along with the performance comparison. The baseline is a dynamic programming variation given in Larson, which is the source of this hypothetical case study.
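The penalty function approach mentioned for converting the constrained problem into an unconstrained one can be sketched as follows; the quadratic penalty form and the coefficient are illustrative assumptions, not the exact formulation used in the study:

```python
def penalized_fitness(benefit, storages, s_min, s_max, penalty_coef=1000.0):
    """Subtract a quadratic penalty for each storage bound violation.

    The PSO (or GA) then maximizes this unconstrained fitness directly.
    """
    penalty = 0.0
    for s in storages:
        if s < s_min:
            penalty += (s_min - s) ** 2   # violation below the lower bound
        elif s > s_max:
            penalty += (s - s_max) ** 2   # violation above the upper bound
    return benefit - penalty_coef * penalty
```

A feasible solution is returned unchanged, while any bound violation is heavily punished, steering the swarm back into the feasible region.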
Another compared technique is a variation of discrete differential dynamic programming, which also found the best possible solution, but only after trying many runs: dynamic programming of this kind needs an initial solution as input, which has to be supplied somehow, and it may not find the global optimal solution every time; with a fortunate initialization, the results shown here were obtained. Then we have full dynamic programming, particle swarm optimization and the genetic algorithm. The basic PSO achieves a near-global-optimal solution with the number of function evaluations shown, the GA takes quite a high number of function evaluations, and the modified PSO (MPSO) takes fewer function evaluations compared to these two while giving the best, essentially global optimal, solution. The advantages of this algorithm, as I told you: it works only with objective function values, so we do not require gradient information or anything of that kind, and it may overcome local optimal solutions, which is an advantage over traditional optimization techniques. PSO uses only objective function information, and it is a kind of stochastic search algorithm that can search a complicated and uncertain area by using the exploration and exploitation mechanisms controlled by the PSO parameters and updating rules. We also see that it has faster convergence towards the global optimal solution compared to the genetic algorithm, since we incorporate some heuristics in this algorithm, which results in faster convergence. Still, for a complicated problem there can be a difficulty with this algorithm: it may be caught by local optima or near-global-optimal solutions. Okay, the other technique that I want to present to you is the ant colony optimization
algorithm. Okay, this algorithm is also classified under the swarm intelligence techniques, and again it is a population-based search technique, for the solution of complex combinatorial discrete optimization problems. It is best suited for discrete problems, such as the traveling salesman problem or assignment problems, that is, problems with discrete variables. The basic concept of ACO is the cooperative behavior of real ants in their search for food; similar behavior is modeled through mathematical equations. The principle we can see here: the ants start at the nest h, there is an obstruction in between, and there is food at the point f. At iteration t = 0 they move randomly in both directions. After some iterations, through this path-tracking mechanism, the shortest path wins: ants on the shortest path will obviously reach the food source faster and then come back quickly, and we can see that happening over the iterations. These ants deposit a trail of a chemical called pheromone as they move along their path, and that mechanism has been used in ant colony optimization: as more and more ants move along a path, more and more of the chemical is deposited on it, so the next time other ants will follow that path, or there is a higher chance that they follow it. Those principles have been modeled in ant colony optimization. Applying it is not a straightforward thing: it requires some mechanisms to adapt it to solve any given problem, be it a water resources problem, an industrial engineering problem, or whatever it is. Here
what we need is an appropriate representation of the problem as a graph or a similar structure that can be easily traversed by the ants, so that the same path-tracking idea applies; we need to have some such representation. Next comes the selection of heuristic information: whatever problem domain knowledge we have, we need to give that information to the algorithm as heuristic information, and we will see the mathematics in the next slides. We also require a fitness function, as is clear from the other evolutionary computing and optimization methods we have seen, which we obviously get from the problem at hand, and the selection of proper pheromone updating rules, which ACO modeling requires in any case. Here we modeled it for a reservoir operation problem, where the problem is approached by considering a time series of inflows and classifying the reservoir volume into several intervals: if the storage ranges, say, from 0 to 1000, we can classify it into a number of discrete intervals, and that is what the storage volume classes represent. We operate over a number of time periods, which may be yearly, seasonal, monthly or daily operation, whatever the time period is. So we classify the reservoir volume into several intervals and decide the reservoir release for each period with respect to an optimality criterion, the fitness or objective function that we have. The representation is similar to standard ACO: the links between initial and final storage volumes at different periods form a graph which represents the system and determines the release at each period.
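The graph representation can be made concrete with the mass balance: an ant's move from one storage class to another at a given period implies a release. A sketch, with a hypothetical uniform class width:

```python
def release_from_transition(s_class_now, s_class_next, class_volume, inflow):
    """Release implied by moving from one storage class to another.

    From the mass balance S(t+1) = S(t) + I(t) - R(t):
        R(t) = S(t) - S(t+1) + I(t)
    Storage classes are converted to volumes via an assumed uniform class width.
    """
    s_now = s_class_now * class_volume
    s_next = s_class_next * class_volume
    return s_now - s_next + inflow

# ant moves from class 4 to class 2, class width 100, inflow 50
r = release_from_transition(4, 2, 100.0, 50.0)   # 400 - 200 + 50 = 250
```

Losses could be subtracted in the same equation; they are omitted here for brevity.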
That means, if the ant is at time period t, to move to the next step it may move to any storage class or discrete value; the final storage minus the initial storage gives the change in storage, which, together with the inflows and losses, tells how much volume was taken for release. That is the representation of the problem. The other principle we need is the path selection model: if the ant is at storage class 2, it can move to any storage class at the next time step, and how it selects one is modeled by this rule. Basically the argument is a maximum over l belonging to the allowed set, where "allowed" means that, based on our heuristic knowledge, we can restrict the choice to a specified feasible region, the allowable storage classes. Here η is the heuristic information, what we know about the problem: in our case, the deviation between the release and the demand, which we want to minimize, is used as the heuristic information for this algorithm, as I will show on the next slide. And τ is the pheromone deposit, what we call the pheromone trail value. In the pheromone trail updating rule, the value at the next time step is obtained from the value at the current time step plus a change in the pheromone trail value that depends on the path tracking, that is, on how many ants moved along that path, and on the solution quality: the change in pheromone value is given by a function of G, where G is basically the fitness of the globally best solution.
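The pheromone trail update just described can be sketched as follows; the evaporation term, the rate ρ and the fitness-proportional deposit are standard ACO assumptions added for illustration, not necessarily the exact rule used in the study:

```python
def update_pheromone(tau, best_path, best_fitness, rho=0.1, q=1.0):
    """Global pheromone update: evaporate on all arcs, deposit on the best path.

    tau maps arcs (i, j) of the storage-class graph to pheromone trail values;
    the deposit (delta tau) is taken proportional to the best solution quality.
    """
    for arc in tau:
        tau[arc] *= (1.0 - rho)            # evaporation on every arc
    deposit = q * best_fitness             # delta tau from the solution quality G
    for arc in best_path:
        tau[arc] = tau.get(arc, 0.0) + deposit
    return tau
```

Depositing only on the best path corresponds to the global updating variant; a local variant would instead adjust arcs as each ant traverses them.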
Whichever solution has the better fitness among the population, based on that best solution we change this pheromone value. Sometimes very different variations of the pheromone trail updating rules are used: sometimes local updating, sometimes global updating, and we can use both principles together. Then we have the transition probability rule: if the condition is satisfied, where q0 is a pre-specified parameter and q is a random number that we generate, we test whether to select the next node with the greedy argmax function or with the probabilistic function; j is the node chosen based on this transition probability. Again we have the same terms, the pheromone trail value and the heuristic information for that particular path, normalized over all the possible paths, the allowable paths, and that is what gives the transition probability; the selection based on this probability follows the standard procedure. As I told you, this algorithm also has the standard steps: initialization of the population, and initialization of the pheromone and the other parameters required by the algorithm. Then we construct the solutions: based on the problem representation, we position each ant at a starting node, and for each ant we do the computations, that is, we compute the heuristic information and choose the next node by using the state transition rule we have seen on the previous slide, selecting either by the argmax rule or based on the transition probability, and we move to the next time step. We repeat this for all the ants and evaluate the fitness values.
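The state transition rule, greedy argmax selection when the random number is below the threshold q0 and roulette-wheel selection by the transition probability otherwise, can be sketched as follows; the exponents α and β and the data structures are illustration choices:

```python
import random

def choose_next(tau, eta, allowed, q0=0.9, alpha=1.0, beta=2.0):
    """ACO state transition: exploit (argmax) with probability q0, else explore."""
    weights = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in allowed}
    if random.random() <= q0:
        # exploitation: pick the node with the largest tau^alpha * eta^beta
        return max(allowed, key=lambda j: weights[j])
    # exploration: roulette-wheel selection by the transition probability
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for j in allowed:
        acc += weights[j]
        if acc >= r:
            return j
    return allowed[-1]      # guard against floating-point round-off
```

With q0 close to 1 the search is mostly greedy; lowering q0 gives more exploration of the allowable storage classes.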
After completing one cycle for every ant, we update the pheromone trail value based on the fitness function: we apply local updating of the pheromone trail if it is required, or only global updating if that is what is required, and we update the pheromone trail value by adding those terms to the previous expression, the equation we have seen. Finally we check the termination criterion, and if it is satisfied, whatever solution we have found is taken as the best possible solution. This was applied to a case study of the Hirakud reservoir in Orissa, which basically serves flood control, irrigation and power generation, and also drinking water supply; but that last purpose is a very small quantity compared to the reservoir storage, negligible, so it is neglected in the model. There are two power houses here, and the objective function is to minimize the deviation from the demands we have for power production; the other objectives, the irrigation and flood control purposes, are handled by putting them in as constraints, so the problem is converted into a single-objective optimization, and we solved it. Okay, we applied both the ant colony optimization and genetic algorithm models, and we have seen that ant colony optimization performs better for large-scale problems: for a short-term reservoir operation problem we find similar performance, but at a large scale, for planning purposes, we may need to derive the operating policies for the reservoir by optimizing over a large time horizon, say 10 years at a time. If these
are the possible inflows, how best can we utilize them across all these time periods? That is where we find that ant colony optimization gives better results. We can also adapt these evolutionary computing techniques to multi-objective optimization, where they optimize in a better way compared to any conventional modeling or optimization technique: being population-based search techniques, they offer some advantages when generating alternative Pareto optimal solutions. Basically, here we see a multi-objective problem where we want to maximize or minimize some number k of objectives within a restricted search space or decision space, possibly subject to a number of constraints, equality constraints or inequality constraints; we want to optimize all k objective functions together. We may not find a single solution that is the best possible solution; what we can do is find the best possible alternatives with respect to all these objective functions. The best principle proposed for this is Pareto optimality, proposed by Vilfredo Pareto a long time back, in the late nineteenth century. A Pareto optimal solution we can define as follows: a point x* belonging to the feasible search space is a Pareto optimal solution if there does not exist any feasible point x that dominates that particular solution; then we call it a non-dominated solution, and that is what a Pareto optimal solution is. Domination has to satisfy two conditions: the dominating solution should be at least as good as the other solution in every objective, and for at least one objective it should be strictly better. If no solution dominates a point, we call it a non-dominated solution.
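The two-condition dominance test just stated translates directly into code; a minimal sketch assuming all objectives are to be minimized:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed):
    a is at least as good as b in every objective and strictly better in one."""
    at_least_as_good = all(x <= y for x, y in zip(a, b))
    strictly_better = any(x < y for x, y in zip(a, b))
    return at_least_as_good and strictly_better
```

Note that two vectors can be mutually non-dominating, which is exactly why a multi-objective problem yields a front of solutions rather than a single optimum.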
The basic conventional approaches that have been used in the past are the weighted sum method, the constraint method, goal programming, and several other methods. If we have two objective functions, the set of Pareto optimal solutions forms what we call the Pareto optimal front. With the weighted sum approach, each time we give some weights to the first objective function and the second objective function and then find one solution at a time; if we want a large number of alternatives, we have to repeat this, changing the weights and finding the optimal solution, and we may get several points like this. But there are problems: the objective functions may be non-convex, and then we cannot determine which weights reach which part of the front; the objective space may also be disconnected or irregular, not as smooth as what we see here but with jagged portions, so it may not be possible to trace a smooth curve using these conventional techniques; and obviously this requires many simulation runs. As we have seen, evolutionary computing techniques are population-based search techniques which at each generation find several optimal or suboptimal solutions, and we can use those principles to generate a better Pareto optimal front: at each generation, in a single run, we try to find a number of solutions which are non-dominated solutions. That is what we do in multi-objective evolutionary algorithms (MOEAs). A MOEA deals simultaneously with multiple solutions, which is the advantage of a population of solutions, and it can find several trade-off solutions in a single run of the algorithm, instead of a series of separate runs as in the case of traditional optimization; that is what gives the flexibility.
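The weighted sum procedure, one optimization run per choice of weights, can be sketched as follows; a crude grid search stands in for the optimizer, and the two convex test objectives are illustrative assumptions:

```python
def weighted_sum_front(f1, f2, candidates, weights):
    """One solution per weight pair: minimize w1*f1 + w2*f2 over the candidates.

    A grid search stands in for the optimizer here; a MOEA would instead
    return many trade-off points from a single run.
    """
    front = []
    for w1, w2 in weights:
        best = min(candidates, key=lambda x: w1 * f1(x) + w2 * f2(x))
        front.append((f1(best), f2(best)))
    return front

# two convex objectives on [0, 2]: f1 = x^2, f2 = (x - 2)^2
xs = [i / 100.0 for i in range(201)]
front = weighted_sum_front(lambda x: x ** 2, lambda x: (x - 2) ** 2,
                           xs, [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)])
```

Each weight pair recovers one point of the trade-off curve; with a non-convex or disconnected front, some points would be unreachable for any choice of weights, which is exactly the limitation noted above.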
MOEAs are less susceptible to the structural features of the problem, such as a disconnected Pareto front or non-convex objective functions, which, as we have seen, are a problem for the conventional techniques; this gives some comfort that the model will find the solutions, that is, a better chance of finding the non-dominated solutions, and that gives confidence. MOEAs attempt to find solutions to extremely complex, time-consuming real-world applications: where it may not be possible to approximate the real-world processes with conventional modeling techniques, we can simulate the system with a set of equations, put that simulation into an evolutionary algorithm, and find solutions to these problems; that gives feasibility for such problems. MOEAs use little problem domain knowledge and can generate a good distribution of diverse solutions. MOEAs also have implicit parallelism: if it is a complicated or time-consuming problem, we can simultaneously use several computers to evaluate the fitness function of whatever simulation model we have, running it on several computers, and use the MOEA to find the multiple Pareto optimal solutions. Some of the state-of-the-art MOEAs are the Strength Pareto Evolutionary Algorithm, the Pareto Archived Evolution Strategy, the Non-dominated Sorting Genetic Algorithm, and we have also proposed some multi-objective particle swarm optimization algorithms; there are several such variations. So there is wide scope for research in applications of these MOEAs and in improving the performance of these multi-objective evolutionary algorithms, and they have wide applications in any field, be it the civil engineering field or mechanical engineering or
whatever field we have taken. So, with that, thank you.