Hello everyone, I am Mr. F. R. Sayed. I work as an assistant professor in the Department of Computer Science and Engineering at Walton Institute of Technology, Saulapur. The topic of today's lecture is dynamic programming: the general method. We are going to see what dynamic programming is, and at the end of the session the students will be able to describe the general working principle of dynamic programming. What is dynamic programming? It is one of the problem-solving approaches. The first important question is: why do we require dynamic programming? Let us consider a function for finding the nth term in the Fibonacci series. This is a function named fib(n), where n is a parameter. If the value of n passed to it is less than or equal to 1, it returns n. Otherwise it returns the sum of two recursive calls to the Fibonacci function, one with n minus 1 and one with n minus 2. Now consider the procedure call tree for the Fibonacci function when the value of n passed is 5. When fib(5) is called, it makes recursive calls to the Fibonacci function with n as 4 and with n as 3. Similarly, fib(4) calls itself recursively with n as 3 and then as 2, fib(3) makes calls to fib(2) and fib(1), and this goes on until the terminating condition occurs, meaning when the value of n becomes either 0 or 1, as we can see in the diagram. This is the recursive approach to the Fibonacci sequence problem. Now observe the highlighted part: fib(2) is actually called three times within one single call to fib(5). So fib(2) was calculated three times from scratch.
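The recursive function described above can be sketched in Python as follows (the name fib is from the lecture; the rest of the rendering is mine):

```python
def fib(n):
    # Terminating condition: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Otherwise, the sum of two recursive calls with n - 1 and n - 2
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```

Tracing fib(5) by hand reproduces the call tree from the lecture, with fib(2) evaluated three separate times.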
Every such recalculation of values leads to an exponential-time algorithm, and we are going to focus on the time required for the algorithm to run. Using dynamic programming, the already calculated values are reused, and the algorithm may take only O(n) time instead of exponential time. That is an enhancement to the recursive version of the Fibonacci sequence problem. This technique of saving values that have already been calculated and using them further is called memoization, which is a concept related to dynamic programming. Now, what is dynamic programming? As I have told you, dynamic programming is a problem-solving approach for algorithms. It is an algorithm design method used when the solution to a particular problem can be viewed as the result of a sequence of decisions. Meaning, there are a number of decisions to be taken, and dynamic programming focuses on which sequence of decisions will give us a proper, that is an optimal, result. An important point related to this is the principle of optimality, which we will see further on. Many problems that were solved using the greedy method can also be viewed like this, that is, as a sequence of decisions. We will see some examples. The first example is the knapsack problem. As we know, the knapsack has a maximum capacity m, and out of n elements it is to be decided which elements are added to the knapsack. That is what the x i values decide: each x i is either equal to 1 or equal to 0. The first decision is taken on x 1, then x 2, then x 3, and so on. What are these x i's? If x i is equal to 0, object i is not selected to be put into the knapsack; if x i is equal to 1, object i is going to be inserted into the knapsack.
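A minimal sketch of the memoized Fibonacci mentioned above (using a dictionary as the cache is my own choice, not something stated in the lecture):

```python
def fib_memo(n, cache=None):
    # Retain already-calculated values so each fib(k) is computed only once.
    if cache is None:
        cache = {}
    if n <= 1:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

print(fib_memo(50))  # 12586269025
```

The plain recursive version would take an impractically long time for n = 50, while the memoized version makes only O(n) distinct calls.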
Now, an optimal sequence of decisions maximizes the objective function, summation p i x i, where p i is the profit of object i and x i indicates whether object i is put into the knapsack or not. This objective function is to be maximized so as to get the maximum profit, subject to the constraint that summation w i x i should be less than or equal to m, where w i is the weight of object i. That is, the sum of the products w i x i over all the elements should not exceed the maximum knapsack capacity m, and each x i is either 0 or 1, as I have already said. Another example where we see a decision sequence is the shortest path problem. Consider a directed graph with n vertices, and suppose that the shortest path from vertex i to vertex j is to be found and the shortest distance is to be calculated. This can be considered as a sequence of decisions. The main task is first of all to find which is the second vertex in the shortest path, meaning the path that will give me the shortest distance; then similarly which is the third vertex, and so on, continuing until we reach the last vertex j. If this sequence of decisions is taken, it will yield the shortest path distance from vertex i to vertex j. An optimal sequence of decisions is the one that gives us a path of least length. Now, what could the different problem-solving approaches be? For some problems, an optimal sequence of decisions can be found by making only one decision at a time and never making an erroneous decision. This is something we have already seen in the greedy method.
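The knapsack decisions x i can be evaluated with a dynamic programming table. The following is only a sketch; the one-dimensional table layout and the example profits and weights are my own assumptions, not from the lecture:

```python
def knapsack(profits, weights, m):
    """Maximize sum(p_i * x_i) subject to sum(w_i * x_i) <= m, each x_i in {0, 1}."""
    n = len(profits)
    # dp[c] = best profit achievable with capacity c using the objects seen so far
    dp = [0] * (m + 1)
    for i in range(n):
        # Iterate capacities downward so each object is taken at most once (x_i <= 1)
        for c in range(m, weights[i] - 1, -1):
            dp[c] = max(dp[c], dp[c - weights[i]] + profits[i])
    return dp[m]

print(knapsack([1, 2, 5, 6], [2, 3, 4, 5], 8))  # 8
```

In the example, choosing objects 2 and 4 (weights 3 + 5 = 8, profits 2 + 6 = 8) fills the capacity exactly and gives the maximum profit.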
In the greedy method, only one decision sequence is generated, and based upon that sequence the decisions are taken so as to give the optimal solution. But for some problems, if it is not possible to make a single sequence of decisions that leads to an optimal solution, then we have the option of trying all possible decision sequences: we enumerate all the decision sequences and choose the best, that is the optimal, one out of them. This could increase the time and space requirements, since it takes a large amount of time and space to store all the decision sequences and then choose the optimal one. Now, what is the dynamic programming general method? It is an algorithm design method used when the solution to a particular problem can be viewed as the result of a sequence of decisions. Dynamic programming drastically reduces the amount of enumeration by eliminating those sequences which cannot be optimal. In dynamic programming, the optimal sequence of decisions is found by following the principle of optimality. The principle of optimality is to be followed by the sequence of decisions so as to yield maximum profit or to minimize the cost, in short, to get an optimal solution. Now, what is the principle of optimality? An optimal sequence of decisions has the property that whatever the initial state and decision may be, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision. Thus, in the greedy method, as we can see, only one decision sequence is generated, whereas in dynamic programming there could be many decision sequences generated, out of which the optimal one is chosen, and until the last step it is not clear which decision sequence will give us the optimal solution.
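The principle of optimality can be seen in the shortest path problem from the lecture: if the best path from vertex i to vertex j first moves to some vertex, the remaining part of the path must itself be a shortest path to j. A sketch of this recursive formulation, where the example graph and vertex names are my own assumptions:

```python
from functools import lru_cache

# Directed acyclic graph as an adjacency list: vertex -> list of (neighbor, edge weight).
graph = {
    'i': [('a', 1), ('b', 4)],
    'a': [('b', 2), ('j', 6)],
    'b': [('j', 1)],
    'j': [],
}

@lru_cache(maxsize=None)
def shortest(v):
    # Principle of optimality: the best path from v is one edge (v, u)
    # followed by an optimal path from u onward to j.
    if v == 'j':
        return 0
    return min((w + shortest(u) for u, w in graph[v]), default=float('inf'))

print(shortest('i'))  # 4
```

Here the optimal sequence of decisions is i -> a -> b -> j with length 1 + 2 + 1 = 4; any first decision leaves a subproblem that is solved optimally on its own, and sequences that cannot be optimal are pruned by the cached minimum.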
Some features of dynamic programming: although the total number of different decision sequences is exponential, dynamic programming algorithms often have polynomial complexity. One of the reasons is that the values of the solutions to the sub-problems are generally retained, so that if a value is required in the future it is reused rather than recalculated. Optimal solutions to the sub-problems are retained so as to avoid recomputing their values; this is the point which I have just explained. These tabulated values are used to recast the recursive equations into an iterative algorithm. Thus, it is an optimization over plain recursion. Next we see the comparison with the greedy method. First, the greedy method finds a feasible solution at every stage with the hope of finding a globally optimal solution, whereas dynamic programming first breaks the problem into a series of overlapping sub-problems. Second, the greedy method takes its decision in one go, whereas dynamic programming takes decisions at every stage. Third, the greedy method never reconsiders its choices, but dynamic programming may consider the previous states as well. And lastly, the greedy method works based on the greedy choice property, whereas dynamic programming works based on the principle of optimality. Now the students are expected to think and write the answers to the following questions. The first question: are the solutions to sub-problems in dynamic programming stored? And the second: how many decision sequences are generated in dynamic programming, one or many? Now pause the video and write your answers. Okay, the first question: are the values of solutions to the sub-problems in dynamic programming stored? Yes, those are stored for further use if required.
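Recasting the recursive Fibonacci equation into an iterative algorithm using tabulated values, as described above, can be sketched like this (the variable names are my own):

```python
def fib_iter(n):
    # Tabulate fib(0) .. fib(n) bottom-up; each value is computed exactly once,
    # giving O(n) time instead of the exponential plain recursion.
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]
    return table[n]

print(fib_iter(5))  # 5
```

This iterative form avoids recursion entirely, so even fib(2) is filled into the table once and simply read back when needed.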
An example that we have already seen is the Fibonacci problem: if the recursive version is used, we need to calculate the value of fib(2) three times, which increases the running time of the algorithm, whereas in dynamic programming we retain these values so that they can be reused, which reduces the time. Now, how many decision sequences are generated in dynamic programming, one or many? The answer is that many decision sequences are generated, but most importantly, the optimal decision sequence is chosen so as to give the optimal solution. This is the reference used for the video lecture. Thank you.