Hello friends. I am Sanjay Gupta, and I welcome you to Sanjay Gupta Tech School. In this video, I'm going to explain algorithm analysis, so this video is related to data structures and algorithms.

Starting with my explanation: the efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation. They are the following. First is a priori analysis. This is a theoretical analysis of an algorithm. Efficiency is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation. That is what comes under a priori analysis. The second one is a posteriori analysis. This is an empirical analysis of an algorithm. The selected algorithm is implemented using a programming language and is then executed on the target machine. In this analysis, actual statistics like running time and space required are collected. A posteriori analysis, or post-implementation analysis, is very important, and it gives us two types of analysis: first is time, and second is space.

Moving forward, the next heading is algorithm complexity. Here you will understand what the time and space factors are that determine an algorithm's complexity. Suppose X is an algorithm and n is the size of the input data. The time and the space used by the algorithm X are the two main factors which decide the efficiency of X. So here you can see, the first point is the time factor and the second point is the space factor.

Let's understand what the time factor is. Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm. So if you want to identify the complexity of any sorting algorithm, you count the number of comparisons it performs. This is how we identify the time factor of an algorithm. Second is the space factor. Space is measured by counting the maximum memory space required by the algorithm. I hope these two are pretty simple: time depends upon the steps, and space depends upon the memory required by the algorithm.

The last point is that the complexity of an algorithm, expressed as a function of n, gives the running time and the storage space required by the algorithm, where n is the size of the input data. So I hope you understood briefly the time factor and the space factor related to algorithm complexity. Now we are going to discuss them separately.

First, I'm going to explain space complexity. Here you can see the first paragraph: the space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. So whenever the algorithm is implemented in any programming language, the memory space it requires throughout its life cycle is known as the space complexity of that algorithm. The space required by an algorithm is equal to the sum of the following two components. The first one is a fixed part and the second one is a variable part. The fixed part is the space required to store certain data and variables that are independent of the size of the problem, for example, simple variables and constants used, program size, etc. The variable part is the space required by variables whose size depends on the size of the problem, for example, dynamic memory allocation, recursion stack space, etc.
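To make the fixed and variable parts concrete, here is a minimal Python sketch of my own; the function names sum_iterative and sum_recursive are just illustrative. The iterative version uses only a constant number of variables (fixed part), while the recursive version grows the call stack with the input size (variable part).

```python
def sum_iterative(arr):
    # Fixed part: 'total' and 'x' take constant space,
    # independent of how large 'arr' is.
    total = 0
    for x in arr:
        total += x
    return total

def sum_recursive(arr, i=0):
    # Variable part: each recursive call adds a stack frame,
    # so the recursion stack grows with len(arr) -> O(n) space.
    if i == len(arr):
        return 0
    return arr[i] + sum_recursive(arr, i + 1)

print(sum_iterative([1, 2, 3]))   # 6, with O(1) extra space
print(sum_recursive([1, 2, 3]))   # 6, with O(n) stack space
```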
So the fixed part is static and can be determined before you execute the algorithm, while the variable part depends on things like dynamic memory allocation or the stack that we use while applying a recursive process. Collectively, the fixed part and the variable part give the space complexity of an algorithm.

Here is an example so that you can understand how we can calculate space complexity. The space complexity S(P) of an algorithm P is S(P) = C + S(I), where C is the fixed part and S(I) is the variable part of the algorithm, which depends on instance characteristic I. If we combine both, we find the space complexity of the algorithm P. So here the algorithm is denoted by P and S is representing the space complexity.

The following is a simple example that tries to explain the concept. The algorithm is SUM(A, B): Step 1, START; Step 2, C = A + B + 10; Step 3, STOP. Here two variables A and B are used as input, 10 is a constant, and C is also a variable, which stores the outcome. So we have three variables A, B, C and one constant. Hence S(P) = 1 + 3: one for the constant and three for the variables. The three variables go in the variable part because we don't know in advance whether they will be created dynamically or not. If we add both parts, the result is the space complexity of the algorithm.

Now, the actual space depends on the data types of the given variables and constants, and the count is multiplied by the size of each type accordingly, since memory is allocated based on the data types you choose. So whatever variables and constants you are using in your algorithm, they collectively determine the space complexity of that algorithm. With this explanation of the two parts, fixed and variable, and with this example, I hope you understood how we can identify the space complexity of an algorithm.

Moving forward, the second type of complexity is time complexity, and this is very important. The time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. This is a very simple statement. The second point is that the time requirement can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided each step consumes constant time. For example, the addition of two n-bit integers takes n steps. Consequently, the total computation time is T(n) = c * n, where c is the time taken for the addition of two bits and n is the number of steps. If we multiply both, we can identify the total time taken. Here we observe that T(n) grows linearly as the input size increases: as you increase the input size, more steps need to be performed, so the time taken increases. So it depends upon the input, on how many inputs there are for the algorithm. This was the brief explanation of time complexity.
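As a rough illustration of T(n) = c * n, here is a small Python sketch of that n-bit addition example with an explicit step counter; the helper add_n_bit and its bit-list representation are my own hypothetical choices.

```python
def add_n_bit(a_bits, b_bits):
    """Add two n-bit numbers given as lists of bits (least significant bit first)."""
    assert len(a_bits) == len(b_bits)
    result, carry, steps = [], 0, 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry          # one constant-time bit addition (cost c)
        result.append(s % 2)
        carry = s // 2
        steps += 1                 # one step per bit position
    result.append(carry)
    return result, steps           # steps == n, so total time T(n) = c * n

# 6 = 110 and 3 = 011 in binary; LSB-first they are [0,1,1] and [1,1,0]
bits, steps = add_n_bit([0, 1, 1], [1, 1, 0])
print(bits, steps)  # [1, 0, 0, 1] (which is 9), after 3 steps for n = 3
```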
Now, time complexity can be identified in three ways. First is the best case, second is the average case, and third is the worst case. The best case is the minimum time required for program execution. The average case is the average time required for program execution, and the worst case is the maximum time required for program execution. So for an algorithm, you can identify all three cases, best, average, and worst, as its time complexity. Now let's understand each one by one.

First is the best case time complexity. The term best case performance is used to analyze an algorithm under optimal conditions. For example, the best case for a simple linear search on an array occurs when the desired element is the first in the list. Let's say we are applying linear search on an array and the number that we want to search for is available at the first location of that array; that will be considered the best case for that algorithm. So if the number of steps or iterations is very small, that calculation comes under the best case time complexity. However, when developing and choosing an algorithm to solve a problem, we hardly ever base our decision on the best case performance, because it will not happen every time that the number is available at the first position while applying linear search; it can be anywhere in the array. So it is always recommended to improve the average case and the worst case performance of an algorithm, and we rarely consider the best case time complexity because it occurs only in rare cases.

Second is the average case time complexity. The average case running time of an algorithm is an estimate of the running time for an average input. It specifies the expected behavior of the algorithm when the input is randomly drawn from a given distribution. The average case running time assumes that all inputs of a given size are equally likely. Among the best, average, and worst cases, this is the most desired one to analyze.

Third is the worst case time complexity. The worst case time complexity denotes the behavior of the algorithm with respect to the worst possible input instance. For example, if you are applying linear search and the number is available at the last position of the array, that will be the worst case. The worst case running time of an algorithm is an upper bound on the running time for any input. Therefore, knowing the worst case running time gives us an assurance that the algorithm will never go beyond this time limit.

So the worst case is the upper bound, the best case is the lower bound, and the average case lies in between. This way, through these three cases, we can identify different kinds of time complexity for an algorithm: best case, average case, and worst case. We normally focus on the average and worst case, and we try to reduce the average and worst case time complexity so that the algorithm works efficiently. So in this video, I explained to you the space complexity and the time complexity of an algorithm, and you can see a small sketch below that recaps the linear search cases.
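Here is that recap as a minimal Python sketch of my own, counting the comparisons linear search performs so the best and worst cases become visible.

```python
def linear_search(arr, target):
    # Count every comparison so we can see the cost for each case.
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))  # (0, 1): best case, target is first -> 1 comparison
print(linear_search(arr, 5))  # (4, 5): worst case, target is last -> n comparisons
```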
So this video will help you prepare your theoretical notes related to algorithm complexity, both space complexity and time complexity. And this is very important because in every data structures examination this question is asked: explain the space and time complexity of an algorithm. So you need to explain everything that I explained in this video, and I hope you understood it all. If you want to watch more videos related to data structures and algorithms and the implementation of data structures, you can go to the description of this video, where you will find various playlist links. Also, at the end of this video, you will find links to playlists related to data structures. Do watch those videos so that you can get command over data structures. Thank you for watching this video.