4, and if I have 5, recursive of 5 is going to call recursive of 4, recursive of 4 is going to call recursive of 3, recursive of 3 is going to call recursive of 2, and recursive of 2 is going to call recursive of 1. If you go back to the program here, what happens when n equals 1? It simply returns 1. So how many times is this function executed? 1, 2, 3, 4, 5. So the time complexity of this algorithm, when n equals 5 for example, is 5. Therefore it is order n, ok. Now, let us go back and change this. Suppose I add a second call to recursive of n minus 1 here and follow what happens to the time. So what is happening now? Again let us start with 5. There are two calls now, recursive of n minus 1 plus recursive of n minus 1, so 5 is going to call 4 twice, each 4 is going to call 3 twice, each 3 is going to call 2 twice, and so on, right, ok. Let me go back and set this up properly. Do not worry about what this function computes, ok. Initially this was the function: if n equals 1, return 1, else return recursive of n minus 1, ok. So what we did was analyze the time complexity of this algorithm, and this is what I did: suppose I call recursive of 5, then recursive of 5 calls recursive of 4, 4 calls 3, 3 calls 2 and 2 calls 1. So now let us see how we perform the time complexity analysis in this particular case, ok. The test n equals 1 is order 1, the return statement is order 1 and the else part is also order 1, ok, but the function is being called again inside itself. So when you analyze a recursive program, what are we going to do? I am going to say that this call contributes whatever recursive of n minus 1 costs, because that is what is happening here: recursive of n minus 1 is going to be called, right. So what is the time complexity of that call? Let us leave it as a question for a moment and look at one execution of the recursive function. These two statements make up the body of the program; let me call the test P1 and the return P2. Both are executed, each is order 1, so one execution, ignoring the recursive call, is order 1, like the sum of two small programs, ok. But here there is a catch: if the test is not satisfied, it calls recursive of n minus 1. Now, what is recursive of n minus 1? Again it is going to execute the same statements, which means it is going to do order 1 work n minus 1 more times, ok, and when will it terminate? It will terminate when n equals 1. So when I look at the total time, the first call, recursive of 5, costs some constant c, recursive of 4 also costs c, then c, c, and finally some constant d, ok. Why do I write d separately? Primarily because when n equals 1 only "return 1" is executed, and we assume that costs d, whereas each execution where n is not equal to 1 costs c, which includes the cost of making the recursive call.
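In code, the function being discussed looks roughly like this (a minimal sketch; the surrounding main and the printout are only for illustration):

```c
#include <stdio.h>

/* One recursive call per invocation: recursive(n) calls recursive(n-1)
 * until n == 1, so there are n calls in total and the running time is O(n). */
int recursive(int n) {
    if (n == 1)
        return 1;              /* base case: costs some constant d        */
    return recursive(n - 1);   /* every other call costs some constant c  */
}

int main(void) {
    printf("%d\n", recursive(5));  /* makes 5 calls: 5, 4, 3, 2, 1 */
    return 0;
}
```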
So, when you look at it, what is it that we are seeing here? If n is 5, you notice there are 4 c's plus a d, ok. So the total time complexity, when n is 5, is of the order of n, ok. Now, what I did was change this: all that I am going to do is, where I have recursive of n minus 1, I am going to call recursive of n minus 1 plus recursive of n minus 1. Now let us analyze the time complexity. Look at the way this is being called. I am calling recursive of 5, and each call to recursive is making two recursive calls. So recursive of 5 makes two calls to recursive of 4. Each call to recursive of 4 is again going to make two calls to recursive of 3, each of those two calls to recursive of 2 and so on, and finally the calls at the bottom reach 1, 1, 1, 1 and terminate. Now, what is the time complexity of this? The previous recursive function was order n; notice that there is no log n anywhere there. Now count the total number of operations in this tree, assuming each of these calls costs c just like before, and each of the terminating calls costs d. How many c's do we have? 1, 2, 3, ..., 15. So there are 15 c's. And what do we have here? 1, 2, ..., 16, so 16 d's. So now if you look at the time complexity of this, it is of the order of 2 to the power of 5 rather than linear; that means it is of the order of 2 to the power of n. Assuming both c and d are approximately the same constant, let me call both of them c, we have 16 plus 15 equal to 31 computations. I made a call for n equal to 5 and we are doing on the order of 2 to the power of 5 work: 31 order-1 computations, which is approximately 2 to the power of 5. So ignoring the constants, the time complexity is of the order of 2 to the power of 5. Notice this; I want you to appreciate it, because this is a very important example. First we had only one recursive call, let us say only up to here. Then what I did was add this other recursive term. Just by making the recursive call twice, the time complexity, which was order n when only one recursive call was there, becomes order of 2 to the power of n. Are you understanding this? Is this clear to the person who asked the question? Yes. Time complexity can be linear, it can be logarithmic, it can be logarithmic plus linear, whatever. For example, you can have complexities of the kind order of n plus log n. Any of these kinds of things can appear in a complexity. For example, I think I have already shown you a sort which is n log n. If on the other hand I was doing a simple sort program, an ordinary sort, let me say selection sort, what I would do is start with: for i equal to 0. Some variables are not defined here; ignore that. All of you know what selection sort is. Selection sort is simply this.
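A minimal sketch of the modified function (the name recursive2 is mine; in the lecture the original function is simply edited in place):

```c
/* Same function, but with two recursive calls per invocation.
 * The call tree doubles at every level: 1, 2, 4, 8, ... so for input n
 * there are roughly 2^n calls and the running time is O(2^n). */
int recursive2(int n) {
    if (n == 1)
        return 1;                                  /* 16 of these when n == 5 */
    return recursive2(n - 1) + recursive2(n - 1);  /* 15 of these when n == 5 */
}
```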
Let us say I have 4 elements, say 4, 6, 5 and 2. What selection sort essentially does is compare the first element with the other three elements that are there. Here 2 is the smallest element. This is the 0th element, this is the first element, then the second and the third element. So what is going to happen now? small is assigned the value 4, the 0th element. Then 4 is compared with 6, 5 and 2, and the location at which the smallest element is present is 3. Then what do we do here? For j equals i plus 1, j less than n, j plus plus: if a of j is less than small, let me write it down, I set loc equal to j, update small, and proceed. And this completes the inner loop. Then what we do finally is exchange: we put 2 in position 0 and 4 where 2 was. So what are we doing? We are finding the location of the smallest element and exchanging it into place, and this part of the array is now sorted. Then you go to the next element and find the next smallest. So what will happen here? After the first pass through the array, 2 will be here and 4 will be there. During the next pass through the array, we start with small equal to a of 1, which is 6; we compare with the rest, 4 is the smallest element, then we exchange these two elements. So at the end of this, 6 moves from here to the end, and you have 2, 4, 5, 6. In the last pass, 5 is compared with 6 and remains the same. So this is how the sorting is going to happen. So let us look at the complexity of this one. If you look at the outer for statement, there are three parts: i equal to 0, i less than n, i plus plus, and each one is going to take constant time, constant time, constant time. Similarly for the inner for statement and the statements inside it; I can combine all of these, because taken as a straight-line sequence of statements, one execution is order 1. But what do we see here? This inner for loop is going to be executed up to n times: the first time it runs n minus 1 times, the second time n minus 2 and so on. Therefore, let us count. The outermost loop, the i loop, executes n times, and the inner loop executes n minus 1 times, then n minus 2, down to 1, ok? So this is exactly what we are going to see. So now, when you look at the time complexity of this, what we can say is that one execution of the body of the inner loop, including all of this, has a time complexity of order 1, but it is in this particular form: there is a program within a program. The for loop declaration tells you how many times the body, call it P2, is going to get executed. So if I look at the outer for loop, how many times is it getting executed? n times, or n minus 1; we do not distinguish between n and n minus 1. And what is P2? Each execution of it is order 1. So we go back to this program: this outer loop is P1 and this whole inner part is P2, and P1 times P2 is the time complexity. Within this, there is a P3, which is again constant. So P1 times P2 times P3 is the time complexity that we see here.
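Putting the fragments from the board together, a minimal selection sort sketch (the variable names small and loc follow the lecture; everything else is filled in):

```c
#include <stdio.h>

/* Selection sort: for each position i, find the location of the smallest
 * remaining element and swap it into place.  The outer loop runs n times and
 * the inner loop runs n-1, n-2, ..., 1 times, giving O(n^2) overall. */
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int small = a[i];          /* current smallest value           */
        int loc = i;               /* location of the smallest element */
        for (int j = i + 1; j < n; j++) {
            if (a[j] < small) {
                small = a[j];
                loc = j;
            }
        }
        a[loc] = a[i];             /* exchange a[i] with a[loc] */
        a[i] = small;
    }
}

int main(void) {
    int a[] = {4, 6, 5, 2};        /* the example used in the lecture */
    selection_sort(a, 4);
    for (int i = 0; i < 4; i++)
        printf("%d ", a[i]);       /* prints: 2 4 5 6 */
    printf("\n");
    return 0;
}
```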
So, when you are looking at this particular program, you are going to get order n into n, which is order n squared. So we have seen three kinds of programs: one which takes order n, another which takes order 2 to the power of n, another which takes order n squared. Let me show you one that takes log n. Let us say I have i equal to 1, and then: while i less than n, i equal to 2 star i. I have written a small segment of code, and I am asking you a question now: what is the time complexity of this piece? Can you answer me this question? The person who asked earlier, can you tell me what it is? So if you look at this, what is interesting is: how many times is this loop executed? Take i equal to 1 and n equal to 15. The first time, the value of i is 1, then it becomes 2, then it becomes 4, then 8, then 16. When it becomes 16 the condition is not satisfied and it comes out. So how many times is the loop body executed? The loop is executed 4 times. Now suppose I make n equal to 32, or rather, let me make it n equal to 31. Then how many times will it execute? I do not want to do a monologue, so I will tell you: 5 times. So in terms of n, what is the time complexity now? 2 raised to the power n? Not 2 raised to the power n. It is order of log n, because for n equal to 15, log of 15 to the base 2, if I take the ceiling of it, is going to be what? 2 to the power of 4 is 16, right? Therefore this will be 4. Similarly for 31, if I take the ceiling of log of 31 to the base 2, this is going to give me 5. So this is a program whose time complexity is order of log n, even though there is a while loop here; a small sketch of this loop is given just below. Why was the previous problem order n for each of the loops? Because there we are incrementing i by only 1: i less than n, i plus plus, where every time we are only incrementing i by 1. Therefore the outermost loop runs n times, and the inner loop, for example, runs n minus 1 times the first time, n minus 2 times the next time and so on, and when you take the sum of all of that, it is of the order of n squared, ok. Are you clear? So this is what time complexity is all about. Most important: when you write your code, please be very, very careful. I would like you to look at this particular example again; this recursion example is a dangerous example, actually. When there was only one recursive call, like we did first, it was order n, but as soon as I added one more recursive call, it became 2 to the power of n. So it can become an exponential type of complexity. And if you look at it, the easiest way to compute time complexity, my advice to you, especially when recursive programs are there and the analysis is difficult, is to just draw this out; this is called the recursion tree, ok. Simply count all the operations, and for each operation you assign some constant or whatever, depending upon what is being done in it, ok.
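Going back to the doubling loop for a moment, a minimal sketch of the fragment on the board (n is assumed to be defined earlier):

```c
/* Doubling loop: i takes the values 1, 2, 4, 8, ... so the body runs roughly
 * log2(n) times, not n times.  For n = 15 it runs 4 times; for n = 31, 5 times. */
int i = 1;
while (i < n) {
    /* ... some constant-time work ... */
    i = 2 * i;
}
```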
For example, in something like quick sort, what are we doing? Can we look at the time complexity of quick sort? What happens in that? I am asking you a question. Order of log n? It is not order of log n; it is order of n log n at best. You can also write it as a recurrence; I will just show you one example. In quick sort, let us take this array; I do not remember the exact example, but nevertheless, let us say we have this. So what are we doing in quick sort? Let us say I am choosing the middle element as the pivot element. Then what do we do? What do we ensure in quick sort? You compare the elements on the left side of the pivot element with the elements on the right side of the pivot, and you exchange them if they are out of order with respect to the pivot element. So what do you see over here? Notice that 12 is the pivot element. 8 is smaller than 12, so no problem; you move forward. 15 is larger than the pivot element, so it is on the wrong side of the pivot element. And then you do the same thing from the right hand side: 13 is larger than 12, so no problem, but 4 should be on the left hand side of the pivot element. So actually, I start from the two ends: I have one pointer which starts here on the left and one pointer which starts here on the right. 8, for example, is in the correct place, no problem; with respect to 12 it should be on the left hand side, and it is. Now, 11 on the other hand should be on the left hand side of 12, but it is on the right hand side, and 15 should be on the right hand side of 12, but it is on the left hand side. So what do I do now? I exchange 15 and 11, and I keep repeating this until the two pointers meet. That is one pass through the array. So at the end of this, with every element being compared, I will have something like this: 8 stays, 11 comes here, 6 stays here, which is just fine, 4 comes over here and 12 moves. Let us trace this one pass to the end. The first time around, 8 remains as it is. 15 and 11 get exchanged: 11 comes here, 15 moves there. Then 6 is on the correct side with respect to 12, so I do not have to do anything. Then I am at 10; 10 is on the wrong side of 12, so I move 10 here and I move 12 here. Then what do I have? Now I have 13 and 4. 13 is on the wrong side of 12, so 12 comes here and 13 goes there. Then again I have 12 and 4, so 4 will come here and 12 will go there. So what this pass essentially does is finally give me 4 here and 12 here. Now, if you look at the right hand side of 12, all the elements are larger than 12; on the left hand side of 12, all the elements are smaller than 12. So it approximately divides the array into two parts. Now, my question to you is: during one pass,
now that we have completed one pass through the array, can you tell me how many elements were seen? Can you answer me this question? Do you understand my question? Can you please repeat? See, we started out with this array. Then what did we do? We had one pointer i here and another pointer j here, from which we started. Then we compared the two elements with respect to the pivot element; 12 is our pivot element, let us say. As long as the numbers on the left hand side were less than 12, we let them be as they are: they should be less than 12 on the left hand side and greater than 12 on the right hand side. Now, what happens is that on the right hand side I have 11, which is smaller than 12; therefore it has to move to the left. So what do I do? When I move i to i plus 1, let us say i equal to 1, there is 15 here, which should be on the right hand side of 12, and 11 should be on the left hand side of 12. So what do I do? I exchange the two numbers, and I keep repeating this; even the pivot element can move. Next I came to 6, and 6 was in the correct place. Then I am looking at 12 and 10; 10 should be on the left hand side, so I exchange them and 12 moves there. And then again 13 is on the wrong side of 12, because if you look at where we were at that point, let me say I had 8, 11, 6 and 10 here, then 13 here, 4 here, 12 here, and 15 at the end. Now, when I look at 13, it is on the wrong side of 12, so I exchange it back again, which gives me 12 and 4 here. And again I exchange 12 and 4, because 4 is again on the wrong side. So at the end of one pass through the array, it divides the array into two parts; in this example, it divides into 5 and 3. So my question to you is: once I have completed one pass through the array, what do we have? With respect to the pivot element, all the elements on the left hand side are smaller than or equal to the pivot element, and on the right hand side they are greater than the pivot element. My question to you is: how many elements were seen during one pass through the array? Do you understand my question? Yes. I went through one pass; it is still not sorted. So now, what is interesting? In quick sort, what do we do? We go through this, and then, if you look at it here, only these elements on the left need to be sorted, and these elements on the right need to be sorted. Once again, we find the pivot element in each part and repeat the same thing. So, question to you: during one pass through the array, how many elements were seen? Five elements? No, all the elements were seen, because I compared this with this, and this with this; every element in the array was seen. So if I was going to write a recurrence equation, I would write that T of n is T of n by 2 plus T of n by 2, or more generally T of n minus i plus T of i, plus the cost of the pass, which looks at every element of the array. Are you with me on this? That is what the complexity of quick sort is going to come from. If we are lucky, if the array is a good array, I will have T of n equal to 2 T of n by 2 plus c times n.
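The lecture does not show the partition code itself, so here is a sketch in the spirit of the two-pointer pass just described (a Hoare-style partition; the pivot choice and the exact index handling are my assumptions):

```c
/* Two-pointer partitioning pass: i scans from the left, j scans from the
 * right, and elements on the wrong side of the pivot are exchanged.
 * Every element is looked at once, so one pass over n elements costs O(n). */
void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

void quick_sort(int a[], int lo, int hi) {
    int i = lo, j = hi;
    int pivot = a[(lo + hi) / 2];     /* choose the middle element as pivot */
    while (i <= j) {
        while (a[i] < pivot) i++;     /* already on the correct side */
        while (a[j] > pivot) j--;     /* already on the correct side */
        if (i <= j) {
            swap(&a[i], &a[j]);       /* both out of place: exchange */
            i++;
            j--;
        }
    }
    /* a[lo..j] now holds elements <= pivot and a[i..hi] holds elements
     * >= pivot; recurse on the two parts */
    if (lo < j) quick_sort(a, lo, j);
    if (i < hi) quick_sort(a, i, hi);
}
```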
So what does this recurrence lead to? What is the time complexity of this? I am asking a question now. Sorry ma'am, I don't know the answer. See, basically, let us look at the best case. In the best case, I am going to have something which was eight elements long, and let us say it got divided into two halves, four and four. Then each of those became two and two, and finally every part was one element long. Okay. This is how the recursion in quick sort will work, if you look at the algorithm which was shown in the video. So, now, what is interesting is that every time in quick sort, the first time, all elements are looked at. The next time, although there are two different parts, again all elements are looked at. The third time, again all eight elements, and the fourth time too, all eight elements. What is the depth here? If n is equal to eight: one, two, three, four; four times the elements are looked at, which is about log of eight to the base two. And each time, the elements are looked at n times. Therefore it is eight times log of eight to the base two, and the time complexity of quick sort in the best case is order n log n. Best case. Okay. What is the worst case time complexity of quick sort? I am asking you a question. I don't know the answer, ma'am. Let me go back to this example. What can happen in this example? If you notice, in the example that I worked out for you, it divided not into four and four but into five elements and three. In the worst case, quick sort can divide it into seven and one in the first pass. In the second pass, this one element is already sorted, and the seven becomes six and one, and so on. So how deep will it get? What I am saying is that there can be a scenario where I have these eight elements and the pivot element is chosen such that all elements to the left are smaller than the pivot element and only one element is larger than the pivot element. Then, let us say, I get seven here and one here. The next time around, again I choose a pivot element among the seven elements and again it divides into six and one. Then it keeps doing this: five and one, four and one, three and one and so on. How deep will this become? If you notice, in the previous case, when it divided exactly into two every time, the depth was log n. If on the other hand it divides the array every time into one element and the rest of the elements, then how deep will it get? I am asking you a question. Let us go back to this example. The first time it was four-four, right? Then it became two-two-two-two, because when it becomes four and four, again I choose a pivot element in each part and ensure that the elements smaller than the pivot are on the left and the elements greater than the pivot are on the right; the same thing in both parts. The next time around it gets divided into two and two, all four of them, and then of course finally into single elements. But if it so happens that the pivot element is chosen such that the first pass through the array gives seven elements and one, then the next pass gives six and one; that one is just left as it is, there is no recursion on it, because it is already a single element. This then becomes five and one, this becomes four and one, this becomes three and one, this becomes two and one. If you look at the depth of this, it is as long as the array is. So that is when the time complexity blows up: every pass we are still looking at all the eight elements, whatever the split.
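To put the two splits side by side as recurrences (this summary is mine; the board only states the best-case recurrence explicitly):

```latex
% Best case: the pivot splits the array into two halves each time.
T(n) = 2\,T(n/2) + c\,n \;\Longrightarrow\; T(n) = O(n \log n)

% Worst case: the pivot splits off a single element each time.
T(n) = T(n-1) + c\,n = c\,(n + (n-1) + \dots + 1) \;\Longrightarrow\; T(n) = O(n^2)
```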
So what happens in such a case? As I said, quick sort is an algorithm where every pass through the array looks at all the elements: I looked at n elements here, eight elements here, eight elements here. However the elements are divided, we are going to keep looking at all eight of them. So what is going to happen is that the depth becomes n, or 8, and in every pass I am also looking at eight elements. Therefore the time complexity is order n squared. Is this clear now? So the worst case complexity of quick sort is order n squared. But then you may ask the question: why is quick sort called quick sort? It is not doing anything quickly at all. If the elements are such that the pivot element is always chosen wrongly, then quick sort performs very badly. But what is interesting is that, if you look at it in a probabilistic way, if I take n symbols and look at the permutations of those symbols, the probability that the pivot element you choose is always, for example, the largest element, or the element which is one smaller than the largest, is very low. So what happens is that the average case time is of the order of n log n, and that is why it is called quick sort. What I am saying is that this kind of scenario, where the array keeps getting divided into 7 and 1, then 6 and 1, then 5 and 1, 4 and 1, and so on, is very, very unlikely; the probability of this happening is very, very low, and you can actually show that the average case is a very good running time for the quick sort algorithm. Okay. All right. So it is actually a quick sorting algorithm, because the probability of the array getting divided like this, always into one element and the rest, one element and the rest, is very, very small. All right. Are there any other questions? What are the various garbage collection techniques in data structures? What do you want to do in garbage collection? Basically you want the cleaning up to be done, right? Are you talking about the garbage collection schemes that there are in Java? In data structures, there is no specific garbage collection; you keep track of all the pointers and you clean up yourself. In Java, as a specific language, something is done, and let me explain. What is garbage collection all about? You have done a malloc. You make a malloc and you allocate some memory, some characters, to a pointer. Now what can happen is that you forget to free it. Ideally you should free it, okay. If you don't free it, then let me give you an example like this. Let us say I have a function here; let me call it find, with some arguments. If the pointer had been passed in as an argument, the caller would still have a reference to it; but suppose instead there is a local pointer, int star b, declared inside, and I do the allocation there, I do a malloc into b. After doing this, let us say I wrote a lot of code, I returned some value, and I came out, but I forgot to free b; I did not do the free. Okay. Then what do we have now?
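A minimal sketch of the leak being described (the function name find and the allocation size are made up for illustration):

```c
#include <stdlib.h>

/* b is a local pointer: the malloc'd block is reachable only through it. */
int find(int n) {
    int *b = malloc(n * sizeof(int));   /* allocation done inside the function */
    if (b == NULL) return -1;
    /* ... a lot of code that uses b ... */
    return 0;                           /* forgot: free(b); the block is now
                                           unreachable -- a memory leak */
}
```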
You have an array which has been allocated but has not been freed, and you have no connection to it any more, because this pointer b is not being passed out as an argument to anyone who could free it. Okay. So this memory is simply lost. So what Java tries to do, and I hope I am answering your question, I hope this is the kind of question you are interested in, is this: this b here is what you might call a dangling pointer; it has lost its connection entirely. Okay. So what Java does is look for all of these, and at regular intervals of time it cleans up the memory. But I must warn you: although Java allows this, it is a very, very bad programming practice. I generally advise that you always remember what you have allocated, and if you don't need it any more, always free it. I hope this answers your question. Okay. There is nothing like garbage collection in data structures as far as I know, from whatever I am doing. Do you have anything else in mind? So garbage collection is a specific facility that is available in some languages, where all these pointers which have lost their connections are cleaned up automatically. But as I said, my advice to you is that relying on it is not a good idea. Because what happens, even in Java, is that garbage collection normally happens only when the memory reaches some limit. So many times, if you are doing something interactive, you will find that your program simply freezes while it cleans up the memory of all these lost pointers. So my general advice to you is: whatever you allocate, you deallocate; always remember to do that. Okay, I will leave it at that. Are there any other questions? Dronacharya seems to be having questions. Yeah, okay. Yeah. Dronacharya, if you have a question, please come in. Yeah, please sit down. Ma'am, I want to ask you: how many stacks are required to implement a queue? You tell me. This is a good question. I know you asked me, but can you think of how to work it out? What is the difference between a stack and a queue? A stack is LIFO and a queue is FIFO. So what do I have to do? If I have to make something which is FIFO, well, I am only having stacks with me, right? Then how many stacks would I require? I push things onto a stack, right? But then, when I pop things off the stack, what do I require? I require that the first element which was pushed is what is popped off the stack, correct? Because it is going to behave like a queue, correct? So then what do I need? So what happens? If I have p elements which I am pushing onto the stack, and I want it to behave like a queue, that means I have to pop off all the p elements, but those p elements have to be kept somewhere. So I need another stack onto which I push them. So to implement a queue using stacks, you need two stacks. Is that clear? Okay. Thank you, ma'am. Did you understand? Okay. Can you explain the algorithm, what I told you just now, back to me? I am asking you to repeat the algorithm. You said we need two queues to implement a stack? No; I asked how many stacks are required to implement a queue, and that is precisely what I answered: I said you need two stacks to implement a queue. Okay. Thank you, ma'am. Okay, can you explain to me how it can be done? Yeah. I just spoke the algorithm; I want to see whether you are listening or not.
Actually, ma'am, you told me that we push the elements onto one stack and then we need to push them onto another stack to implement it, but I did not follow why we need the two stacks for a queue. Okay, you have got it a little mixed up; let me just show you what I mean by this. Suppose I want to put the following elements into a queue: A, B, C, D, E. What do I have with me? I only have stacks. Okay. So I push A, B, C, D, E onto a stack. Now, if I want the stack to behave like a queue, which is the element that I have to take out first? I have to take out the element A, but it is at the bottom of the stack. So what do I do? This is stack one and this is stack two. What I do is pop E and push it onto stack two, then pop D and push it onto stack two, then C, then B, then A. Then what do I have? I have the element A on the top of the second stack. Next I can pop that element, because it is on the top of the stack. So you need two stacks to implement a queue. Are you clear now? Yeah. Okay. Yeah. Why do you need the two stacks? Can you explain it to me? Because we know that from a stack we can only take out an element from the top and put it back on the top. Therefore we need two stacks, so that one stack can be emptied and the elements of the first stack can be put in the second stack; the elements that were at the bottom, like A, can then be used when they come to the top of the second stack. Yes. Exactly. Okay. Is this clear now? Yeah. Good. Good afternoon, ma'am. Yes. Tell me. Ma'am, I have a question: how do you find the middle value in a linked list in one pass, travelling through it only one time? Only one pass. Let me just think. Yes ma'am. Do you know the number of elements in the linked list? No, ma'am. Then I need to keep track of the number of elements which are before it. Let me just give you a scheme that I think might work; I am not absolutely sure. Suppose you start with, let us say, the first couple of elements, take one of them to be the tentative middle, and then count how many elements there are on either side of it, keeping track of the middle value every time. Just thinking about it, I feel you could do something of this kind; I am not absolutely sure. In this list, for example, say we have 1, 2, 3, 5, 7, 8 and 15, and we want the number 5. Then what I am wondering is: I start, assuming this is the beginning and 15 is the end. Then I have 3, 5 and 15, and 5 is the middle value, because I am looking at only one element on either side. Then I look at 2; 2 goes before 3, which means there are two elements before 5 and one element after 5, so 5 may still be the middle value, or 3 may be the middle value. You keep repeating this: 1, 2, 3, 5 and 15, and now 3 becomes the middle value, or the one next to it; you have to keep both of them. See, what happens is that every time I add an element, that element goes either before the middle value or after the middle value, and that is what we need to keep track of; the middle element also moves. But you cannot go back and forth in the list. Are you sure of this question? Are you sure this question was given somewhere? I am asking you a question. Yes ma'am. Where did you get this question from? Sorry, where did you get this question from? I have it in mind, ma'am.
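A small sketch of the two-stack queue being described (array-backed stacks with a fixed capacity; all names are for illustration):

```c
#include <stdio.h>

#define CAP 100

/* A simple array-backed stack. */
typedef struct { int data[CAP]; int top; } Stack;

void push(Stack *s, int x) { s->data[s->top++] = x; }
int  pop(Stack *s)         { return s->data[--s->top]; }
int  is_empty(Stack *s)    { return s->top == 0; }

/* Queue built from two stacks: enqueue pushes onto `in`; dequeue pops from
 * `out`, and when `out` is empty we pour `in` into it, which reverses the
 * order so the oldest element ends up on top. */
typedef struct { Stack in, out; } Queue;

void enqueue(Queue *q, int x) { push(&q->in, x); }

int dequeue(Queue *q) {                  /* assumes the queue is not empty */
    if (is_empty(&q->out))
        while (!is_empty(&q->in))
            push(&q->out, pop(&q->in));
    return pop(&q->out);
}

int main(void) {
    Queue q;
    q.in.top = 0;
    q.out.top = 0;
    for (int c = 'A'; c <= 'E'; c++)     /* enqueue A, B, C, D, E */
        enqueue(&q, c);
    printf("%c\n", dequeue(&q));         /* prints A: first in, first out */
    return 0;
}
```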
In your mind. Yes ma'am. I have a feeling you need to keep track of too many things here. Sure ma'am. If it is an array, for example, you can get to the middle directly, somewhat like finding the pivot; in fact that is how it is done for quick sort when it is an array. For a linked list, I am not absolutely sure whether it can be done. Okay. There is no way of going back and forth unless it is a doubly linked list. As far as I know, on a singly linked list you may not be able to find the middle value in one pass unless you can go back and forth; I am not sure. But for an array, yes, you can do it; you can find the middle one. Okay ma'am. Yeah. Let me think about it; if I find a solution, I will ask them to post it on the forum. Thank you ma'am. Yeah. Are there any other questions? Good afternoon ma'am. Yes. Yeah. Ma'am, what is the use of the keyword this? Keyword this? This we do not use in C. The this pointer, for example, is primarily there to point to the current instance of the given object. And when do we go for recursion and when for iteration? Iteration is always preferred over recursion, for a simple reason: recursion is very expensive in terms of memory and the amount of bookkeeping that you have to do. But recursion comes very naturally for us to write. Okay. So what you would do, for example, is first write a recursive function, because it is very natural and easy to write, and then afterwards try to convert it into an iterative function. Okay. Because of the housekeeping that has to be done for a recursive function: if you are using recursion, the state of every recursive call, right down to the last call, has to be kept in memory. That is the problem. Okay. But for some functions you cannot help it. Okay. Then, any other questions? Dronacharya, any questions, please. Do you have any questions? Good afternoon ma'am. Yes. What is a generic pointer? A generic pointer, I don't know; I don't know what a generic pointer is. Where did you hear this term? Ma'am, what is a generic pointer? In what? In C++. I am not sure. I don't know. Ma'am, it is denoted as void star: void star P, where P is a pointer. Okay. Okay, void star P. What is it? Basically you can typecast such a pointer to a character pointer, to an integer pointer, to whatever you want. That is what void star is normally used for, because it has no particular type. But the point is, it is actually not good to do that, and in a pedantic compiler, like the one your exam is going to use, you cannot do that. You cannot typecast anything to anything, although sometimes it is convenient to do that. Okay ma'am, thank you. It is not a good idea, because it is like an address; you are simply typecasting an address. Okay ma'am, thank you. Good luck for your exam.