So here is the preamble for the question. We know how to search in a one dimensional array. There are two techniques. One is the straightforward linear search. If the array is not sorted, you just look at each and every element, compare it with the search element, find it, you are done. So you would expect on an average n by 2 comparisons may have to be made, but it is proportional to n. Search is proportional to n; maximum n basic steps. On the other hand, if you use binary search, you can actually reduce the search time to logarithmic time, but the array must be sorted. Now how can we search for an element in a two dimensional array of size n by n? What would be the basic steps required to find the element? A two dimensional array of size n by n. So you have rows in a two dimensional array. This is the 0th row, first row, second row and so on. And each row has n elements and there are totally n such rows. The problem with a two dimensional structure is you can either arrange columns in ascending order, or arrange rows in ascending order, or maybe something else. But given a two dimensional array, if you want to search for an element in a 2D array of size n by n, what is the maximum number of basic steps required to find the element? What would common sense tell you? The total effort will be proportional to n square, right? That is because if you are searching for a given element g, then g could be anywhere in the array, and you have to compare it with every element, and therefore in general you require order n square steps. Can you do better than this? Order n square is too many comparisons. So can you do better than that? Now suppose the array is sorted in each row; can you do it faster? So what would be the number of basic steps required? You search for g in the first row. But because the row is ordered, in the worst case, how many comparisons will you have to make? Log n to the base 2.
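For reference, the one-row binary search being invoked here can be sketched in C++ like this (a minimal sketch; the function name is ours, not from the lecture):

```cpp
#include <vector>

// Binary search in one sorted (ascending) row.
// Returns true if g is present in the row.
bool binarySearch1D(const std::vector<int>& a, int g) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // written this way to avoid overflow of (lo + hi)
        if (a[mid] == g)      return true;
        else if (a[mid] < g)  lo = mid + 1;   // g can only be to the right
        else                  hi = mid - 1;   // g can only be to the left
    }
    return false;
}
```

Each pass through the loop halves the remaining interval, which is where the log n to the base 2 bound comes from.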
You will have to do exactly the same thing with every row till you find the element. Since each row requires log n comparisons, what is the total number of comparisons that you would require? Is this correct? n log n to the base 2. Log n comparisons for every row, and there are n rows here. Is there a constraint? The constraint is that elements in each row are in sorted order. Is that right? Yeah. So he has a slightly better technique, maybe, but can you show that it is less than n log n? We are saying that elements are sorted in each row. n square is too much. We impose a constraint on the input array, on the order of elements in a row. Can you search faster? The answer is yes, but it will take n log n to the base 2. He was thinking of an approach with a similar constraint, but it would be still faster. Would it be? That was the question. His contention was: I will first try to identify a row in which that number is likely to exist. So how would you find that out? Suppose I compare the first element and the last element with the given element g. I will know whether g is likely to be within this row or not. But that still amounts to n comparisons. In fact, I will make two n comparisons. No, okay. It is not as if the entire two dimensional array is sorted such that the number here is always greater than the number here. That is not the case. The condition is just that each row is sorted. So yeah, the maximum element in each row will always be the last element. No, no. So you are imposing an additional condition on the matrix. State that condition. I do not see how that alone will help, because you cannot then impose any condition on the minimum element. And the maximum elements in the different rows need not be in any order. Yeah, so if there are n rows, then I am making n comparisons basically. With the given element I am making n comparisons.
Whether I make them only with the last element or the first element. Basically what you are saying is you want to first determine whether this element is likely to be in this row or not, and then within the row you do a binary search. That is exactly what this will be. It does not matter which way the rows are ordered. In fact, even if the rows are not organized relative to each other in any special way, you will still find it in this time. Yeah, so why not apply your mind and put on paper whatever you are thinking. You are basically saying: sort the entire two dimensional array. That is not a condition imposed on each row; it is a condition imposed on the total matrix. Okay, so I will just mention what he said and explain why that is not pertinent to the problem at hand. What he is saying is: arrange things such that the maximum element of any row is less than the minimum element of the next row. Effectively what he is saying is that the entire two dimensional array is sorted in row-major order. Effectively I have a single dimensional array of n square elements which is completely sorted. And the time required to search in that array using binary search, how much is that? If I have n square elements which are sorted, it is log n square to the base 2, because there are n square elements. How much is this? Two log n to the base 2. But I am imposing a condition on the entire two dimensional array, not on individual rows. Yeah, so I am sorting the whole array. What is the extra effort required to sort the entire array? Even if I use merge sort, for n square elements it is n square log n square to the base 2. Basically what you are saying is no different from saying that I have a single array of n square elements. But can you do better in the case of a two dimensional array where rows are sorted? That's the question. Can you do better than this? So let's put it this way: if every row is sorted individually, then the best you can do is that within each row you apply binary search.
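To make the comparison concrete, here is what searching that fully sorted arrangement would look like: treat the n-by-n array as one sorted array of n square elements and binary-search it in log n square = 2 log n probes. This is a hypothetical sketch; the lecture's point is that getting the array into this state costs a full sort in the first place.

```cpp
#include <vector>

// Assumes the whole n-by-n matrix is sorted in row-major order
// (last element of each row <= first element of the next row),
// so it behaves as one sorted array of n*n elements.
bool searchFullySorted(const std::vector<std::vector<int>>& a, int g) {
    int n = static_cast<int>(a.size());
    int lo = 0, hi = n * n - 1;          // binary search over flat indices 0 .. n*n-1
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int v = a[mid / n][mid % n];     // map the flat index back to (row, column)
        if (v == g)      return true;
        else if (v < g)  lo = mid + 1;
        else             hi = mid - 1;
    }
    return false;
}
```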
If you don't find it, you search in the next row and so on. So you would end up with an effort proportional to n log n to the base 2. Everybody agrees with this? This is the best you can do. Anything better than that? So write a program to do this. The assumption is you don't have to write a program to read all the elements of the array and so on. We assume that an array a, n by n, has been read, where each row is sorted. And then there is a given element, a search element s, or given element g, which you have to search in this array. The point is you will now have to write a program which assumes the underlying structure to be not a single one dimensional array but a two dimensional array where every row is assumed to be sorted. You can assume it to be sorted in ascending order, for example. So write a program now. So what if we only compare the middle element of each row to the number we want to search? Then we have to perform a binary search on only half of the row, if we even search it. Come again? If we compare the middle element of each row to the number we want to search, then we will have to perform a binary search on only half of the row. That is just the first step of binary search, right? So if we just start that. You are suggesting that I go to the midpoint here and I compare the given element g with the midpoint of the first row, then the midpoint of the second row, then the midpoint of the third row. So instead of applying binary search on individual rows and proceeding row by row, you want to simultaneously do binary search on all the rows. I don't see how the efforts are different. In fact, the effort in writing that program will be significantly more, in my humble opinion: to write a program where you are actually trying to implement a simultaneous binary search on n rows. Interesting. Why don't you try writing that program? My guess is it will be much easier to write a program to do a binary search in a single row and repeat the whole process n times anyway.
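The row-by-row approach being agreed on here can be written quite compactly. A sketch, assuming the array is held as vectors of rows and using the standard library's std::binary_search for the per-row log n work:

```cpp
#include <vector>
#include <algorithm>

// Search when only the individual rows are sorted in ascending order:
// binary-search each row in turn. n rows times log2(n) comparisons per
// row gives the n log n (to the base 2) worst case discussed above.
bool searchRowByRow(const std::vector<std::vector<int>>& a, int g) {
    for (const auto& row : a)
        if (std::binary_search(row.begin(), row.end(), g))
            return true;   // found: break out completely
    return false;          // declare failure only after all n rows
}
```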
Sir, in the previous thing that I was saying, where we sort the rows first, my point was that after I sort the rows, the maximum number of comparisons I have to make is n minus 1 plus log n, not n log n. How is that? Because let's assume I have sorted the rows such that the row with the largest maximum element is at the bottom and the row with the smallest maximum element is on top. I have sorted the rows only according to their maximum elements. Are you sorting columns or are you sorting rows? I am sorting whole rows. After comparing, I am exchanging whole rows. What is the effort required in doing that? So effectively, let us break this into two problems first. The first problem is: given that we have a two-dimensional array in which each row is sorted in ascending order, write a program to search a given element. That is the first part. You solve that first. Next, you tell me how you could do better than this if this is the condition. Now you are imposing an additional condition, saying that the rows, relative to each other, must also have an additional property. To achieve that property, what is the effort required? That is the question I am asking. If that amounts to sorting the entire n square element array, then I am not doing anything significantly better. Because if I have an array which is sorted like this, smallest element, then a larger one, then this, then this, then it is effectively a single n square element sorted array, and then the answer is obvious; everybody knows that answer. You don't need to write it. So let us solve this problem first. I hope everybody got the problem. The problem is: you have an array n by n. Every row is individually sorted in ascending order. You are given a search element s. And you have to determine whether s is in the array or not. Write a C++ program to do this.
Sir, I am asking: if we sort the entire array, that alone will take around n square log n square to the base 2 steps, which is much more than the n square steps we would take in a linear search over the entire array. Here is an interesting question. If I have to sort the entire array, it will take much more effort than the linear search. So why sort the array at all? Now, your question does not pertain specifically to a two-dimensional array. The same question can be raised even for a single-dimensional array. So it is a very fundamental question our friend has raised. If I have an array of n elements and I want to locate an element s in this array, the number of basic steps that I require is n. I have to make n comparisons. On the other hand, if I want to apply binary search so as to reduce the search time to log n to the base 2, I first have to go through the whole business of sorting, which is n log n effort. n log n is more than n anyway. So why do this? Why sort at all? Why not just search? Any answer? Everybody has seen a dictionary, an English language dictionary. You would have referred to it. Imagine the dictionary contains all words in arbitrary order and you are searching for the meaning of one word. If you have to do one search, you have to go through each page of the dictionary, right? That is order n. Would you prefer your dictionaries to be supplied like that? And that is the answer. The answer is that invariably in real life there are situations where the same data has to be searched again and again and again for different elements, either by the same people or by different people. Take, for example, the results of the students who have passed the university examination, traditionally published in the newspaper. Every student wants to search for their own name in that list. So if there are 50,000 students who are listed, there will be 50,000 searches made by 50,000 different individuals. Each individual is doing only one search.
But there are 50,000 searches that are going to happen on 50,000 copies of the newspaper, or 10,000 copies of the newspaper. Would it make sense to arbitrarily list all the students who have passed? You will always find the list in sorted order. So have I answered your question? Sir. But then in that method where we sorted the entire array, the sorting is taking an extra number of steps. So shouldn't that count in that case? So if it was only a single dimensional array of n elements, then that is what you would do anyway, because there is no other structure available in that particular array. It's a single dimensional array. Here we are saying that there is a two dimensional array, and the exercise currently under discussion is: if there is such an array where individual rows are sorted, then what would be the nature of the C++ program which will search a given element? That's the limited problem that we are trying to solve. So as I said, we are dividing it into two problems. One is this problem. Solve it, write the C++ code. Next we will consider the other alternatives about imposing additional constraints on the array. Alright, so given an input array, it's a two dimensional array of n square elements, and each row is sorted in ascending order. Given an element s, you have to search for that element s in the two dimensional array in the most effective fashion, using the fact that each row is sorted. Anybody who has completed writing the program, come. So Ashu has a program here. Let us hear him talk about it. So in the main program, like, I found that... So what are you doing here? Basically searching whether my search element lies in that row. Which is your search element? N, capital N. Oh, capital N is the search element, not s; that is okay. So you are comparing that with? Whether it lies in a particular row i.
If it is greater than a[i][0] and it is less than a[i][n minus 1], then I am calling a function binary which does the rest of the search within the row. And I am passing the function the array, n, and i. So, array, n and i; i is the row index. So what he does is he first checks whether the given element is greater than this and less than this. If it is not so, what do you do? Control just continues with the for loop. Yeah, so after the for loop, what happens next? So you just have to announce that the number is not there. Fine, okay. Now if he locates a particular row, okay, the point is that after doing this you will come back again here, and you will continue with the next i. No, I will just display the result. If you don't find it, then you will come back again here; the next i will be taken up and you will search again in another row. If you come here, doesn't it mean the element lies in row i? No, this only says that it lies between this number and this number. It still does not guarantee that it will be there. This condition can be met by multiple rows. You get the discussion that is going on here? What he is doing is: he is currently checking, for a given element s, whether it is greater than this and less than this. If it is so, then the element s is likely to be in this row. It is not guaranteed that it is here. Then he does a binary search; that binary search either might find s or might not find s. If it finds s, then he will terminate. But if he does not find s, he will have to come back here. Luckily, the way his program is written, it will increment i and go to this point, because the element may be in the next row. There may be another row which still satisfies the same condition: that s is greater than this element but s is less than this one. Have you generally got the gist of how to approach this problem? But you will have to take care of these nuances. If you find an element, you have to break out completely.
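A complete sketch of the approach under discussion: first test whether g can lie between the row's first and last elements, and only then binary-search that row. The names here are ours, not the student's actual code.

```cpp
#include <vector>
#include <algorithm>

// First filter: g can only be in row i if a[i][0] <= g <= a[i][n-1].
// Several rows can pass this test, so a failed binary search in one
// row must NOT stop the loop -- we fall through to the next row, and
// failure is declared only after all n rows have been considered.
bool searchWithRowFilter(const std::vector<std::vector<int>>& a, int g) {
    for (const auto& row : a) {
        if (row.empty() || g < row.front() || g > row.back())
            continue;                                    // g cannot be in this row
        if (std::binary_search(row.begin(), row.end(), g))
            return true;                                 // found: break out completely
        // not found here, but another row may still satisfy the bounds test
    }
    return false;                                        // announce: not in the array
}
```

The continue at the bounds test is also where the small speed-up over plain row-by-row binary search comes from: rows that cannot contain g are rejected in two comparisons instead of log n.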
If you do not find an element, you still have to go to the next row and search. And you have to declare a failure only after searching all the n rows. Thank you. I would expect you to actually complete this program. I will be putting up a program which you can execute. I will give you the execution timings, just to indicate how merge sort is better than selection sort: the real number of seconds it takes. But there is one more question. Just one more minute. If you want to test this program for really large data, like 50,000 elements in an array, where will you get this data from? Think about this problem. The example that I will upload will illustrate how you can actually do test data generation automatically. All right. Thank you.
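On the test data question, one plausible way to generate it automatically (a sketch under our own assumptions, not necessarily the example the lecturer will upload): fill an n-by-n array with random integers and sort each row, so the "each row sorted" precondition holds by construction.

```cpp
#include <vector>
#include <random>
#include <algorithm>

// Build an n-by-n test array whose rows are individually sorted.
// A fixed seed makes the data reproducible across runs.
std::vector<std::vector<int>> makeTestArray(int n, unsigned seed = 42) {
    std::mt19937 gen(seed);                             // Mersenne Twister PRNG
    std::uniform_int_distribution<int> dist(0, 1000000);
    std::vector<std::vector<int>> a(n, std::vector<int>(n));
    for (auto& row : a) {
        for (auto& x : row) x = dist(gen);              // random fill
        std::sort(row.begin(), row.end());              // enforce the row-sorted precondition
    }
    return a;
}
```

With this, testing on 50,000 or more elements is just a matter of calling the generator with a large n and timing the search.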