Welcome to this video session. In this session, we're going to look at a Python implementation of optimal storage on tapes, a classic problem in algorithm design and analysis. So let us see what outcome has been planned for this session. The learning outcome is that at the end of this session, students will be able to provide a Python implementation for the optimal storage on tapes problem. In order to demonstrate the implementation, we will adopt a live hands-on approach: using the PyCharm IDE, I will solve the problem by writing the Python code for optimal storage on tapes. As a prerequisite, you need the fundamentals of the Python language, and it is also pertinent that you have a recent stable edition of the PyCharm IDE installed. Before we do any hands-on implementation of algorithms in Python, it is very crucial to know the time complexity of the containers and collections in Python. When we design an algorithm, we write pseudocode and calculate the algorithm's time complexity based on that pseudocode, or on the algorithmic representation in general. But when we implement those algorithms in an actual language (Python in our case, though it could be C, Java, or something else), most developers simply presume that a specific operation, say sorting, searching, or finding the minimum element in a list, has some particular complexity. Most often this assumption results in a bad implementation. So it is very important that we understand what these operations are and what their time complexity is in Python's CPython runtime.
So we will go through Python's standard time complexity page, which lists the time complexity of the core operations on containers and collections from Python's standard library and from the collections module as well. Apart from that, we'll also look into a Python module called heapq (the heap queue module), which lets me build min-heaps and also gives me a priority queue implementation in Python. We'll touch on that aspect as well and see what the complexity of a heap or priority queue constructed using that module will be. So let me switch to Python's time complexity page. We are now on a page in the standard Python documentation, and you can see here that these complexities are based on the current implementation of CPython, and that we are looking at Python 3. Now, the most important thing to concentrate on when you give a Python implementation to an algorithm is these complexities. For example, if you take a list and append an element at the end of that list, the complexity of that operation is O(1) in the average case. The average case is what matters for a theoretical discussion, but when you are writing an algorithm for practical use it is also important to understand the amortized worst-case complexity; for now, though, we will skip that and talk only about the average-case complexities. So append is O(1), but notice that if I insert at the beginning instead, the complexity is suddenly much higher. You can see here that "pop last", removing the last element from the list, is O(1), but popping any element from the middle of the list has complexity O(n).
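The list behavior just described can be sketched in a few lines; the variable names and values here are purely illustrative:

```python
lst = list(range(5))   # [0, 1, 2, 3, 4]
lst.append(5)          # O(1) amortized: add at the right end
lst.pop()              # O(1): remove from the right end
lst.pop(0)             # O(n): every remaining element shifts left
lst.insert(0, -1)      # O(n) for the same reason
print(lst)             # [-1, 1, 2, 3, 4]
```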
So you can see that in our algorithm implementation, if we perform that operation, we have to count it as O(n), not O(1). That is the most common misconception among developers, or novices beginning with a language: they keep in mind that their algorithm and their program have equal complexity. No. It depends on how well you correlate Python's (or any language's) internal implementation with your algorithm's derived complexity. Since the hands-on Python implementation we are going to see depends on Python, it is very important that we look at this chart. So this is for the list, and you can see here that if you want a double-ended queue, the best data structure is the deque, which lets you append-left and append-right, that is, insert and delete at both ends of the container, in O(1). That is what is recommended. Two more operations we use very frequently are sorting and searching in a list. When you search in a list, for example `x in s`, it is O(n). Finding the minimum or maximum of a list is also O(n), but surprisingly, getting the length of a list is O(1), quite contrary to the common developer perception that finding the length of a list would be an O(n) operation. It is a beauty of Python that it gives you the length of a list in O(1). Next is the set. You can see the set operations such as difference listed there, and testing whether an element is present in a set is O(1) on average; we very often need such operations in algorithm implementation. Then there are the dictionaries. In the average case, dictionary operations are always O(1): get item, set item, and delete item. That is the case we mostly care about; the amortized worst case is O(n), but we will not give much attention to that and will rely on the average-case assumption.
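As a small illustration of the points above (O(1) operations at both ends of a deque, O(1) set membership, O(1) length), here is a sketch with made-up values:

```python
from collections import deque

d = deque([2, 3])
d.appendleft(1)              # O(1), unlike list.insert(0, x) which is O(n)
d.append(4)                  # O(1) at the right end as well
print(d.popleft(), d.pop())  # removes from both ends in O(1)

s = {10, 20, 30}
print(20 in s)               # O(1) average-case membership test, vs O(n) for a list
print(len([5, 6, 7]))        # len() is O(1) for the built-in containers
```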
Iterating over a dictionary is O(n), and copying a dictionary is also O(n). Don't think that creating a copy of a dictionary is an O(1) operation, no: whenever you create a copy of a dictionary, it is O(n). Another very important operation, which we use very frequently in our algorithms, is sort. As you can see, the complexity of sorting is O(n log n), and it is based on the Timsort implementation. Very importantly, note that the built-in sort, whether it is list.sort or sorted, is always O(n log n) in Python. So that covers the time complexity of the basic operations. As I said, we may also use another module, heapq, since most algorithms do require employing priority queues and heaps. This module gives you a min-heap representation, and it is based on the textbook heap algorithms, although its API differs from the textbooks in that it gives you a zero-indexed min-heap. So it is essentially a binary heap implementation given to you, with the standard heap complexities: retrieval and insertion are O(log n), heapifying a list is O(n), and sorting n elements through a heap is O(n log n). This is very beneficial when you want to implement a heap yourself, specifically for problems like optimal file merge patterns or other algorithms that require maintaining a heap or tree-based structure. Now, to the problem itself. To begin with, we are not writing the exact standard approach directly; we are looking at a Python program which paves the way for us to find the exact solution. As an example, suppose we have songs stored on tapes: song one of size 30 bytes, song two with its size, and so on, each song with its proper size.
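A quick sketch of the heapq operations mentioned above, with illustrative values:

```python
import heapq

data = [5, 1, 9, 3]
heapq.heapify(data)             # O(n): rearrange the list in place into a zero-indexed min-heap
heapq.heappush(data, 2)         # O(log n) insertion
smallest = heapq.heappop(data)  # O(log n): always removes the current minimum
print(smallest)                 # 1
print(data[0])                  # 2: the new minimum sits at index 0
```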
What I need to do is find the order in which the songs have to be stored so that the mean retrieval time for all the programs (the songs) stored on the tape is the minimum possible. So I define a method, mean retrieval time, which calculates the total number of reads for a given storage order. For example, you might say that this program is stored first, this one second, this one third, and this one fourth; given that order, the method gives you the total read operations. So if I give an order like one, two, three, four, what I do is create all possible permutations of it, pass each permutation to this method, and find the read operations for each. Now, when we execute this program, it prints the read count for every possible storage order of these programs on the tape. If I execute it, you should see the various orders and the read operations each one costs. If you look here, you can scan for the minimum read operations: 66, 65, and so on, down to smaller values. Here is 38, and let us see... here I see 37, which is even less than 38. So this is the minimum retrieval time, and it is achieved only for this order, which means I need to first store the fourth program, then the third, then the first, and then the second. And if you look at it, that is nothing but the order you get if the songs are sorted by their size, or you could call it the length of the songs or programs. That is how you solve optimal storage on tapes. So instead of generating all permutations and finding the minimum, that is, instead of going for the brute-force approach, you can alternatively sort the programs by their length and directly pass that order to the method.
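The brute-force program described above can be sketched roughly as follows. The song sizes are illustrative (only the 30-byte first song is stated in the video), so the printed numbers will differ from the ones shown on screen:

```python
from itertools import permutations

# Illustrative song sizes in bytes; index i is program i.
sizes = [30, 10, 25, 5]

def mean_retrieval_time(order):
    """Total reads for a storage order: retrieving each program reads
    every program stored before it plus itself (a running prefix sum)."""
    total = prefix = 0
    for i in order:
        prefix += sizes[i]
        total += prefix
    return total

# Brute force: evaluate every permutation and keep the cheapest order.
best_order = min(permutations(range(len(sizes))), key=mean_retrieval_time)
print(best_order, mean_retrieval_time(best_order))
```

With these sizes the cheapest order stores the smallest program first and the largest last, matching the observation in the video.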
So for example, this method remains the same; to find the order, instead of generating all the permutations, you just sort the programs, take that order, and compute the result. That is the exact approach to solving optimal storage on tapes. And if you look at it, every time I am trying to pick the song with the least length: since I want to minimize the number of read operations, I always pick the program with the smallest size first, so that I tend to read the fewest bytes. So you can see a kind of greediness in the approach, and I think you can now guess the algorithm design paradigm with which this problem can be solved. That is it for the hands-on part of this video session. Let's move to the next slides. Okay, so we have seen the program in the PyCharm IDE. As a quick recap or reflection, what do you think: what is the algorithm design paradigm we have used in implementing optimal storage on tapes? You can pause the video at this moment, analyze the approach, and guess the answer. The answer is that it is the greedy approach. So that's it for this video. Thank you.
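The greedy alternative described above is then a one-line change: sort by size instead of enumerating permutations. The sizes are again illustrative:

```python
sizes = [30, 10, 25, 5]  # illustrative song sizes, as before

def mean_retrieval_time(order):
    """Total reads for a storage order, as a running prefix sum."""
    total = prefix = 0
    for i in order:
        prefix += sizes[i]
        total += prefix
    return total

# Greedy: storing programs in ascending order of size minimizes the
# total (and hence mean) retrieval time, in O(n log n) instead of O(n!).
greedy_order = sorted(range(len(sizes)), key=sizes.__getitem__)
print(greedy_order, mean_retrieval_time(greedy_order))
```

This reaches the same minimum cost as the permutation search, which is the point of the greedy paradigm here: a locally smallest choice at each step yields the globally optimal order.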