Welcome to this video session. In this session, we're going to look at a Python implementation of the fractional knapsack problem. Let us see what outcomes have been planned: by the end of this session, the students watching are expected to be able to provide a Python implementation of the fractional knapsack problem. Before we move to the Python coding: we will be using the PyCharm IDE, and the expectation is that the audience watching this video has some fundamental knowledge of the Python language and a recent stable PyCharm IDE installed; the Community edition is enough for our needs. Before we do any hands-on implementation of algorithms in Python, it's very crucial to know the time complexity of the containers and collections in Python. When we design algorithms, we write pseudocode and calculate the time complexity of the algorithm based on that pseudocode, or on the algorithmic representation in general. But we then implement those algorithms in real languages, Python in our case, though it could be C, Java, or some other language. Most developers simply presume that a specific operation in Python, maybe sorting, searching, or finding the minimum element in a list or some other container, has a specific complexity, and this very often results in a bad implementation. So it's very important that we understand what those operations are and what their time complexity is in Python's CPython runtime.
So we will go through Python's standard time-complexity documentation, which lists the time complexity of various core operations on containers and collections from the Python standard library and from the collections module as well. Apart from that, we'll also look into a Python module called heapq, which lets me build min-heaps and which also gives me a priority-queue implementation in Python. We'll touch on that aspect as well and see what exactly the complexity of a heap or priority queue constructed using that module will be. So let me switch to Python's time-complexity page. Okay, we are now on the page in the standard Python docs where you can see that these complexities are based on the current implementation of CPython, and it is Python 3 we are looking at. Now, what I was talking about as the more important aspect you need to concentrate on when you give a Python implementation of an algorithm is these complexities. For example, if you take a list and append an element at the end of that list, then the complexity of that operation is O(1) in the average case. For a theoretical treatment that may be enough, but when you are writing an algorithm for practical use it's also very important to understand the amortized worst-case complexity; for now, though, we will skip that and talk about the average-case complexities. So append is O(1). But you can see that when you instead insert at the beginning, suddenly the complexity is much higher. Likewise, "pop last", removing the last element from the list, is O(1), but popping a middle element of the list has complexity O(n).
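A minimal sketch of the list complexities just discussed (the specific values here are my own illustration, not from the video):

```python
# CPython list complexities in practice:
items = [1, 2, 3]
items.append(4)      # O(1) amortized -- append at the end
items.pop()          # O(1) -- remove the last element
items.pop(0)         # O(n) -- every remaining element shifts left
items.insert(0, 0)   # O(n) -- same shifting cost at the front
print(items)         # [0, 2, 3]
```

The asymmetry is the key point: the same conceptual operation (insert/remove one element) costs O(1) at one end of a list and O(n) at the other.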
So you can see that if our algorithm implementation performs that operation, we have to count it as O(n), not O(1). That's a common misconception among developers and novices beginning with a language: they keep in mind that their algorithm and their program have equal complexity. No, it depends on how well you correlate Python's (or any language's) internal implementation with your algorithm's derived complexity. Since the hands-on Python implementation we're going to see depends on Python, it's very important that we look at this chart. That was for the list. If you want a double-ended queue, the best data structure is the deque, which gives you appendleft and append, that is, insertion and deletion at both ends of the container, in O(1). So that is what is recommended. Two more operations we use very frequently are sorting and searching in a list. When you search in a list, for example `x in s`, it is O(n). Finding the minimum or maximum of a list is also O(n). But, surprisingly, getting the length of a list is O(1), quite contrary to the common developer perception that finding the length of a list would be an O(n) operation; it's a beauty of Python that len() is O(1). Next is the set: whether you take the symmetric difference or test whether an element is present in the set, the average case is O(1), and we need such operations very often in algorithm implementation. And then the dictionaries: in the average case, a dictionary is always O(1) for get-item, set-item, and delete-item, and it's the average case we should mostly worry about, though you can see that the amortized worst-case complexity is O(n). Okay, fine, we'll not give much attention to that; we'll rely on the average-case assumption.
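The deque and membership points above can be sketched as follows (the sample values are my own):

```python
from collections import deque

# deque: O(1) appends and pops at BOTH ends, unlike list.pop(0), which is O(n).
d = deque([2, 3, 4])
d.appendleft(1)       # O(1)
d.append(5)           # O(1)
first = d.popleft()   # O(1) -- would cost O(n) on a list via pop(0)
print(first)          # 1

# Membership: O(n) on a list, O(1) average on a set.
values = list(range(1_000_000))
values_set = set(values)
print(999_999 in values_set)  # True -- hash lookup, O(1) average
print(len(values))            # 1000000 -- len() is O(1)
```

On a million-element container, the difference between the O(n) list scan and the O(1) set lookup is what turns a "correct" program into a slow one.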
So iterating over a dictionary is O(n), and copying a dictionary is also O(n). Don't think that creating a copy of a dictionary is an O(1) operation; no, whenever you create a copy of a dictionary, it is O(n). Another very important operation is sort, which we use very frequently in our algorithms. As you can see, for sorting the complexity is O(n log n), and it is based on the Timsort implementation. Very important: the built-in sort, whether list.sort or sorted, is always O(n log n) in Python. So much for the time complexity of the basic operations. As I said, we may also be using another module, heapq; most algorithms do require you to employ priority queues and heaps. This module gives you a min-heap representation based on the textbook heap algorithms, although its API differs from the textbooks: it gives you a zero-indexed, array-based min-heap. You essentially get a heapsort-style heap implementation: sorting with it is O(n log n) overall, the standard heap operations of retrieval and insertion are O(log n), and building the heap with heapify is O(n). This is very beneficial when you want to implement a heap, specifically when you talk about the optimal file merge pattern or some other algorithm that requires maintaining a heap or tree-based structure. So, this is a program to solve the fractional knapsack problem. As you know, the fractional knapsack problem tries to maximize the profit for a given sack capacity. In order to solve it, we have to sort the items by the ratio of profit to weight. That is how we do it. I initially wrote a custom function to define the sorting logic, but later I replaced it with a lambda.
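A short sketch of the heapq API mentioned above (the sample values and task names are my own illustration):

```python
import heapq

# heapq builds a zero-indexed binary min-heap on top of a plain Python list.
data = [9, 4, 7, 1, 3]
heapq.heapify(data)             # O(n) -- turns the list into a min-heap in place
heapq.heappush(data, 2)         # O(log n) insertion
smallest = heapq.heappop(data)  # O(log n) removal of the minimum
print(smallest)                 # 1

# A simple priority queue: (priority, task) tuples compare element-wise,
# so the lowest priority number comes out first.
pq = []
heapq.heappush(pq, (2, "write report"))
heapq.heappush(pq, (1, "fix bug"))
print(heapq.heappop(pq))        # (1, 'fix bug')
```

Because the heap is just a list, there is no separate heap class; the module's functions maintain the heap invariant on whatever list you hand them.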
So you can either use a lambda or give the sorting logic directly via a named function; either works, but I have preferred to write a lambda here. Let M denote the maximum sack capacity and let the total price start at zero; the total price is what we have to calculate at the end. The list of items is represented as pairs, each written as (p, w): the profit and weight of item one, the profit and weight of item two, and so on. Now, as you all know, to solve this with the fractional approach and maximize the profit, I always tend to take next the object with the highest profit-to-weight ratio. So in Python, I use the sorted() function. Since the required order is slightly different from the natural ordering of the list elements, I provide a key, which is nothing but item[0] / item[1], that is, profit divided by weight, so the list is sorted on the ratio of profit to weight. And since I want this in descending order, I pass reverse=True. This makes sure the items are sorted in descending order of profit-to-weight ratio. Once that step is done, the only thing left is to take each item from the sorted list and keep reducing the remaining sack capacity, so that either you end up exploring all the sorted items or you end up exhausting the sack capacity. Under either condition you come out of the loop, and then you can print the total price, which is the maximum possible total price under the given conditions of sack capacity M = 15 and the given item list. So now, let me execute this.
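The program on screen is not reproduced in this transcript, so here is a sketch of the approach just described, using a sample item list of my own invention (not the list from the video, so the printed total differs from the video's result):

```python
# Fractional knapsack via a greedy choice: sort items by profit/weight
# ratio (descending), then fill the sack, taking a fraction of the last
# item if it does not fit whole.

def fractional_knapsack(items, capacity):
    # Sort by profit-to-weight ratio, highest first -- O(n log n).
    items = sorted(items, key=lambda item: item[0] / item[1], reverse=True)
    total_price = 0.0
    for profit, weight in items:
        if capacity == 0:
            break                           # sack is full
        if weight <= capacity:
            total_price += profit           # take the whole item
            capacity -= weight
        else:
            # Take only the fraction that still fits.
            total_price += profit * (capacity / weight)
            capacity = 0
    return total_price

# Sample items as (profit, weight) pairs; sack capacity M = 15.
items = [(10, 2), (5, 3), (15, 5), (7, 7), (6, 1), (18, 4), (3, 1)]
print(round(fractional_knapsack(items, 15), 2))  # 55.33
```

The loop exits exactly as described in the video: either every sorted item has been considered, or the remaining capacity drops to zero.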
So if you execute this, you get 26, and 26 is the right answer. Later on you can pause the video, work through this yourself, and verify that it is the right answer. So this is the implementation of the fractional knapsack program, and you can try to guess which algorithm-design paradigm is being used here. As a hint: since I want to maximize something, I look for a greedy-like approach. Okay, now that we have seen the Python implementation of the fractional knapsack problem, can you guess what design paradigm we have used? You can pause the video at this point and try to guess the answer. The right answer is that we have used the greedy approach. So that's it for this video.