Hello, 231 students. This video is going to introduce you to this week's topic, which is sorting algorithms. So we have studied searching, which is kind of the other big algorithm we wanted to look at. Sorting, on the other hand, is, well, you probably have a good inkling of what this is. Sorting is the process of rearranging elements in a collection into a specific order, usually their natural ordering. So we mean, you know, numerical order from least to greatest, alphabetical order, that sort of thing. That's all sorting is, right? And so sorting has natural definitions for things like numbers, for things like letters, and even for things like strings. But for things like user-defined classes, like the notion of a book or a person, well, what does it mean to sort such a thing? There's no natural ordering there, which means you will have to define it, and we'll talk a little bit about what that means. But really, sorting is putting things in order, right? Now, as computer scientists, we care about efficiency. As a computer scientist, you have to implement algorithms that solve problems, and sorting is a very common problem in computing. Now, when we talk about efficiency, what are we talking about? We are talking about big O: big O of n, big O of n squared, whatever it happens to be. And of course, as always, we want things to be as efficient as possible, okay? So when we talk about sorting efficiency, what I want you to have in your head here is that you're going to be sorting a collection or a list of things, right? So you've got, say, an array-based list in Python that you've initialized with a bunch of characters, and now you want to sort it, okay? So how do you think about the efficiency of sorting a list like that? There are two considerations. The first is: how many comparisons do we make between elements to figure out their right order, okay?
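To make those two costs concrete, here's a tiny sketch of my own (not an algorithm from the course) that counts comparisons and movements during a single left-to-right pass over a small list of characters:

```python
# A tiny illustration of the two costs of sorting: comparisons and movements.
# This single left-to-right pass is NOT a full sort; it just shows where
# each kind of work happens.
data = ['d', 'b', 'c', 'a']
comparisons = 0
movements = 0

for i in range(len(data) - 1):
    comparisons += 1                    # one comparison between neighbors
    if data[i] > data[i + 1]:
        # one movement: swap the two neighbors into relative order
        data[i], data[i + 1] = data[i + 1], data[i]
        movements += 1

print(comparisons, movements)  # 3 3
print(data)                    # ['b', 'c', 'a', 'd'] -- closer to sorted, not done
```

A full sorting algorithm repeats work like this until the list is in order; how many total comparisons and movements it ends up doing is exactly what the big-O analysis captures.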
So the number of comparisons we're doing, kind of like we did in search. In search, we cared about the number of comparisons. Here, again, in sorting, you also have to do comparisons to figure out, hey, is this thing less than this other thing? The second thing you care about is the number of movements you make to rearrange the elements in the collection, okay? So not only do we compare, we have to move things around. Maybe they get swapped; maybe we put things at the end of the list. How we do the movement and how we do the comparison determines the efficiency of sorting, okay? So Python, as you may know, already provides you with two kind of baked-in ways to sort. The first is a list's sort method, and the second is the built-in function sorted. So why don't you fire up PyCharm and just run these so that you can see what they do. The slides are posted, so you can go to the slides and just kind of grab the code. In the meantime, I'm going to grab the code and swap over and get started, but I'd like you to follow along with me as well in your own code. All right, so grab this code and come over here to PyCharm. You can see I already have it pasted in here. Let's make this a little bigger so you all can see it. Okay, so there are two sorting implementations kind of baked into Python. The first: we've got a list, right? And if you look at this list, it is not in any sort of order. And then we call our variable x dot sort, okay? So you'll notice we're not capturing a return value here; we are just calling x dot sort. So what do you expect to see here? Well, I would expect this call to sort it. Let me go ahead and run it. It does sort it. And this sort is called an in-place sort, right? So the value of x is rearranged; it is changed around. And so after this call, assuming the sorting is successful, x will be in order, right? So the other option is this sorted function.
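Here's a minimal sketch of that in-place sort; the example values are my own, not necessarily the ones on the slide:

```python
# A list in no particular order, like the one in the demo.
x = [5, 2, 9, 1, 7]

# .sort() is an in-place sort: it rearranges x itself and returns None,
# which is why we don't capture a return value.
result = x.sort()

print(x)       # [1, 2, 5, 7, 9] -- the original list is now in order
print(result)  # None -- there is no sorted copy to capture
```

This is also why a common mistake, writing `x = x.sort()`, throws away your list: you'd be replacing it with None.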
Okay, so I've got my same list here. I've put it in a y variable now, and I'm saying z gets sorted of y. Okay, so this is not an in-place sort. What happens here when you call the sorted function, right, is that Python copies y, sorts the copy, then places the sorted copy in z. Okay, so this is beneficial because y will retain its original order, whereas z will be sorted. Okay, so let's run this and see it. So here's y in its original order, and z is now sorted. Okay, z looks a lot like x. Now this is beneficial because there may be some use cases where you're writing a program to process some data, where you do want a sorted version of the data, but you also need to keep the original order preserved. Maybe it's useful for some other thing. Maybe it's like, I don't know, maybe it's a queue, and you don't want to rearrange those people necessarily, but now that you've got this sorted copy, you can find out pretty easily what the smallest value is; that's going to be the value at the beginning of the list. You can find the biggest value, and you can find the median value in the middle, right? So there are cases where you may want to preserve the original order of the thing, which y does, but you also want a sorted copy of it. Okay, so that's the difference between sorted and calling dot sort on a collection. Dot sort is an in-place sort; it rearranges the original. Sorted gives you a copy. All right, but note that sorted returns a value, so you have to store it somewhere. If you don't store it somewhere, you know, y is still going to be unsorted. All right, so let's go back to the slides for a minute. So as I mentioned, Python has two built-in sorting methods, and we just talked a little bit about the difference between them. These two sorting methods both depend on the elements or objects you are sorting having the less-than operator defined, right?
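Here's a minimal sketch of the sorted function in action, including the smallest/biggest/median trick just described (example values are my own):

```python
y = [5, 2, 9, 1, 7]   # original data whose order we want to preserve
z = sorted(y)         # sorted() builds and returns a NEW sorted list

print(y)  # [5, 2, 9, 1, 7] -- y keeps its original order
print(z)  # [1, 2, 5, 7, 9] -- z is the sorted copy

# Because z is sorted, these values are easy to read off:
smallest = z[0]
largest = z[-1]
median = z[len(z) // 2]   # middle element (for an odd-length list)
print(smallest, largest, median)  # 1 9 5
```

Note that if you had just written `sorted(y)` on its own line without storing the result, the sorted copy would be discarded and you'd be left with only the unsorted y.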
So the way the comparisons work, and we don't see the exact algorithms under the hood, is they're kind of looking at each item in the list and in some way saying, hey, is this item less than this other item, right? So the comparisons are all based on less-than. For integers, if you're sorting a list of integers, well, integers have a natural meaning for what less-than means. For characters and strings, less-than has to do with alphabetical order. But if you have a custom class, like say a bank account, and a bank account has a variable in it for first name, last name, account number, and balance, it's up to you to define what one bank account being less than another means. Maybe you will find it useful to sort bank accounts by the account number, or maybe you would find it useful to sort bank accounts by the balance. You can do either; you specify the logic by defining or overriding underscore underscore LT underscore underscore, the __lt__ method. LT, of course, stands for less than, right? So you can tell Python how to sort your custom classes. All right, so here's the question you're no doubt asking: if Python has built-in sorting, why are we studying it? Well, first, sorting is one of those fundamental problems in computer science, a lot like searching. If you really understand the complexities of sorting and searching, you know how to think like a computer scientist; you know how to look at the trade-offs of different algorithms. The other reason is that there are many different sorting algorithms out there, and they are suited to different scenarios. And we're going to talk about some of that, okay? Some sorting algorithms are faster than others; they've got better time complexity. You know, some are big O of n log n, some are big O of n squared, and some can even be big O of n, depending on the data type. Some sorting algorithms use more memory than others, right?
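Here's a sketch of what defining __lt__ on that bank account class might look like; the exact field names and sample data are my own choices for illustration:

```python
class BankAccount:
    def __init__(self, first_name, last_name, account_number, balance):
        self.first_name = first_name
        self.last_name = last_name
        self.account_number = account_number
        self.balance = balance

    # Tell Python what "one account is less than another" means.
    # Here we choose balance; to sort by account number instead, return
    # self.account_number < other.account_number.
    def __lt__(self, other):
        return self.balance < other.balance

    def __repr__(self):
        return f"{self.last_name}: {self.balance}"


accounts = [
    BankAccount("Ada", "Lovelace", 101, 250.0),
    BankAccount("Alan", "Turing", 102, 75.0),
    BankAccount("Grace", "Hopper", 103, 500.0),
]

accounts.sort()   # works only because __lt__ is defined on BankAccount
print(accounts)   # [Turing: 75.0, Lovelace: 250.0, Hopper: 500.0]
```

Both list.sort and sorted rely on this same less-than hook, so defining __lt__ once makes the class work with either.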
So you may have a fast algorithm, but it takes extra memory in order to be fast. And if you have enormous data sets, like gigabytes or terabytes, memory-inefficient algorithms are not appropriate. And then also you may need to sort things in a fixed-memory environment, like an embedded system or a little tiny device like your refrigerator. So there are lots of different sorting algorithms, and we're going to expose you to some in this course. The ones that we will talk about this week, and I'll have a different video for each one of these so you can kind of look at them piecemeal, are bubble sort, selection sort, insertion sort, merge sort, and quick sort. So we'll take a look at all of these in future videos. And your goal here, your goal in this module, is to understand how these algorithms work. What's their basic principle? And then you also want to know: what's their time complexity, their big O? Are they what we call a stable sort? We'll introduce that concept when we get to it. Are they an in-place sort? In other words, can they work in just the memory that's allocated to the list, or do they need extra space, right? And when are they good, and when are they bad? We'll find that certain preconfigurations of the list matter: maybe if a list is already mostly sorted, some of these algorithms perform very poorly, but if it's not sorted, they do well, okay? And the thing is, the computer doesn't know what state the list is in when it goes to sort it. All right, so that's our introduction to sorting. We're going to go through each one of these algorithms one by one, and I will see you then.