So welcome to the afternoon session. I hope you had a nice lunchtime, or whatever, depending on your time zone. I started programming when I was 11 years old, really long ago, and today we have Kautilya. He is seven years old. Is that true? Yeah. Welcome, Kautilya, and I'll hand over to you for computational complexity.

My name is Kautilya, and today we're going to talk about computational complexity. As you already know my name: I am a seven-year-old computing explorer and problem solver. Here on the screen you can see me installing Python when I was around five and a half, and I hold the Guinness World Record title for the youngest computer programmer, at the age of six. I knew different kinds of algorithms even before I started coding, and I have a special love for puzzles and solving problems.

Due to lockdown, I got bored at home, so I discovered how to slide down the stairs, using a piece of cardboard, a mattress, or some pillows. By the way, the pillow thing was unsuccessful; I advise you not to use pillows, but the mattress was rocking. Here's a few seconds of video of me sliding. I like it.

I still had lots of time, so here's how I used it. I used online computing resources from YouTube and courses from Stanford and MIT, and I learned about artificial intelligence from IBM; in fact, I am an IBM certified AI professional. I also explored the data science series and learned how to do data analysis and visualisation using Python from IBM. But my most important achievement of the year is on the next slide. About last week, I got my most important academic certificate: here on the screen is my year two progress report from my primary school. I did pretty well, and I'm excited to move to year three. I also love doing mathematics, swimming, origami, texting, reading, and playing with my friends and my younger brother.

So what are algorithms? Algorithms are basically sets of step-by-step instructions.
And they can be very simple, like just eating an ice lolly: number one, open the freezer; number two, take out the ice lolly; number three, close the freezer; and number four, eat the ice lolly. And they can also be very hard, like making a map of the Andromeda galaxy with one of those early telescopes, which could only look as far as the Andromeda galaxy. The universe also has an algorithm to expand; we just don't know what it is. Algorithms are all around us, and we just keep discovering them. Now, since we didn't make the universe, let's forget about that for a bit. But we made computers, machines, AI technologies, so let's talk about algorithms using programs.

Now, to understand computational complexity, you first have to understand big O notation. And big O notation, actually all of computational complexity, is nothing but: forget about small and think big. And it's simple, because it's just a way to represent something. But what is that something? Let's look at some examples to understand it.

Let's say I'm in a library to find my favourite Sherlock puzzles book. I could just ask the librarian, and he or she will give it to me. That would be order of one time complexity, because I'm doing only one step, and that one step is asking the librarian. The time taken will obviously depend on how fast the librarian is, but big O counts steps, not seconds. There is also space complexity, which is how much memory an algorithm takes, but we're not going into that in detail.

In the next example, let's say there are no librarians, but I still need to find my book. Then I could start searching one by one through all the racks until I find it. This time I'm actually doing n steps.
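The rack-by-rack search just described is a linear search. A minimal sketch of it in Python (the shelf contents and function name are invented for illustration):

```python
def linear_search(shelf, title):
    """Check every book in order until we find the one we want.

    In the worst case we look at every one of the n books,
    so this takes order of n steps.
    """
    for position, book in enumerate(shelf):
        if book == title:
            return position  # found it after position + 1 looks
    return -1  # searched the whole library without finding it


shelf = ["Atlas of Space", "Maths Puzzles", "Sherlock Puzzles"]
print(linear_search(shelf, "Sherlock Puzzles"))  # → 2
```

Note that it works whether or not the shelf is sorted, which is exactly why it can't do better than checking every rack.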
And that's at maximum, I mean when the book is at the very end of the library. So in total you have order of n time complexity, even though it might not be exactly n steps.

In the next example, let's say the books are this time arranged in alphabetical order, a to z. Then linear search is still applicable, but there is a more efficient method, and it is called binary search. Binary search works by keeping on splitting the books in half until it finds the one you want. That is exactly what a divide and conquer algorithm does: divide, conquer, combine. So, the Sherlock puzzles book starts with an s, the middle letter of the alphabet is m, and s comes after m, so we only need to look in the racks from m to z. Then keep on splitting the books in half until you find it. That would be order of log n time complexity, because log n is the number of times you can divide n by two, divide it by two, divide it by two, without getting below one in the integer place.

Now we know what big O notation is; I told you it's simple. We also know what an algorithm is: the combination of steps a program takes. But the efficiency of an algorithm depends on how many steps it takes, if we're talking about time complexity, or how much memory it takes, if we're talking about space complexity. So how do we calculate time complexity? Let's understand that with a simple program. Here on the screen you can see a simple program to calculate the sum of the numbers up to n. This algorithm involves one step to initialise the sum to zero. Then in the for loop there are three operations running n times. Then finally the last line of the program is one more operation. So in total you have one plus three n plus one operations.
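The sum-to-n program described here might look like this (the exact variable names on the slide aren't recoverable from the transcript, so these are a guess; the point is the operation count):

```python
def sum_to_n(n):
    total = 0                  # 1 operation to initialise
    for i in range(1, n + 1):  # the loop body runs n times...
        total = total + i      # ...with ~3 operations per pass: read, add, store
    return total               # 1 final operation
    # Roughly 1 + 3n + 1 operations in total.


print(sum_to_n(10))  # → 55
```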
But when we talk about big numbers, like a million, billion, trillion, quadrillion, quintillion, how do small numbers make any difference? They don't. We can ignore the additive constants, which makes the expression three n, and then remove the multiplicative factors as well, which makes the expression just n. And then you have your time complexity: order of n. I'm going to say the same thing I said a few minutes ago. Yeah, a few minutes ago: time complexity is nothing but forget about small and think big.

No big O presentation should be done without showing this graph, which shows the time complexity of different algorithms and gives an indication of how much, proportionally, the time may grow as the input size grows. As you can see in the graph, order of log n looks almost constant and shows the most efficient time complexity; I just want to say, before I proceed to order of n, that its running time hardly increases as the input grows, so the curve looks almost flat. Now let's move on to order of n. That is also quite efficient, and order of n log n is fairly efficient too, but I cannot say the same thing for n to the power of a constant, and I definitely can't say the same for n factorial or a constant to the power of n. But that's only true when we're talking about big inputs. When the inputs are small, n squared works the same as, or even better than, order of n, while order of log n can perform quite poorly for small inputs. In case you're wondering, my mom helped me draw these graphs; initially I came up with this one.
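The binary search from the library example is the classic order of log n algorithm behind that near-flat curve. A sketch of a plain iterative version (the shelf titles are invented for illustration):

```python
def binary_search(shelf, title):
    """Repeatedly halve a sorted shelf: order of log n comparisons."""
    low, high = 0, len(shelf) - 1
    while low <= high:
        mid = (low + high) // 2
        if shelf[mid] == title:
            return mid
        if shelf[mid] < title:
            low = mid + 1    # the book comes after the middle (s after m)
        else:
            high = mid - 1   # the book comes before the middle
    return -1  # not on the shelf


shelf = sorted(["Maths Puzzles", "Sherlock Puzzles",
                "Atlas of Space", "Zoo Stories"])
print(binary_search(shelf, "Sherlock Puzzles"))  # → 2
```

Unlike linear search, this only works because the shelf is sorted; each comparison throws away half of the remaining books.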
So here there could be n possibilities, but if I get lucky I could find the book in the very first rack, and that is the best case, which is written with big omega notation; in this case it is omega of one. Or I could find it in some middle rack, and that's the average case, which is written with big theta notation; in our example it is theta of n, after the way we calculate the time complexity. And another possibility is that we find it in the last rack, or do not find it at all in the whole library, and that is the worst case, which is big O notation, which we already looked at; so it's order of n. So big O notation is an upper bound, big omega notation is a lower bound, and big theta notation is a tight, average-case bound. There are also two little notations, little omega and little o; they are rough, non-tight estimates of the upper or lower bounds, but they are not used as much as the big notations.

I got this data from one of the books about algorithms, and it shows some interesting figures. As you can see, any algorithm with time complexity n factorial becomes useless for inputs of more than about 20; for 2 to the power of n that's 40; for n squared that's a million, still much better than the other two but still bad. But n log n is quite good, and n is really good, and log n is the best: only 0.03 microseconds for an input of 1 billion. Some examples of log n are binary search, Fibonacci search and exponential search. Radix sort, counting sort and bucket sort are all nearly order of n, but not exactly; linear search is exactly order of n. For n log n there are Timsort, merge sort, quicksort, heapsort and some other divide and conquer algorithms too. For n squared there are bubble sort, selection sort, insertion sort and stooge sort. For 2 to the power of n, there's the brute force way of calculating the nth Fibonacci number. For n factorial, a great example would be Heap's algorithm, which is used for generating all the possible permutations of n objects.

We even have algorithms which have order of infinity time complexity, and one of them is bogosort. The way it works is by finding a random permutation of the list (I didn't say another, I said a, even though picking a different one each time could have improved its time complexity) whenever it finds that the list is not sorted. So it iterates through the list, and if every left element is less than its right neighbour it is done; otherwise it finds a random permutation and checks again.

Now let's look at some algorithms. Insertion sort is the first one. The way it works is by marking the first element as sorted, then it goes through everything else in the array and puts each element where it should be in the sorted subarray. As you can see on the slide, 10 doesn't move forward, because it's greater than 5 and greater than 7. After the whole array has been moved into the sorted subarray, you just need to print out the sorted subarray. You should use insertion sort when the data is extremely small; in fact it's even better than bubble sort for extremely small data. Another case where you could use it is when the data is extremely large but almost sorted, because then it takes only about one iteration of the list, which is its best case, omega of n. But you shouldn't use it if the list is unsorted to a large extent and the data is big, and it does not perform well when the list is in reverse order. Its time complexity is order of n squared and its space complexity is order of 1.

Now merge sort. It keeps on splitting the data in half until you have sequences of length 1, and then merges them back together, and then you have a sorted array. The way merging works is like this:
Look at the first element of both halves. So for 5, 10 and 2, 7, 8: compare 5 and 2; 2 is smaller, so put that in first. Then 7 and 5; 5 is smaller, so put that in next. Then 10 and 7; 7 is smaller, put that in next. Then 8 and 10; 8 is smaller, so put that in next. And then you're done with the second array, so now you just need to add on everything that's left: 10 is left, so just add that on. You should use merge sort when the data is large, but not too large: for C++ that's up to around 10^6 in size, and for Python around 10^10; actually, more specifically, I think it's more like 2^64.

Now Timsort. Timsort is basically a combination of the first two; that's why I put insertion sort and merge sort at the beginning. Timsort first analyses the list and picks whichever works better, insertion sort or merge sort, and that's how it works. You can use Timsort almost everywhere, but again not for more than about 10^6 elements in C++ and 10^10 in Python, because it uses merge sort. Its time complexity is order of n log n and its space complexity is order of n.

And now binary search. You should use it when the data is sorted, and it doesn't work when the data isn't sorted. Its time complexity, I already told you, is log n, and its space complexity is order of 1. And interpolation search: the only difference from binary search is that it divides the data into unequal parts. What happens is that it probes more towards the end if the element is closer to the last element, and more towards the start if the element is closer to the start; at least it does that for sorted and uniformly distributed lists, or arrays, whichever term you prefer. Its average case is actually order of log log n, and that case is when the data is sorted and uniformly distributed. You should use it when the data is like that, and you shouldn't use it when it isn't sorted
or uniformly distributed. Its worst-case time complexity is order of n and its space complexity is order of 1.

Okay, so that's the end of my show. I have a few accounts where I'd love to connect with you, and I also have a YouTube channel called Kautilya Concepts, where I post videos about how to solve different computing problems. You can search for the keywords Kautilya Concepts Python, though it doesn't work if you search just Kautilya Concepts. See you in the Q&A.

Thank you very much for your wonderful talk. There are some questions; let me start with the first one. Did you implement different sorting algorithms in Python and measure the time, using for example the time module? I did that a long time ago for bogosort and merge sort, I think, and maybe insertion sort too.

The next question is: how did you learn Python, and what do you recommend to get started? So I learned Python a bit like a mistake, just a good mistake. It all started when my dad gave me this book about computing, and I loved it so much I finished it all in only one day. I also wrote some basic computer programs. I recommend that you first start by reading books, and then, when you're okay with that, start practising and solving problems. I also recommend solving puzzles, because that might help you too. Yes, that surely helps; by the way, I started programming with a book too, and that's the best way.

The next question is: what is the hardest algorithm you have come up with personally, and what challenges were there? Actually the person who asked should clarify, but I think it means: maybe you programmed something that was really, really hard, and maybe you had some problems while programming it, and how did you solve those challenges? My mom might help me to solve some problems, to solve my problems. Okay, that's great. So let's see if there are
more questions. No, at the moment there are no more questions coming in. So thank you very much again; it was really a pleasure, and I think you are the youngest EuroPython speaker ever. So thank you very much.