Alright guys, so I'm going to explain how asymptotic complexity works, because a lot of people don't actually understand time complexity as well as they think they do. They just see a for loop and go, "oh, this is O(n)," or they see two nested for loops and go, "oh, that's O(n²)."

So first of all, let's say I give you an algorithm. Say you found out something really cool — right now there's the coronavirus, right? There are so many people out there with the virus, you're trying to track them down, and you finally developed this novel algorithm that's able to track all these people. Well, how can we measure the speed of this algorithm across different devices? One device could be way faster than another — a faster CPU finishes sooner, and a slower machine takes longer. So what do we do?

First, we run the algorithm on a bunch of inputs. We pick a range of input sizes — that's n, the input size — like 10 data points, 20 data points, whatever — and for every single one we measure how long the algorithm takes to run. Then we graph every data point against its running time, and we might get something like a cloud of points. Well, we can't really analyze a raw cloud of points, right? We can't read the speed off of that. So what do we do? We approximate it.
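The measure-and-graph experiment described above can be sketched in a few lines of Python. This is just an illustration — `count_pairs` is a made-up stand-in for "your algorithm" (deliberately quadratic so the growth is visible), and `measure` collects the (n, seconds) points you would then plot.

```python
import time

def count_pairs(data):
    """Hypothetical 'algorithm': compares every pair of elements (quadratic)."""
    count = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] == data[j]:
                count += 1
    return count

def measure(sizes):
    """Time the algorithm at each input size; return (n, seconds) points to plot."""
    points = []
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        count_pairs(data)
        points.append((n, time.perf_counter() - start))
    return points

points = measure([100, 200, 400, 800])
for n, t in points:
    print(n, t)
```

Plotting those points (with matplotlib, a spreadsheet, anything) gives exactly the kind of scatter the text is talking about.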
We create a mathematical approximation function from your algorithm's measurements — connect all the dots — and call it f(n). This function might be some crazy, ridiculous expression: 5·sin²(3.2πn) plus, I don't know, some weird starting value, plus tan⁻¹(π) times 24 or whatever. What can we do with that? It's very hard to analyze, with all those crazy constants interfering.

So what do computer scientists do? They find a boundary: they bound the function between two lines. If we bound f(n) between these two lines, it is much, much easier to analyze, and when the input size gets huge — a hundred thousand users, a million, a billion — it's much easier to find the time it takes. Let's call the lower line c₁·g(n), where c₁ is just a constant multiplying some function g(n), and the upper line c₂·g(n) — the exact same function g(n) as the lower line, just with a different constant. If we're able to squeeze f(n) between these two lines, it's much easier to analyze, if you think about it.

So what do we call this? We call it theta notation: f(n) = Θ(g(n)). So what does that mean?
It means that f(n) stays within a constant factor of g(n) on both sides — g(n) is an asymptotically tight bound for f(n). First of all, we know g(n) can't be negative. A negative running time just doesn't make sense: if you input a million users, the time it takes should increase. If it got faster the more data points it had to process, that would mean you're running on some supercomputer or something — it doesn't make any sense. So the function has to be non-negative.

Here's what the notation means formally: f(n) = Θ(g(n)) if there exist positive constants c₁ and c₂ and a starting point n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀. In other words, once n passes n₀, c₁·g(n) stays below f(n) and c₂·g(n) stays above it. That's what this theta notation means, okay?

Now let's look back at that equation with all its constants. If you were to keep graphing many, many data points, you'd see that the tan⁻¹(π) and the sin²(3.2πn) and most of those constants don't matter anymore as n approaches infinity. What really matters is the pair of bounds, c₁·g(n) and c₂·g(n).
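The theta definition can be sanity-checked numerically. Everything here is a hypothetical example I'm choosing to illustrate it: a "messy" function f(n) = 3n + 10·sin(n), the candidate bound g(n) = n, constants c₁ = 2, c₂ = 4, and starting point n₀ = 10.

```python
import math

def f(n):
    # A messy function with a wobbly sin term, like the crazy one in the text.
    return 3 * n + 10 * math.sin(n)

def g(n):
    return n

c1, c2, n0 = 2.0, 4.0, 10

# Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 (up to a cutoff).
ok = all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print(ok)  # True: with these constants, f(n) is squeezed between 2n and 4n
```

The sin wobble and the factor of 3 get absorbed by the two constants, which is exactly the point: f(n) = Θ(n).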
Okay, that's what really matters. So we can actually throw those constants away — they don't matter anymore — and just call it Θ(n). The reason it's Θ(n) is that there exist two constants that bound the function n against f(n). In this case our g(n) is just n, one straight line, and there exist two constants that squeeze that crazy, ridiculous function f(n) between c₁·n and c₂·n. Okay, so that's what this means.

Now let's talk about O notation. Say we graph the same time function as before: I give you a bunch of input values, you graph how long your algorithm takes, and your function looks something like this. If I can find another function that bounds it from above — an upper bound for f(n) — then I can analyze it. Before, we tried to find two constants squeezing f(n) around g(n); this time we're just trying to find one function that bounds f(n) from above. Call that function g(n), times one constant.

We call this big O notation: f(n) = O(g(n)) if there exist a constant c > 0 and a starting point n₀ — say, where the two curves cross — such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. That's the definition behind saying the time complexity is O(n) or O(n²).
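That big O definition can also be checked numerically. Again, these are hypothetical choices just for illustration: f(n) = 10n, the candidate bound g(n) = n², constant c = 1, and the crossing point n₀ = 10.

```python
def f(n):
    return 10 * n      # a linear time function

def g(n):
    return n * n       # the candidate upper-bound shape

c, n0 = 1, 10

# The bound 0 <= f(n) <= c*g(n) holds for every n >= n0 ...
holds_after = all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))

# ... but fails below n0, which is why the definition needs a starting point.
fails_before = [n for n in range(1, n0) if f(n) > c * g(n)]

print(holds_after)   # True
print(fails_before)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Notice the bound only kicks in after the crossing point — that detail comes back in the next part.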
When we say something is O(n), it means there exists a constant, multiplied by n, that bounds your algorithm's time function from above. Whatever that function is, as n approaches infinity, the constant times g(n) stays above your algorithm's time function. That's what big O means: it's an upper bound for the function.

And as you can see here, initially the big O function might not sit above your real function — for small inputs the real function can grow faster or slower than the bound — so big O might not tell you much for small values of n. It's really about the rate of increase; that's basically what big O notation means.

You can also write it with set theory: when f(n) = O(g(n)), you can say f(n) is a member of the set O(g(n)), f(n) ∈ O(g(n)). And Θ(g(n)) is a subset of O(g(n)): the theta functions are squeezed between two constant multiples, while the big O functions are only bounded above, so the smaller, tighter set is a subset of the larger one. Okay, that's what that means. Big O notation means there's an upper bound on your algorithm's running time.

There's also another case to think about: omega notation. Omega notation is a lower bound. I'll draw it out — omega notation looks like this.
This is your time function, and this is the omega function — it's bounded below. Technically you could write down any function that bounds yours from below, as long as you can find a constant for it. But most of the time we don't really care about this, because the lower bound usually doesn't tell us much — sometimes it does. I'll just write the mathematical notation: f(n) = Ω(g(n)) if there exist c > 0 and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀, where n₀ is just where they intersect. Yeah, that's what big omega notation means.

Another thing we can think about is little o notation. Sometimes you find an upper bound that's actually not tight against your f(n). Say we bound 2n²: 2n² = O(n²) is a tight bound, right? There exists a constant c such that c·n² sits above 2n², and it hugs the function closely. But you could also say 2n = O(n²). You could write that — it's not wrong, there does exist a constant times n² that is above 2n — but it's not tight. It's not a tight bound at all. So what do we do? We use little o notation.
So little o notation describes an upper bound that is not tight — it's not going to squeeze against the function. Here's how it's defined mathematically: f(n) = o(g(n)) if for every constant c > 0 there exists an n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀.

So for example, 2n = o(n²): for any constant you pick, c·n² is eventually above 2n, and it's never a tight bound. But you cannot say 2n² = o(n²) — that doesn't make sense, because n² is a tight bound for 2n²; there does exist a constant such that c·n² is above 2n², and that bound is tight.

So yeah, that's all of the asymptotic notations. You might say big O and little o look similar, but the main difference is this: the big O bound only has to hold for some constant c, while the little o bound has to hold for all constants. When we say f(n) = O(g(n)), the bound f(n) ≤ c·g(n) is true for some c > 0. For little o, f(n) < c·g(n) has to hold for every constant c > 0. In little o notation, f(n) becomes insignificant relative to g(n) as n approaches infinity.
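That last sentence — f(n) becomes insignificant relative to g(n) — is the same as saying the ratio f(n)/g(n) goes to 0. A quick numeric look at the two examples from the text: 2n = o(n²), but 2n² is not o(n²), because that ratio tends to 2 rather than 0.

```python
def f1(n):  # 2n
    return 2 * n

def f2(n):  # 2n^2
    return 2 * n * n

def g(n):   # n^2
    return n * n

for n in (10, 100, 1000, 10_000):
    print(n, f1(n) / g(n), f2(n) / g(n))
# The first ratio shrinks toward 0 as n grows; the second stays fixed at 2.0.
```

This limit view is a handy practical test: if f(n)/g(n) → 0, you have little o; if it settles at a positive constant, the bound is tight.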
We can also talk about the rate of growth of each function. Alright guys, if we look at this picture here, it shows the common big O running times ranked against each other.

We see that O(n!) is the worst, and the reason is that it grows very quickly — the running time grows way faster than the other functions, even faster than O(cⁿ). O(cⁿ) is an exponential function, like O(2ⁿ) or O(3ⁿ); brute-force algorithms often take exponential time, and it grows really fast.

Then look at O(n^c) — that's polynomial time. O(n^c) covers things like O(n²), O(n³), O(n⁴), and those usually come from a certain number of nested loops: O(n²) is like two nested for loops where you traverse all the way through both of them, O(n³) might be three nested loops, O(n⁴) four, and so on — not exponential, polynomial.

Then we have O(n log n), and that's the time complexity of good sorting algorithms like quicksort. Then we have O(n), which is linear.
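The nested-loop intuition above can be made concrete by counting basic operations instead of timing anything. These two toy functions are made up for illustration: one loop does n units of work, two nested loops over the full range do n·n.

```python
def linear_ops(n):
    """O(n): one pass, one for loop."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def quadratic_ops(n):
    """O(n^2): two nested for loops, traversing all the way through both."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

print(linear_ops(100), quadratic_ops(100))  # 100 10000
```

Double n and the linear count doubles, but the quadratic count quadruples — that difference in growth rate is what the chart is ranking.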
It takes one pass through the data — one for loop — and that's a good time complexity, very efficient as the input gets larger and larger. Then we have O(log n), which is also very good — better than O(n), because the time it takes barely grows as n approaches infinity. And then we have constant time, O(1), and that's the best. When we say constant time, we mean the algorithm runs in, say, five or six seconds no matter how big the input is — it will still run in five or six seconds. Some algorithms are like that: you turn them on and they're just immediately done, doing a fixed amount of work and using a fixed amount of space.

So yeah, those are all the graphs of the time complexities, and I've explained each of them. I hope you guys enjoyed this video. Come subscribe — I'll check you guys later!