This video is going to introduce Amdahl's law. We saw a preview of this in one of the earlier examples where, as the workload shifted from the CPU to memory, the additional computational units provided less and less of a benefit. Amdahl's law is really about the limit on the speed-up you can get by simply adding more computational units. Since most problems are not fully parallelizable, some portion of the work won't be improved by adding more processors or cores, and that portion sets a minimum execution time for solving the entire problem. The equation formalizes this relationship between speed-up and parallelization: if a is the fraction of the execution time that can be parallelized, there is always a 1 - a chunk of time spent on code that can't be, so there is a hard limit on how much improvement we can get on a task.

We can extend the idea beyond adding processors, though. Amdahl's law is just as applicable when you focus your optimization efforts on one limited part of a program. You may get great results at first by working on one problematic function, only to find diminishing returns as you keep improving it. Amdahl's law will also help us decide where to focus our efforts: we'll be able to see which parts of our computation consume the most resources and would benefit the most from further improvement.
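For reference, the equation the narration refers to is the standard form of Amdahl's law. With a as the parallelizable fraction of the execution time and n as the number of processing units, the overall speed-up is

    speedup(n) = 1 / ((1 - a) + a / n)

and as n grows without bound, the speed-up approaches 1 / (1 - a). As a worked example (numbers chosen here just for illustration): if a = 0.9, so 90% of the work can be parallelized, then 8 processors give a speed-up of 1 / (0.1 + 0.9/8) ≈ 4.7x, while even an unlimited number of processors can never do better than 1 / 0.1 = 10x.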