In this section, we're going to be looking at instruction-level parallelism. Essentially, what we'll be trying to do is run multiple instructions concurrently — not just having them pipelined so that each instruction uses a different part of our hardware at a different time, but actually running at the same time. So we might have several instructions being fetched or decoded in the same clock cycle.

We're going to look at two different strategies for doing this: static and dynamic multiple issue. In a static multiple-issue processor, we have essentially longer instruction words. We break that long instruction word up into a series of issue slots, each of which can hold one instruction. In the extreme case, we have what is called a very long instruction word (VLIW) processor, where a large number of instructions are all packed into the same macro-instruction. In a dynamic multiple-issue processor, on the other hand, we keep the same instructions that we're used to, but the processor decides how many it should pull in and run in any given clock cycle. The main type of processor that implements dynamic multiple issue is called a superscalar processor, and primarily we use Tomasulo's algorithm to handle the scheduling in this type of architecture.

Both of these strategies are used in modern processors. Static multiple issue tends to be used more commonly in specialized areas; GPUs, for example, have favored static multiple issue. In contrast, general-purpose consumer CPUs tend to be superscalar processors, and they implement Tomasulo's algorithm. So in this section, we'll be walking through how all of these work, what their general concepts are, and we will start to dig in a little bit to how they work, but without focusing too much on any one given architecture.
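To make the static-versus-dynamic distinction concrete, here is a small toy sketch (not modeling any real ISA — the instruction tuples, function names, and the 2-wide issue width are all invented for illustration). In the static case, the compiler has already grouped instructions into fixed-width bundles, padding with NOPs where nothing fits; the hardware just executes one bundle per cycle. In the dynamic case, the hardware itself decides at runtime how many adjacent independent instructions it can issue together, stopping a group at the first read-after-write dependence:

```python
# Toy model: each "instruction" is a tuple (dest, src1, src2).
NOP = None

def has_hazard(a, b):
    """True if instruction b reads the value that instruction a writes (RAW)."""
    return a is not None and b is not None and a[0] in (b[1], b[2])

def static_issue(bundles):
    """Static multiple issue: the compiler fixed the 2-wide bundles ahead of
    time (NOPs included); the hardware makes no runtime decisions, so the
    cycle count is simply the number of bundles."""
    return len(bundles)

def dynamic_issue(instrs, width=2):
    """Dynamic multiple issue: hardware groups up to `width` adjacent
    independent instructions per cycle, cutting a group short at the
    first RAW hazard."""
    cycles = 0
    i = 0
    while i < len(instrs):
        issued = 1  # always issue at least one instruction per cycle
        while issued < width and i + issued < len(instrs):
            # the candidate must not depend on anything already in this group
            if any(has_hazard(instrs[i + k], instrs[i + issued])
                   for k in range(issued)):
                break
            issued += 1
        i += issued
        cycles += 1
    return cycles

prog = [
    ("t0", "a", "b"),    # t0 = a + b
    ("t1", "c", "d"),    # t1 = c + d   (independent: can pair with t0)
    ("t2", "t0", "t1"),  # t2 = t0 + t1 (depends on both: issues alone)
]

# The same program as compiler-built 2-wide bundles, padded with a NOP:
bundles = [[prog[0], prog[1]], [prog[2], NOP]]

print(static_issue(bundles))  # 2 cycles
print(dynamic_issue(prog))    # 2 cycles: {t0, t1} then {t2}
```

A fully dependent chain shows the cost of dependences under dynamic issue: three instructions that each read the previous result take three cycles, since no two can ever be grouped. Real superscalar hardware goes well beyond this sketch (out-of-order execution, register renaming), which is exactly what Tomasulo's algorithm addresses.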