In this video, I'm going to cover graphics pipelines. What is a graphics pipeline? The graphics pipeline is an important concept in computer graphics: it refers to the sequence of stages used to turn the instructions on a computer into graphics on a screen. Each API has its own pipeline, but the stages used are similar, as are the final results. So in order to explore the different stages within a pipeline, we'll take a look at DirectX 11.

Here's a simplified flow chart of the stages of the pipeline. As you can see, the different stages are the input assembler, vertex shader, hull shader, tessellator, domain shader, geometry shader, rasterizer, pixel shader, and output merger. I'll now give you a little bit of explanation of what each of these stages is doing.

The first stage in the pipeline is the input assembler. This can be thought of as the building-block stage: here, the geometry is assembled so that it can be rendered out later in the process.

The second stage is the vertex shader. Here, code is run that operates on each of the vertices that have come from the input assembler stage. This generally means applying transformations, such as moving the vertices from model space towards screen space.

The third, fourth, and fifth stages (the hull shader, tessellator, and domain shader) are optional and deal with tessellation. Hardware tessellation is the process of taking the original geometry and increasing or decreasing the level of detail by adding or removing faces. This means that if the hardware is powerful enough, shapes can be made smoother and more detailed in real time. Here's an example of tessellation in use: in the image on the left, you can see that tessellation is active and the roof tiles are much more detailed, but in the image on the right, without tessellation, everything's a little flatter.

The sixth stage is the geometry shader, and this is also an optional shader stage. Geometry shaders operate on entire primitives, such as triangles, and not just on vertices like with the vertex shader.
At the geometry shader stage, geometry can be created or destroyed as needed, depending on the effect the developer is trying to create. Uses here include the generation of particles to create effects such as rain or explosions.

The seventh stage, the rasterizer, determines which pixels are visible through clipping and culling geometry. Culling is where any geometry that does not fall within the volume the camera can see, known as the view frustum, is discarded. Clipping is where any geometry that lies partly outside the view frustum is clipped and reshaped with new triangles. This stage then sets up the pixel shaders and sorts out how they will be applied.

The eighth stage is the pixel shader stage. Here the geometry from the previous stages is shaded in to colour the shapes. The values being shaded are often referred to as fragments rather than pixels, because pixels are strictly what is output to the screen, and not every fragment will make it there.

The output merger is the final stage in the pipeline, and this is where everything from the previous stages comes together. The final image is built using the data calculated in the earlier steps and is then sent on to the screen.

The staggering thing about this complicated process is that if a game is running at 30 frames per second, then the data needs to be sent through the pipeline 30 times every second in order to achieve the smoothness of play that we've come to expect.

Okay, so that brings us to the end of another video. Again, I hope this has been useful. If it has, that's what the thumbs up button is for, and subscribe if you'd like to see more of this kind of thing. Okay, that's me done for another one. I'll see you again. See you next time.