In our shader programs, we can have an optional geometry shader, which is executed after the vertex shader but before the fragment shader. If we also have an optional tessellation shader, that's executed in between the vertex shader and the geometry shader, so the geometry shader still runs after it. Whereas the vertex shader is executed for every vertex, the geometry shader is executed for every primitive. So in the case that we're rendering triangles, the geometry shader runs once per triangle. What we can do in the geometry shader is modify the vertices, or even remove them and add more vertices, add more geometry. This does make it possible to perform tessellation to a degree in a geometry shader, but geometry shaders aren't really appropriate for that task. Instead, for that purpose, you should use tessellation shaders, which we might cover in a later video. For your geometry shader, you have to declare what type of input primitive you're dealing with, whether points or lines or triangles, and you also have to declare what type the output will be, whether it's points, lines, or triangles. So your geometry shader can, say, take in points as primitives and spit out triangles, but having declared that it takes in points and spits out triangles, you can't in that shader then take in something other than points or spit out something other than triangles. That is fixed in the shader's layout declarations. In example 9.1, geometry shader houses, our geometry is actually just defined as four points, and the geometry shader takes those four points and generates triangles, producing the four house shapes you see here. So now in the code, we're dealing with three shader files; the third is the geometry shader here. And our vertex buffer data, this points array, is simply two-dimensional clip-space coordinates for the points, and each point has an associated color.
Top left will be red, top right will be blue, bottom right will be green, and the bottom left will be yellow. When this gets rendered, it's simply an ordinary call to draw arrays, like we've seen many times before, except note that this time the primitive type is GL_POINTS instead of triangles. Our vertex shader is not doing anything interesting: it simply passes along the color value into a color output and sets gl_Position to be the clip-space coordinate of our x and y. And our fragment shader likewise is not doing anything interesting; it's just taking an input color and setting that to our output frag color. But now we have this geometry shader in between. At the top, note that we've declared its input here with the word in: we're saying the shader expects points as input. And for the output, it's triangle strips with a max of five vertices. The idea of a triangle strip is that the first three vertices represent a triangle, but every additional vertex represents an additional triangle sharing an edge with the previous one. So if my triangle strip has three vertices, I just have one triangle. With four vertices, I would have two triangles, and with five vertices, I would have three triangles. So triangle strips are a way of representing multiple adjacent triangles using fewer vertices, rather than having to specify three for every single triangle. So here, with max vertices equaling five, we can have up to three triangles. If we produce only four vertices, we would have two. With three vertices, we'd have one. And if we produce fewer than three vertices, we won't have any triangle at all: no geometry is emitted, which is legal. Sometimes you want to discard geometry; maybe for some of these input points we don't want to produce any geometry, and that is an option. Anyway, you'll see that we'll be emitting an output color from this geometry shader, but as input, we're getting an array of vec3s.
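The declarations described above might look like this. This is a sketch, not the exact source from the video; the variable names vColor and fColor are assumptions.

```glsl
// Vertex shader: pass the color through; x and y are already clip-space coordinates.
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec3 aColor;

out vec3 vColor;

void main()
{
    vColor = aColor;
    gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0);
}
```

```glsl
// Geometry shader declarations: points in, triangle strips out, at most five vertices.
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 5) out;

in vec3 vColor[];   // an array: one entry per vertex of the input primitive
out vec3 fColor;    // the color passed on to the fragment shader
```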
This corresponds to the vec3 output of the vertex shader, where, note, it's not an array. It's received as an array input in the geometry shader because the geometry shader is run only once per primitive, and those primitives may be made up of multiple vertices. In this case, though, our inputs are points, each one just being a single vertex, and so when we index color, we're only going to access index zero. But if our inputs were, say, triangles, then we could also access indexes one and two of the input arrays, because in that case we'd be dealing with the outputs for three separate vertices. In this case, though, it's just the one. And to access the gl_Position emitted from the vertex shader, there's a special struct array called gl_in, which is always defined for us. We access its first element and get its gl_Position, and this is the gl_Position emitted from the vertex shader. So that position is our input point. Now, to produce a vertex, we call the EmitVertex function, but first we set gl_Position for that vertex, and also, for every other output, we set a value. So here we are emitting four vertices: four calls to EmitVertex. Each one of them has the same value for color, so this is only being set once here, but each of them has a different position, all of them offset from the input point to form a rectangle, the base of our house. And then the last vertex is going to be the point of our house's roof. For that, we set a point in the middle of the rectangle and above it, and we'll set this top vertex to be white. Now that we're done producing all the vertices of our output primitive, we call EndPrimitive. Let me quickly demonstrate again what the output looks like. We have these house shapes, each defined by a point at the center of its rectangle, and the top point of the roof, notice, is white. Now, what if we were to emit just four vertices? Here I'll rebuild and run the program.
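The body of the house geometry shader described above might look like the following sketch. The function name build_house and the exact offsets are assumptions; what matters is the pattern of setting outputs before each EmitVertex, then calling EndPrimitive once.

```glsl
void build_house(vec4 position)
{
    fColor = vColor[0];                                  // same base color for the first four vertices
    gl_Position = position + vec4(-0.2, -0.2, 0.0, 0.0); // bottom-left
    EmitVertex();
    gl_Position = position + vec4( 0.2, -0.2, 0.0, 0.0); // bottom-right
    EmitVertex();
    gl_Position = position + vec4(-0.2,  0.2, 0.0, 0.0); // top-left
    EmitVertex();
    gl_Position = position + vec4( 0.2,  0.2, 0.0, 0.0); // top-right
    EmitVertex();
    gl_Position = position + vec4( 0.0,  0.4, 0.0, 0.0); // roof tip, centered above the rectangle
    fColor = vec3(1.0, 1.0, 1.0);                        // the roof tip is white
    EmitVertex();
    EndPrimitive();                                      // one triangle strip: three triangles
}

void main()
{
    // Points in: only index 0 of gl_in is valid.
    build_house(gl_in[0].gl_Position);
}
```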
Now we don't have the top of our roof, but the geometry shader still runs; it's okay to emit fewer than the max of five. If instead we were to emit only two vertices, rebuild, and run the program, well, now we're not seeing anything, because the expected output is triangle strips, and two vertices don't make a complete triangle, so we see nothing. Example 9.2, geometry shader exploding, gives us something a little more interesting. We've loaded the NanoSuit model like we did before, but now in the geometry shader, given a time value, we are translating the triangles out along their own normal vectors and then translating them back. So they're just shifting in and out, each one along its own normal vector. And you can see here how many triangles there are in the hand as they fly apart; it's a bunch of little triangles there. What's going on in the C++ code here is nothing we haven't seen before: we're passing time in as a uniform to the shaders. In the vertex shader, the texture coordinate needs to be passed as an output to be received by the geometry shader. And for gl_Position, we're actually not transforming all the way into clip space; we're not factoring the projection matrix in here. We're only transforming into view space. We'll wait to factor in the projection in the geometry shader. As for the texture coordinates, we're getting them as an array, because remember, for the vertex shader it's one output, but in the geometry shader that becomes an array. In this case, because our inputs are triangles, this array will have three values, and our outputs in this case are also triangles. Notice it says triangle strip as output, but with max vertices of three, so it's either going to be one triangle as output or none. We can't simply say here that the output is triangles, because, for whatever reason, it was decided that there are only three valid kinds of output: points, line strips, and triangle strips.
So if you want to output triangles, you have to output triangle strips. Anyway, we're going to be emitting three vertices, and for the texture coordinates, we're just passing them along verbatim. As for the gl_Positions, first we need to compute the normal of the triangle, and this is done by computing the cross product of two edges of the triangle. Here, vector a is one edge and vector b is another. We compute the cross product and normalize, and that gets us our normal. Now, for each vertex, we call this explode function, passing in one of the vertices plus the normal. This function computes an offset vector from the normal, multiplied by the sine of time plus one, so that it's never a negative value, and then divided by two just to slow it down. Without this division, it was oscillating too fast, and I thought it was hard to look at, so I divided by two. We add the direction offset to the position and return that, so we get our modified vertex. But lastly, we need to take that and transform it from view space into clip space, so we multiply it by the projection matrix, which is passed in as one of the uniforms. That gets us the gl_Position for that vertex, and then we do the same for the other two vertices. So this is one potentially useful thing we can do in our geometry shader: we can compute the normals, which we can't do in the vertex shader, because computing the normal requires having all three vertices at once. Alternatively, we could just precompute all the normals and pass them into the vertex shader as vertex attributes, and so get the same effect without a geometry shader. But in some cases it's more convenient or more useful to compute the normals on the fly, and in those cases, we'll need a geometry shader.
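The exploding geometry shader described above might be sketched like this. The names vTexCoords, fTexCoords, GetNormal, and explode are assumptions; the uniforms time and projection correspond to what the transcript describes.

```glsl
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;   // one output triangle per input triangle

in vec2 vTexCoords[];   // three entries, one per input vertex
out vec2 fTexCoords;

uniform float time;
uniform mat4 projection;   // applied here, since the vertex shader stops at view space

vec3 GetNormal()
{
    // Two edges of the triangle; their cross product is the face normal.
    vec3 a = vec3(gl_in[0].gl_Position) - vec3(gl_in[1].gl_Position);
    vec3 b = vec3(gl_in[2].gl_Position) - vec3(gl_in[1].gl_Position);
    return normalize(cross(a, b));
}

vec4 explode(vec4 position, vec3 normal)
{
    // (sin(time) + 1.0) keeps the factor non-negative; / 2.0 slows the oscillation.
    vec3 direction = normal * (sin(time) + 1.0) / 2.0;
    return position + vec4(direction, 0.0);
}

void main()
{
    vec3 normal = GetNormal();
    for (int i = 0; i < 3; i++)
    {
        // Positions arrive in view space; move along the normal, then project.
        gl_Position = projection * explode(gl_in[i].gl_Position, normal);
        fTexCoords = vTexCoords[i];
        EmitVertex();
    }
    EndPrimitive();
}
```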
In example 9.3, geometry shader normals, we are rendering the NanoSuit as just a regular model, like we've done before, but then we're also rendering all the normals as these lines, using a separate set of shaders that includes a geometry shader. In the C++ code, we're just drawing the NanoSuit as usual, but then we're drawing the normals with a separate shader program. There's nothing new in the rendering of the NanoSuit itself, so we won't look at those shaders. Looking at the normal visualization shaders, first the vertex shader: we're taking as input the positions and normals of the NanoSuit triangles, just like we do for rendering the NanoSuit, and we're going to set gl_Position to be the clip-space coordinate for the vertices of the triangles, just like always. We're also outputting the normal transformed into clip space. We compute a normal matrix that'll transform our normals into view space, like we've seen before, using the inverse and transpose of the mat3 rather than the mat4. In this case, though, we want to output the normals in clip space, not view space, so we also apply the projection transform. Then, in the geometry shader, we're taking triangles as input, three vertices, but we're outputting lines, so we say here line strip, and we're outputting three of them, one for each of the vertices, so it's max vertices of six: each line is two vertices. The lines can be entirely disjoint; they don't have to be connected at all. So what we're going to do here is generate the three lines, one for each vertex, calling this generate line function and passing in an index. The first vertex of each line is just the gl_Position verbatim, but the second vertex is that same position offset by the normal multiplied by some magnitude, which here is just this constant of 0.2. And note that we call EndPrimitive for each pair of vertices. If instead we were to call it only at the very end, let me rebuild here, what we'll see are a bunch of connected lines
for each triangle, we'd get five line segments defined by six vertices, all of them connected to each other. Because that's not what we want, we should make sure to end the primitive after each pair. As for the fragment shader, we just want our lines to always be yellow, so we set frag color to yellow here. The last thing I'll say about geometry shaders is that they notoriously have performance problems, at least on NVIDIA and AMD hardware; Intel apparently has some advantages there, even though otherwise their GPUs are far weaker. So it's commonly recommended to avoid geometry shaders unless you're doing something very specialized, and in the context of, say, a demanding triple-A game, they're used quite rarely because of these performance issues. Tessellation shaders, on the other hand, perform much better, and so in some cases those are used more commonly for similar purposes. We'll look at tessellation shaders in a later video.
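The normal-visualization geometry shader described above might look like the following sketch. The names vNormal, MAGNITUDE, and GenerateLine are assumptions; the key detail is the EndPrimitive call inside the helper, so each pair of vertices forms its own separate line.

```glsl
#version 330 core
layout (triangles) in;
layout (line_strip, max_vertices = 6) out;   // three lines, two vertices each

in vec3 vNormal[];   // clip-space normals from the vertex shader

const float MAGNITUDE = 0.2;   // length of the drawn normal lines

void GenerateLine(int index)
{
    gl_Position = gl_in[index].gl_Position;               // line start: the vertex itself
    EmitVertex();
    gl_Position = gl_in[index].gl_Position
                + vec4(vNormal[index], 0.0) * MAGNITUDE;  // line end: offset along the normal
    EmitVertex();
    EndPrimitive();   // end each line here, or all six vertices join into one strip
}

void main()
{
    GenerateLine(0);
    GenerateLine(1);
    GenerateLine(2);
}
```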