The idea of normal mapping is that we define an arbitrary normal for every point of our surface using a texture map. These normal vectors are typically encoded in the texture such that the r-component corresponds to the x-axis of the vector, which is the same as the u-axis of our texture. The g-component stores the y-value, corresponding to the v-axis of our texture. And the b-component stores the z-axis, which is perpendicular to the u-v plane, so from our perspective, looking at the texture, it's pointing towards us. As usual with RGB values, the numbers run in the range of 0 to 1, but because we want the x, y, and z values to potentially be negative, we split the range such that 0 is stored as 0.5, positive values are greater than 0.5, and negative values are less than 0.5. So this means, say, that positive 1 is stored as 1, but negative 1 is stored as 0.

The net effect is that when normal maps are displayed as images, like we see here, they tend to be this bluish-purple color, because in a typical normal map, normals with the x, y, z values of 0, 0, 1, meaning normals pointing straight out, tend to predominate, and 0, 0, 1 gets encoded as 0.5, 0.5, 1, which is this bluish-purple color. Vectors pointing to the right have a larger red component, and so they look reddish. Vectors pointing up, with a large y value, look greenish. Normals pointing up and to the right look yellowish, because R and G together make yellow, and so forth.

Anyway, having defined a normal map, we can get results like this. Here I can hit space to toggle it off and on: this is without normal mapping, and this is with it on. You get the illusion of more 3D-ness than is actually there. This is really just a flat triangle, but you get some illusion of depth thanks to the normal mapping.

The question when rendering with a normal map is: what coordinate system are the normals relative to? A vector only has meaning within a coordinate system. Well, if for each point on the surface we define a normal, and also a tangent and bitangent of that normal, together forming three orthogonal axes, then we have a coordinate space that gives meaning to the normal that we read from the texture. So the question is, how do we select these three axes? The normal, most obviously, can just be perpendicular to the surface of the triangle, but the tangent and bitangent, less obviously, we generally want to run in the same directions as the U and V axes. So in this simple case, our UV coords are (0, 0), (0, 1), and (1, 0), and the tangent vector runs along the U axis while the bitangent runs along the V axis. Note this means that the normal, tangent, and bitangent are the same for all three vertices, and in fact for all points on the surface of the triangle.

What about, though, the cases where our vertex normals are not perpendicular to the surface? As we've seen, in some cases we define our vertex normals to not be the same as our face normals. Instead, we sometimes average a vertex's normal between the triangles that share that vertex, so as to effectively create smooth shading across the surfaces. The problem here, though, is that if the normal is not perpendicular to the surface, then the normal cannot be orthogonal to the tangent and bitangent if they respectively lie along the U and V axes.
We can also have a problem if the U and V coordinates map in a skewed fashion onto the texture, like seen here in the bottom right. If we define our tangents and bitangents to lie along the U and V axes, then in these cases they're not going to be perpendicular to each other.

The simple solution to both of these problems is to just disregard them: to only use normal mapping in cases where the vertex normals correspond to the face normals, and where the U and V coordinates of our 3D triangle do not correspond to a skewed grid on the texture. In the example we just saw, this was the case: the vertex normals of the triangle all corresponded to the face normal of the triangle, and the UV coords were unskewed. For now, we'll put aside these other possibilities and revisit them at the end.

So to apply normal mapping, we're going to need to define a normal, tangent, and bitangent for each of our vertices. Finding the normal of a surface is something we've already discussed: you find the cross product between two edges of your triangle. But then, given that normal, we need to find the perpendicular tangent that runs along the U axis, and likewise the bitangent that runs along the V axis. To do this, we can observe that in UV space, two adjacent edge vectors of the triangle, here called edge 1 and edge 2, correspond to the sum of T and B multiplied by deltas of U and V. Here, for example, edge 2 is the vector from P2 to P3, and if we multiply unit vector T by U3 minus U2 and add that to unit vector B multiplied by V3 minus V2, that gets us E2. And it's the same deal for E1, the vector from P2 to P1, using the deltas between P2 and P1. So knowing all these values in UV space, if we solve for unit vector T, we should get (1, 0), and if we solve for unit vector B, we should get (0, 1). That's not very helpful until we realize that these ratios should also hold for the 3D triangle defined in local space. So if we use E1 and E2 of local space, but use the same delta values, then we can solve for vectors T and B in local space. The T and B vectors in local space that we solve for won't be unit vectors, but they will point in the right directions, and so we can just normalize them.

So if we go through a few steps of proof, we end up here at the bottom with a formula involving matrices, and then translating this formula into code, we can find the tangent and bitangent. We compute the two edges by subtracting two of the vertices, and likewise find the delta UVs of the two edges. F here is the non-matrix part we saw in the formula, and if we expand out the matrix multiplication, we can find the x, y, and z of the tangent and the x, y, and z of the bitangent, and we normalize both at the end, because again, they're not necessarily going to come out as unit vectors.

And as we established, we're considering only the simple case where the vertex normals correspond to the face normals and our UV coordinates are not skewed, and so the normal, tangent, and bitangent are all going to be the same for every vertex of a triangle. So here, when we define the attributes for each of the vertices of our triangle, the position values are different, the texture coords are different, but the normals, tangents, and bitangents are the same for each of the three vertices.
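Solving that pair of edge equations, E1 = ΔU1·T + ΔV1·B and E2 = ΔU2·T + ΔV2·B, for T and B gives T = f·(ΔV2·E1 − ΔV1·E2) and B = f·(−ΔU2·E1 + ΔU1·E2), where f = 1/(ΔU1·ΔV2 − ΔU2·ΔV1) is that non-matrix part. Here's the computation as a minimal sketch, written in GLSL-style syntax for consistency with the shaders coming up; the function and variable names are my own, and in practice this math would typically run once per triangle on the CPU at mesh-build time:

```glsl
// Derive the tangent and bitangent of one triangle from its positions
// and UVs. P2 is the shared corner of the two edges, matching the
// E1/E2 setup described above.
void computeTangentBitangent(vec3 p1, vec3 p2, vec3 p3,
                             vec2 uv1, vec2 uv2, vec2 uv3,
                             out vec3 tangent, out vec3 bitangent)
{
    vec3 edge1 = p1 - p2;   // E1: vector from P2 to P1
    vec3 edge2 = p3 - p2;   // E2: vector from P2 to P3
    vec2 dUV1  = uv1 - uv2; // (deltaU1, deltaV1)
    vec2 dUV2  = uv3 - uv2; // (deltaU2, deltaV2)

    // f is the non-matrix part of the formula: one over the
    // determinant of the 2x2 delta-UV matrix.
    float f = 1.0 / (dUV1.x * dUV2.y - dUV2.x * dUV1.y);

    // The expanded matrix multiplication, normalized at the end,
    // because the solved vectors point the right way but aren't
    // generally unit length.
    tangent   = normalize(f * ( dUV2.y * edge1 - dUV1.y * edge2));
    bitangent = normalize(f * (-dUV2.x * edge1 + dUV1.x * edge2));
}
```

The face normal itself comes from the cross product of the two edges, as discussed earlier (flipped or not depending on your winding order), and in our simple case all three vertices of the triangle share the resulting normal, tangent, and bitangent.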
Once we have the normal, tangent, and bitangent for a point on the surface, we can read a normal value from the normal map and rotate it into the coordinate system of those three axes. Our axes are originally defined in local space, and we're going to want a rotation matrix that transforms the normal vectors of the normal map from the so-called local tangent space that they're defined in into world space, as oriented by our normal, tangent, and bitangent. Handily, it turns out that we can construct a rotation matrix that gets us into a rotated space by taking the three unit vectors of that space and simply plugging their x, y, z values into a three-by-three matrix. Whichever of the axes is meant to be the new x-axis plugs into the left column, whichever is meant to be the new y-axis plugs into the middle column, and whichever is meant to be the new z-axis plugs into the right column. In this use case, the tangent is meant to be the x-axis, the bitangent is meant to be the y-axis, and the normal is meant to be the z-axis. So once we have our T, B, and N defined in world space, we plug them into a matrix, and that gets us a rotation matrix that transforms normals of the normal map from their local tangent space into world space.

So what we'll want to do in the vertex shader is output a so-called TBN matrix that we can then use in the fragment shader. We take in the normal, tangent, and bitangent as input to the vertex shader, and they need to get transformed into world space. To transform vectors from local space into world space, as we've seen before, we take our model matrix, reduce it down to a mat3 to effectively strip out any translation, and, to properly handle non-uniform scaling, apply the inverse and then the transpose, getting us a corrected model matrix. We then apply that to our tangent, bitangent, and normal, normalizing each of them, and then we can just plug them into a 3x3 matrix, which we call TBN. Note also that we're outputting the vertex normal here, but that's only used when we toggle off normal mapping. If we're doing normal mapping, we only need TBN.

Looking now at the fragment shader: when we compute the diffuse, ambient, and specular, we do so just like we've always seen before. The only question is how the normal gets computed. We have a uniform boolean for the normal map which, when false, disables normal mapping, and so we just use the normal supplied by the vertex. But when true, we read the normal value from the normal map. This value, recall, is encoded in the range 0 to 1, but we want to rescale it to the range negative 1 to positive 1, so we multiply by 2 and subtract 1. Lastly, we apply the TBN rotation matrix, getting us from local tangent space into world space, and we normalize to make sure it's a unit vector. In this scenario, it should already be a unit vector, but because of imprecision it may drift a bit, so it's generally better to renormalize.
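Here's a sketch of what that vertex shader might look like. The attribute, uniform, and output names are illustrative, not necessarily the ones used in the demo:

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
layout (location = 3) in vec3 aTangent;
layout (location = 4) in vec3 aBitangent;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec2 texCoord;
out vec3 fragPos;
out vec3 vertNormal; // only used when normal mapping is toggled off
out mat3 TBN;

void main()
{
    // mat3(model) strips the translation; inverse then transpose
    // corrects for any non-uniform scaling in the model matrix.
    mat3 normalMatrix = transpose(inverse(mat3(model)));

    vec3 T = normalize(normalMatrix * aTangent);
    vec3 B = normalize(normalMatrix * aBitangent);
    vec3 N = normalize(normalMatrix * aNormal);

    // Columns: tangent as the new x-axis, bitangent as the new y-axis,
    // normal as the new z-axis.
    TBN = mat3(T, B, N);

    vertNormal  = N;
    texCoord    = aTexCoord;
    fragPos     = vec3(model * vec4(aPos, 1.0));
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```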
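And a sketch of the corresponding normal selection in the fragment shader. The actual lighting math is the usual diffuse/ambient/specular computation and is elided here; again, the names are illustrative:

```glsl
#version 330 core
uniform sampler2D normalMap;  // the normal map texture
uniform bool useNormalMap;    // false: fall back to vertex normals

in vec2 texCoord;
in vec3 vertNormal;
in mat3 TBN;

out vec4 fragColor;

void main()
{
    vec3 normal;
    if (useNormalMap) {
        // Rescale from the stored [0, 1] range back to [-1, 1]...
        normal = texture(normalMap, texCoord).rgb * 2.0 - 1.0;
        // ...rotate from local tangent space into world space, and
        // renormalize to guard against precision drift.
        normal = normalize(TBN * normal);
    } else {
        // Interpolated vertex normal; renormalize after interpolation.
        normal = normalize(vertNormal);
    }
    // ...use 'normal' in the usual diffuse/ambient/specular math...
    fragColor = vec4(normal * 0.5 + 0.5, 1.0); // placeholder output
}
```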
Anyway, now we have our normal from the normal map, and so when we do our lighting calculation, it's not a normal interpolated between the vertex normals, it's an arbitrary normal from our normal map. And that's how we get normal mapping, which gives us the illusion of complex geometric detail on a surface that is otherwise really just flat; thanks to the lighting calculations from the normals, it looks much more complex than it actually is. As you can tell when I disable normal mapping here, the crevices of the bricks look much, much flatter.

Now, the process I just showed you works, but to cut down on the amount of work done in the fragment shader, which again is typically the biggest performance bottleneck in rendering, we can use a little trick. Instead of transforming the normals of the normal map from their local tangent space into world space, what if instead our other vectors, the light position, frag position, and view position, were transformed from world space into the local tangent space? If we invert our transformation in the vertex shader, then here in the fragment shader, our normal doesn't have to be multiplied by the TBN matrix, because it's already in the same space as the other vectors. So in fact, we don't pass the TBN matrix into the fragment shader at all. Instead, in the vertex shader, when we create our TBN matrix, we take its inverse, which for the special case of an orthogonal matrix we can compute with a transpose instead, which is cheaper. And so TBN here is now the rotation matrix that gets us from world space to local tangent space, whereas before it was the other way around. And so our outputs from the vertex shader are the frag position, light position, and view position, rotated into the local tangent space, and those are the outputs passed along to the fragment shader. But in the end, we get the same result. (A sketch of this variant appears at the end of this section.)

Again, do understand that we're accounting here only for the cases where our vertex normals correspond to our face normals, and our UV coords avoid any skew. The question, then, is: does our process still work in cases where the normal, tangent, and bitangent are not actually orthogonal, and if not, can we somehow tweak our process to accommodate such cases? Well, honestly, the limited information I've seen regarding this topic seems to be quite mixed. Some sources suggest that we should simply avoid such cases; other sources imply otherwise, and I really cannot adjudicate between them.

For the case of skewed UV coords, that can always just be avoided if you author your models carefully, except then there is the problem that, with animation, triangles end up stretched, in which case you're going to get a skewed UV mapping. As for vertex normals, if we're arbitrarily defining a normal for each point anyway, then I'm not sure why we need averaged vertex normals to get smoothing over the triangle edges. But then it does seem non-ideal that we can't share vertices between adjacent triangles, because they can't have the same normals, tangents, and bitangents. It also seems quite non-optimal to have the same normal, tangent, and bitangent repeated three times for each triangle. So again, the question is: will our process work out if, for the three different vertices of our triangle, instead of using the same normals, tangents, and bitangents, they each have their own? Do we then still get correct results? Again, unfortunately, I've gotten mixed information from different sources, so I can't really give you a good answer.
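And finally, here's the sketch promised above of the tangent-space variant of the vertex shader. Again, the names are illustrative; the point is that the transposed TBN rotates world-space positions into the local tangent space, so the fragment shader can use the sampled and rescaled normal directly, with no per-fragment matrix multiply:

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
layout (location = 3) in vec3 aTangent;
layout (location = 4) in vec3 aBitangent;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos; // world space
uniform vec3 viewPos;  // world space

out vec2 texCoord;
out vec3 tangentLightPos;
out vec3 tangentViewPos;
out vec3 tangentFragPos;

void main()
{
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * aTangent);
    vec3 B = normalize(normalMatrix * aBitangent);
    vec3 N = normalize(normalMatrix * aNormal);

    // For an orthogonal matrix the transpose equals the inverse, and
    // the transpose is far cheaper to compute. This TBN therefore
    // rotates from world space INTO the local tangent space.
    mat3 TBN = transpose(mat3(T, B, N));

    vec3 fragPos    = vec3(model * vec4(aPos, 1.0));
    tangentLightPos = TBN * lightPos;
    tangentViewPos  = TBN * viewPos;
    tangentFragPos  = TBN * fragPos;

    texCoord    = aTexCoord;
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```

The fragment shader then does its diffuse, ambient, and specular math against tangentLightPos, tangentViewPos, and tangentFragPos, using the rescaled normal from the normal map as-is.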