Previously, we used image-based lighting to account for the ambient diffuse light from the environment. Now we're also accounting for the ambient specular light from the environment. And so you can see here that in the column on the left, on the spheres with lesser roughness, we get these strong reflections from the environment. As we move right, the spheres get rougher, and so the reflections diminish. Now, ideally, we would calculate the specular light coming from the environment in all directions from the point. But of course, we can't afford to do an infinite number of light calculations. For diffuse light, all that matters is the normal of the surface, so we constructed our convolved environment map, which, for a given normal vector, effectively accounts for all the diffuse light within the same hemisphere. The specular light calculation, however, depends not just on the normal, but also on the vector to the camera and the vector to the light source. And so when we construct another precomputed environment map for the specular light, it can only account for one combination of view vector and light vector. So we can't fully capture a precomputation of the specular light of our environment the same way we can for diffuse light. However, if we do the precomputation just for the case of the light vector and the view vector both matching the normal, then we can use this, as you can see, to get results that are still pretty damn good. I'm not going to explain all the math for this, which gets quite complicated, but I will show you the major pieces and how they fit together. The precomputation part is where things get complicated. How we then use that in real time to do our rendering is the easy part, so we'll start there.
In our physically-based fragment shader, the one we're using for the spheres, we're computing the ambient light for the spheres very much like we did in the prior example, except now for our Fresnel-Schlick approximation, it's this variant that also factors in roughness. What that looks like: where the original formula has the value 1, this modification instead has this expression. We take the max of a vec3 of 1 minus the roughness value and the F0 value, the base reflectivity; F0 acts as a floor. What this does, effectively, is that for greater values of roughness, we get a lesser Fresnel value, which makes sense because for rougher surfaces, you get less specular reflection. So that's our ambient term. Then for the specular, we're actually using two textures. The pre-filtered map is a cube map with multiple MIP levels. We sample from it at the reflected angle from the view of the camera, very much like we did when we used environment maps for reflections. This pre-filtered map is basically a reflection map. When you look at a shiny surface, the reflected light is coming from the reflected angle off that surface. And note here we're using textureLod, level of detail, where the third parameter, computed from the roughness, specifies the MIP level. The greater the roughness, the higher the MIP level. And as we'll see, the higher levels of the pre-filtered map are blurrier. So that gets us this pre-filtered color, but as we'll see, this is only part of the proper specular calculation. For the rest, we need two other values from this other precomputed texture, which we're calling the BRDF LUT, as in lookup texture. For a given N dot V and roughness value, we get a red channel value, which we multiply with the Fresnel value, and then we add to this product the value of the green channel. And to be clear, these values have nothing to do with red and green.
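The roughness-aware Fresnel-Schlick variant described above can be mirrored in Python (a sketch of the GLSL vec3 math, computed per color channel):

```python
def fresnel_schlick_roughness(cos_theta, f0, roughness):
    """Fresnel-Schlick approximation with a roughness-based ceiling.

    Where the original formula uses the constant 1.0, this variant
    uses max(1.0 - roughness, f0), so rougher surfaces produce a
    weaker Fresnel response. f0 is the base reflectivity per channel.
    """
    return tuple(
        f + (max(1.0 - roughness, f) - f) * (1.0 - cos_theta) ** 5
        for f in f0
    )

f0 = (0.04, 0.04, 0.04)  # typical base reflectivity for a dielectric
# At a grazing angle (cos_theta near 0), a perfectly smooth surface
# approaches full reflectance...
smooth_grazing = fresnel_schlick_roughness(0.0, f0, roughness=0.0)
# ...while a fully rough surface is clamped down to f0.
rough_grazing = fresnel_schlick_roughness(0.0, f0, roughness=1.0)
```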
We just happen to store them in the red and green channels of this lookup texture. So all of this is then multiplied with the pre-filtered color. That gets us our final specular term, which we then add to the diffuse and multiply with the ambient occlusion factor, and that gets us our ambient light. Now, to compute the two textures we need, the pre-filtered map and the BRDF lookup texture, we need to start with our formula for the Cook-Torrance BRDF, which looks like this. We're computing the function L sub o, the outgoing radiance, for a given point P and an output vector, which is the vector to the camera. That's omega sub o. It looks like a w, but it's actually a lowercase omega. And anyway, that is all equal to this integral. It's an integral because in this case we're computing the light coming from all directions, coming from the full environment. In the integral, we have two terms on the left: k sub d, that's the diffuse term, and then k sub s is our specular term, and for our purposes here, that's all we care about. So we're just going to discard the diffuse term, and then, just for notational purposes, we'll take the k sub s times DFG over 4 part and notate it as the result of a function f sub r, for inputs P, omega sub i, and omega sub o. Omega sub i is the input vector, the vector from the light. What we're also going to do is split the integral, as seen here, because it's useful to us if we can compute these things separately and multiply them back together later. So it's actually just the left part of the split integral that makes up our pre-filtered environment map. For a point P and an input vector, omega sub i, we have some input light color value. The function L sub i just gets us an input light color from omega sub i, the input light vector. But note, though, it's still an integral, because it sums all the light coming from all directions.
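The real-time half of this split-sum approach, as described above, boils down to a single multiply-add per fragment. Here's a Python sketch of that combination; the numeric inputs stand in for texture samples and are purely illustrative:

```python
def ambient_specular(prefiltered_color, fresnel, brdf_lut_rg):
    """Split-sum combination: the pre-filtered environment color
    (the left integral) times the BRDF integration term (the right
    integral). The LUT's red channel scales the Fresnel value and
    the green channel is added to that product."""
    scale, bias = brdf_lut_rg
    return tuple(
        pc * (f * scale + bias)
        for pc, f in zip(prefiltered_color, fresnel)
    )

# Hypothetical values standing in for the texture lookups:
spec = ambient_specular(
    prefiltered_color=(0.8, 0.7, 0.6),  # from the pre-filtered cube map
    fresnel=(0.04, 0.04, 0.04),         # from Fresnel-Schlick
    brdf_lut_rg=(0.9, 0.05),            # red and green of the BRDF LUT
)
```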
And so very much like with our diffuse map, we'll be sampling from many directions, but weighting the samples based on their angle, such that samples angled away from the normal contribute less. What's different this time is that we're rendering five different MIP levels for increasing degrees of roughness. The base MIP level is for the smoothest surfaces, and so samples angled away from the normal contribute very little; we effectively get very little blur. But then as we go up the MIP levels for increasing roughness, we want the blur to get stronger, which makes sense because smoother surfaces have more focused specular reflections; the rougher the surface, the less focused. As for the right side of the integral, that's what makes up the other lookup texture we need, which is called the BRDF integration map. Through a series of steps, which I gloss over here, the right integral here can be split into the sum of these two integrals. In the texture we're generating, the x-axis represents the dot product of n and v, the y-axis represents the roughness. We plug these two values into the formula here, and the integral value on the left we store in the red channel, the integral value on the right we store in the green channel. Again, to be clear, this has nothing to do with redness or greenness. It's just convenient to use a texture to store these values, and then when you visualize the results, it looks like what we see here. Handily, this part of the calculation is entirely independent of what the environment looks like. So we can generate this BRDF integration map once and use it for any and all environments. In the interest of time and simplicity, I'll skip over covering the actual code, and instead we'll just look at constructing the pre-filtered map. So our pre-filtered map texture is going to be a cube map. For the sake of precision, we make it 16-bit floating point rather than the default 8-bit, and the resolution is 128 by 128.
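Since the BRDF integration code is skipped above, here is a rough Python sketch of what computing one texel of that map typically looks like, following the standard split-sum formulation; the helper names and sample count are illustrative, not taken from the transcript's code:

```python
import math

SAMPLE_COUNT = 1024

def radical_inverse_vdc(bits, total_bits=32):
    """Van der Corput radical inverse via bit reversal."""
    result = 0
    for _ in range(total_bits):
        result = (result << 1) | (bits & 1)
        bits >>= 1
    return result / float(1 << total_bits)

def importance_sample_ggx(xi, roughness):
    """Halfway vector around the +Z normal, biased toward the GGX lobe."""
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_t = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def g_smith_ibl(n_dot_v, n_dot_l, roughness):
    """Smith geometry term with the IBL remapping k = a^2 / 2."""
    k = (roughness * roughness) / 2.0
    gv = n_dot_v / (n_dot_v * (1.0 - k) + k)
    gl = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return gv * gl

def integrate_brdf(n_dot_v, roughness):
    """One texel of the BRDF integration map: returns (scale, bias),
    the red and green channel values described above."""
    v = (math.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v)  # N = +Z
    a = b = 0.0
    for i in range(SAMPLE_COUNT):
        xi = (i / SAMPLE_COUNT, radical_inverse_vdc(i))
        h = importance_sample_ggx(xi, roughness)
        v_dot_h = sum(vi * hi for vi, hi in zip(v, h))
        # Reflect V about H to get the light direction L.
        l = tuple(2.0 * v_dot_h * hi - vi for vi, hi in zip(v, h))
        n_dot_l, n_dot_h = l[2], h[2]
        if n_dot_l > 0.0:
            g = g_smith_ibl(n_dot_v, n_dot_l, roughness)
            g_vis = (g * v_dot_h) / (n_dot_h * n_dot_v)
            fc = (1.0 - v_dot_h) ** 5
            a += (1.0 - fc) * g_vis  # red channel: Fresnel scale
            b += fc * g_vis         # green channel: Fresnel bias
    return a / SAMPLE_COUNT, b / SAMPLE_COUNT
```

Because nothing here touches the environment map, this table really can be computed once and reused everywhere.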
Because now we're actually dealing with multiple MIP levels, we want GL_LINEAR_MIPMAP_LINEAR filtering for minification. So now when we sample from the texture, we get an interpolation between two different MIP levels. We also call glGenerateMipmap so that the texture has the required memory allocated, and then we do the rendering to draw the actual texture. In this inner loop, we're looping six times to render once for each face, and in the outer loop, we're iterating over the MIP levels. So for each of the five MIP levels, we have to generate all six faces. So when we specify the color attachment we're rendering into, we're specifying which cube face we're rendering to and which MIP level, and in the shader, the roughness is passed through this uniform. So the vertex shader for this is nothing new. It's just a cube we render onto, like for any cube map. But looking at the fragment shader, the local position is going to be our normal value, and as we've said, we can't pre-compute the specular light for all possible views, so we just simplify and assume the view to be the same as the normal vector. We're going to be computing 1,024 samples in the hemisphere around the normal, and for each one there's going to be a weight value and a color that we compute, and in the end we take the total pre-filtered color and divide it by the total weight to get our output frag color. Now, when we got samples from the hemisphere for our diffuse light, our samples were just evenly spaced around the hemisphere. Our sample points all just lay on a spherical grid. We could do the same here, but whereas the ambient diffuse light map we generated is very low frequency, meaning it's quite blurry and there's not much detail, for specular light, our output is much higher frequency, and so instead of using evenly spaced samples, it's better if we add a bit of randomness, and that's what this ImportanceSampleGGX function is doing.
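The outer loop's per-mip roughness is typically derived from the mip index like this; the transcript doesn't show the exact mapping, so this is the usual linear choice, sketched in Python:

```python
MIP_LEVELS = 5  # base level plus four blurrier levels

def roughness_for_mip(mip, mip_levels=MIP_LEVELS):
    """Map each MIP level to a roughness in [0, 1]: the base level
    (mip 0) is perfectly smooth, the last level is fully rough."""
    return mip / (mip_levels - 1)

levels = [roughness_for_mip(m) for m in range(MIP_LEVELS)]
# → [0.0, 0.25, 0.5, 0.75, 1.0]
```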
Given the input normal and a roughness factor, it's returning sample halfway vectors that are random, but not too random. We're using here the Hammersley sequence, which is a so-called low-discrepancy sequence, and so instead of getting totally random sample points, like you see on the left here, we're getting sample points that are random but reasonably spread out and not too clustered, so we don't end up with large gaps in our sample pool, which would be non-ideal. Our sample-generating function, note, also takes in a roughness factor, because for high roughness factors we want sample points that are spread out wide, but for lower roughness factors we want them more clustered. In fact, for an idealized perfectly smooth surface, the specular reflections are all bouncing off in exactly the same direction, so you don't want any spread at all. So this gets us our random halfway vectors, and using the halfway vector, we compute the vector to the light by reflecting the reversed view vector about the halfway vector. We then compute N dot L and do a safety check to make sure that N dot L is greater than zero. I don't think there are cases where it would ever be equal to or less than zero, but just in case, we do the check. And then we add to the pre-filtered color by sampling from our environment map using the light vector, but scaled by N dot L, and N dot L effectively represents the weight this sample holds, so we add that to the total weight. I glossed over how these samples are actually generated. That's done in the function up here; I'll leave it to you to look at it. It uses this helper function as well, which is the random number generation used in the Hammersley sequence. This is how we generate the random samples. Now, actually, our solution has two problems. The first one is quite simple. In this image, instead of using the regular environment map as our skybox, we're rendering the lowest MIP level of our pre-filtered map as the skybox instead.
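The sample generation described above is easy to reproduce. Here's a Python sketch of the usual bit-reversal ("radical inverse") construction of the Hammersley sequence, plus the GGX importance sampling of halfway vectors; the tangent-space form assumes the normal is +Z, so no change of basis is needed:

```python
import math

def radical_inverse_vdc(bits, total_bits=32):
    """Van der Corput radical inverse: reverse the bits of the index
    and read the result as a fraction in [0, 1)."""
    result = 0
    for _ in range(total_bits):
        result = (result << 1) | (bits & 1)
        bits >>= 1
    return result / float(1 << total_bits)

def hammersley(i, n):
    """The i-th point of an n-point Hammersley set in the unit square."""
    return (i / n, radical_inverse_vdc(i))

def importance_sample_ggx(xi, roughness):
    """Halfway vector around the +Z normal, biased toward the GGX
    lobe: high roughness spreads the samples wide, roughness 0
    collapses them all onto the normal itself."""
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_t = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

# The second Hammersley coordinate visits 0, 1/2, 1/4, 3/4, ... —
# well spread out, never clustered.
points = [hammersley(i, 8) for i in range(8)]
```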
And you can see that the seams between the faces are quite visible. This happens because this MIP level is very low resolution and also uses a heavy blur, but OpenGL by default does not sample cube maps across faces, and so we're not getting the proper blur at the edges. We can fix this very simply by enabling GL_TEXTURE_CUBE_MAP_SEAMLESS, which tells OpenGL to interpolate across cube faces when it samples from cube maps. Another problem is that surrounding bright spots in the environment map, we may get this speckly dot pattern. To combat this artifact, we're going to use MIP levels for our environment map, and then when we generate the pre-filtered map, we use a formula based on roughness to determine which MIP level to sample from. So down here, we're sampling using textureLod from the environment map, specifying the MIP level, which is computed by this formula, which requires using the normal distribution function that we use in our physically based rendering. I'll let you work out the details, but this is all about calculating a MIP level to sample from. So this is how we get our properly constructed pre-filtered map, and so we can get very nice results like this example, where now we're bringing in the albedo and metallic and roughness and ambient occlusion from textures. For some of these materials, like this green grass ball or this brick ball, the specular reflections are not very evident, but on the yellow ball, obviously, they're very prominent, and on this orange ball, you can see, say here, that it's picking up that window over there. Even in cases where the effect is quite subtle, it really does ground the objects in the scene. I definitely glossed over a lot of details in this example, so I strongly recommend you take a look at the original article.
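For reference, the MIP-selection formula glossed over above typically compares the solid angle covered by one sample (from the GGX probability density) against the solid angle of one environment-map texel. Here's a hedged Python sketch of that common approach; the resolution and sample count are illustrative defaults, not values from the transcript's code:

```python
import math

def ndf_ggx(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution function."""
    a = roughness * roughness
    a2 = a * a
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def mip_for_sample(n_dot_h, h_dot_v, roughness,
                   resolution=512, sample_count=1024):
    """Pick an environment-map MIP level so each sample's footprint
    roughly matches one texel's solid angle, which hides the
    speckly bright-dot artifact."""
    if roughness == 0.0:
        return 0.0  # mirror reflection: sample the sharpest level
    d = ndf_ggx(n_dot_h, roughness)
    pdf = d * n_dot_h / (4.0 * h_dot_v) + 0.0001
    # Solid angle of one texel across the whole cube map's 6 faces:
    sa_texel = 4.0 * math.pi / (6.0 * resolution * resolution)
    # Solid angle covered by one of our importance samples:
    sa_sample = 1.0 / (sample_count * pdf + 0.0001)
    return max(0.5 * math.log2(sa_sample / sa_texel), 0.0)
```

Rougher surfaces get lower sample densities per direction, so their samples map to higher, blurrier MIP levels.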